test_: Code Migration from status-cli-tests

author shashankshampi <shashank.sanket1995@gmail.com> 2024-10-24 19:59:15 +05:30
committer shashankshampi <shashank.sanket1995@gmail.com> 2024-10-30 13:15:50 +05:30

test: Code Migration from status-cli-tests
fix_: functional tests (#5979)

* fix_: generate on test-functional

* chore(test)_: fix functional test assertion

---------

Co-authored-by: Siddarth Kumar <siddarthkay@gmail.com>

feat(accounts)_: cherry-pick Persist acceptance of Terms of Use & Privacy policy (#5766) (#5977)

* feat(accounts)_: Persist acceptance of Terms of Use & Privacy policy (#5766)

The original GH issue https://github.com/status-im/status-mobile/issues/21113
came from a request from the Legal team. We must show Status v1 users the new
terms (Terms of Use & Privacy Policy) right after they upgrade to Status v2
from the app stores.

The solution is to add a flag named hasAcceptedTerms to the accounts table.
The flag is set to true on the first account ever created in v2, and we provide
a native call in mobile/status.go#AcceptTerms, which lets the client persist
the user's choice when upgrading (from v1 to v2, or from a v2 build older than
this PR).

This solution is not ideal because the setting should live in a separate
table, not in the accounts table.

Related Mobile PR https://github.com/status-im/status-mobile/pull/21124
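
For illustration, a minimal, self-contained sketch of the call flow behind the native AcceptTerms binding; the backend type, helper names, and JSON shape here are stand-ins for the real code in mobile/status.go, not the actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// backend and makeJSONResponse are hypothetical stand-ins for the real status
// backend and the mobile package's response helper.
type backend struct{}

func (b *backend) AcceptTerms() error { return nil }

func makeJSONResponse(err error) string {
	type apiResponse struct {
		Error string `json:"error"`
	}
	resp := apiResponse{}
	if err != nil {
		resp.Error = err.Error()
	}
	out, _ := json.Marshal(resp)
	return string(out)
}

var statusBackend = &backend{}

// AcceptTerms is the shape of the native call exposed to clients: persist the
// user's acceptance and return a JSON-encoded result.
func AcceptTerms() string {
	return makeJSONResponse(statusBackend.AcceptTerms())
}

func main() { fmt.Println(AcceptTerms()) }
```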

* fix(test)_: Compare addresses using uppercased strings

---------

Co-authored-by: Icaro Motta <icaro.ldm@gmail.com>

test_: restore account (#5960)

feat_: `LogOnPanic` linter (#5969)

* feat_: LogOnPanic linter

* fix_: add missing defer LogOnPanic

* chore_: make vendor

* fix_: tests, address pr comments

* fix_: address pr comments
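
For context, the goroutine pattern the `LogOnPanic` linter enforces looks roughly like this; the recovery helper is stubbed here so the sketch is self-contained (in status-go it is `common.LogOnPanic`):

```go
package main

import "log"

// logOnPanic stands in for status-go's common.LogOnPanic: recover from a
// panic and log it instead of letting the goroutine kill the process.
func logOnPanic() {
	if r := recover(); r != nil {
		log.Printf("panic in goroutine: %v", r)
	}
}

// The linter flags `go` statements whose body does not start with the
// deferred recovery call.
func startWorker(work func()) {
	go func() {
		defer logOnPanic() // in status-go: defer common.LogOnPanic()
		work()
	}()
}

func main() {
	done := make(chan struct{})
	startWorker(func() { defer close(done); panic("boom") })
	<-done
}
```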

fix(ci)_: remove workspace and tmp dir

This ensures we do not encounter weird errors like:
```
+ ln -s /home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907 /home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907@tmp/go/src/github.com/status-im/status-go
ln: failed to create symbolic link '/home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907@tmp/go/src/github.com/status-im/status-go': File exists
script returned exit code 1
```

Signed-off-by: Jakub Sokołowski <jakub@status.im>

chore_: enable windows and macos CI build (#5840)

- Added support for Windows and macOS in CI pipelines
- Added missing dependencies for Windows and x86-64-darwin
- Resolved macOS SDK version compatibility for darwin-x86_64

The `mkShell` override was necessary to ensure compatibility with the newer
macOS SDK (version 11.0) for x86_64. The default SDK (10.12) was causing build
failures because of missing libraries and frameworks. `overrideSDK` creates a
mapping from the default SDK in all package categories to the requested SDK (11.0).

fix(contacts)_: fix trust status not being saved to cache when changed (#5965)

Fixes https://github.com/status-im/status-desktop/issues/16392

cleanup

added logger and cleanup

review comments changes

test_: remove port bind

chore(wallet)_: move route execution code to separate module

chore_: replace geth logger with zap logger (#5962)

closes: #6002
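
The mechanical shape of this migration recurs throughout the diff below: go-ethereum's key/value logger is replaced by an injected zap logger with typed fields. An illustrative, self-contained snippet (names and values are placeholders):

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewDevelopment()
	defer func() { _ = logger.Sync() }()

	err := errors.New("example failure")

	// Before (go-ethereum's log package):
	//   log.Error("failed to initialize db", "err", err)
	// After (zap, with typed fields):
	logger.Error("failed to initialize db", zap.Error(err))
	logger.Info("status-go version details",
		zap.String("version", "v0.x.y"),
		zap.String("commit", "abcdef0"))
}
```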

feat(telemetry)_: add metrics for message reliability (#5899)

* feat(telemetry)_: track message reliability

Add metrics for dial errors, missed messages,
missed relevant messages, and confirmed delivery.

* fix_: handle error from json marshal

chore_: use zap logger as request logger

iterates: status-im/status-desktop#16536

test_: unique project per run

test_: use docker compose v2, more concrete project name

fix(codecov)_: ignore folders without tests

Otherwise Codecov reports incorrect numbers when making changes.
https://docs.codecov.com/docs/ignoring-paths

Signed-off-by: Jakub Sokołowski <jakub@status.im>

test_: verify schema of signals during init; fix schema verification warnings (#5947)

fix_: update defaultGorushURL (#6011)

fix(tests)_: use non-standard port to avoid conflicts

We have observed `nimbus-eth2` build failures reporting this port:
```json
{
  "lvl": "NTC",
  "ts": "2024-10-28 13:51:32.308+00:00",
  "msg": "REST HTTP server could not be started",
  "topics": "beacnde",
  "address": "127.0.0.1:5432",
  "reason": "(98) Address already in use"
}
```
https://ci.status.im/job/nimbus-eth2/job/platforms/job/linux/job/x86_64/job/main/job/PR-6683/3/

Signed-off-by: Jakub Sokołowski <jakub@status.im>

fix_: create request logger ad-hoc in tests

Fixes `TestCall` failing when run concurrently.
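
A minimal sketch of the idea, assuming a zaptest-based logger: each test builds its own request logger instead of mutating shared global state, so concurrently running tests no longer race on it.

```go
package rpc_test

import (
	"testing"

	"go.uber.org/zap/zaptest"
)

// Sketch only: the logger is created ad-hoc inside the test, scoped to t,
// rather than configured once globally for the whole package.
func TestCallWithAdHocRequestLogger(t *testing.T) {
	t.Parallel()

	requestLogger := zaptest.NewLogger(t)
	requestLogger.Info("request logger created for this test only")
	// ... exercise the RPC call path with requestLogger injected ...
}
```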

chore_: configure codecov (#6005)

* chore_: configure codecov

* fix_: after_n_builds

Commit 14dcd29eee (parent 3179532b64) by shashankshampi, 2024-10-24 19:59:15 +05:30
403 changed files with 33057 additions and 2253 deletions

View File

@@ -4,30 +4,38 @@
 codecov:
   require_ci_to_pass: false
   notify:
-    wait_for_ci: true
+    wait_for_ci: false
+    after_n_builds: 2
+ignore:
+  - "_.*"
+  - "vendor"
+  - "scripts"
+  - "contracts"
+  - "Makefile"
 coverage:
   status:
     project:
+      default:
+        informational: true
       unit-tests:
         target: auto
+        threshold: 1
         flags:
           - unit
       functional-tests:
+        threshold: 0.1
         target: auto
         flags:
           - functional
     patch:
       default:
-        informational: true
+        target: 50
       unit-tests:
-        target: auto
+        informational: true
         flags:
           - unit
       functional-tests:
-        target: auto
+        informational: true
         flags:
           - functional
@@ -39,7 +47,7 @@ flags:
   functional-tests:
     paths:
       - ".*"
-    carryforward: true
+    carryforward: false
 comment:
   behavior: default

View File

@@ -193,6 +193,11 @@ statusgo-cross: statusgo-android statusgo-ios
 	@echo "Full cross compilation done."
 	@ls -ld build/bin/statusgo-*
+status-go-deps:
+	go install go.uber.org/mock/mockgen@v0.4.0
+	go install github.com/kevinburke/go-bindata/v4/...@v4.0.2
+	go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.1
 statusgo-android: generate
 statusgo-android: ##@cross-compile Build status-go for Android
 	@echo "Building status-go for Android..."
@@ -398,6 +403,7 @@ test-e2e: ##@tests Run e2e tests
 test-e2e-race: export GOTEST_EXTRAFLAGS=-race
 test-e2e-race: test-e2e ##@tests Run e2e tests with -race flag
+test-functional: generate
 test-functional: export FUNCTIONAL_TESTS_DOCKER_UID ?= $(call sh, id -u)
 test-functional: export FUNCTIONAL_TESTS_REPORT_CODECOV ?= false
 test-functional:
@@ -407,7 +413,10 @@ canary-test: node-canary
 	# TODO: uncomment that!
 	#_assets/scripts/canary_test_mailservers.sh ./config/cli/fleet-eth.prod.json
-lint: generate
+lint-panics: generate
+	go run ./cmd/lint-panics -root="$(call sh, pwd)" -skip=./cmd -test=false ./...
+lint: generate lint-panics
 	golangci-lint run ./...
 ci: generate lint canary-test test-unit test-e2e ##@tests Run all linters and tests at once

View File

@@ -1,5 +1,5 @@
 #!/usr/bin/env groovy
-library 'status-jenkins-lib@v1.9.6'
+library 'status-jenkins-lib@v1.9.12'
 pipeline {
   agent { label 'linux' }
@@ -52,6 +52,12 @@ pipeline {
     stage('Linux') { steps { script {
       linux = jenkins.Build('status-go/platforms/linux')
     } } }
+    stage('MacOS') { steps { script {
+      linux = jenkins.Build('status-go/platforms/macos')
+    } } }
+    stage('Windows') { steps { script {
+      linux = jenkins.Build('status-go/platforms/windows')
+    } } }
     stage('Docker') { steps { script {
       dock = jenkins.Build('status-go/platforms/docker')
     } } }

View File

@@ -85,6 +85,9 @@ pipeline {
   post {
     success { script { github.notifyPR(true) } }
     failure { script { github.notifyPR(false) } }
-    cleanup { sh 'make deep-clean' }
+    cleanup {
+      cleanWs()
+      dir("${env.WORKSPACE}@tmp") { deleteDir() }
+    }
   } // post
 } // pipeline

View File

@@ -0,0 +1,166 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.12'
pipeline {
/* This way we run the same Jenkinsfile on different platforms. */
agent { label "${params.AGENT_LABEL}" }
parameters {
string(
name: 'BRANCH',
defaultValue: 'develop',
description: 'Name of branch to build.'
)
string(
name: 'AGENT_LABEL',
description: 'Label for targeted CI slave host.',
defaultValue: params.AGENT_LABEL ?: getAgentLabel(),
)
booleanParam(
name: 'RELEASE',
defaultValue: false,
description: 'Enable to create build for release.',
)
}
options {
timestamps()
ansiColor('xterm')
/* Prevent Jenkins jobs from running forever */
timeout(time: 15, unit: 'MINUTES')
disableConcurrentBuilds()
/* manage how many builds we keep */
buildDiscarder(logRotator(
numToKeepStr: '5',
daysToKeepStr: '30',
artifactNumToKeepStr: '1',
))
}
environment {
PLATFORM = getPlatformFromLabel(params.AGENT_LABEL)
TMPDIR = "${WORKSPACE_TMP}"
GOPATH = "${WORKSPACE_TMP}/go"
GOCACHE = "${WORKSPACE_TMP}/gocache"
PATH = "${PATH}:${GOPATH}/bin:/c/Users/jenkins/go/bin"
REPO_SRC = "${GOPATH}/src/github.com/status-im/status-go"
VERSION = sh(script: "./_assets/scripts/version.sh", returnStdout: true)
ARTIFACT = utils.pkgFilename(
name: 'status-go',
type: env.PLATFORM,
version: env.VERSION,
ext: 'zip',
)
/* prevent sharing cache dir across different jobs */
GO_GENERATE_FAST_DIR = "${env.WORKSPACE_TMP}/go-generate-fast"
}
stages {
stage('Setup') {
steps {
script {
if (env.PLATFORM != 'windows') {
sh "mkdir -p \$(dirname ${REPO_SRC})"
sh "ln -s ${WORKSPACE} ${REPO_SRC}"
}
}
}
}
stage('Deps') {
steps { script {
shell('make status-go-deps')
}
}
}
stage('Generate') {
steps { script {
shell('make generate')
}
}
}
stage('Build Static Lib') {
steps {
script {
shell('make statusgo-library')
}
}
}
stage('Build Shared Lib') {
steps {
script {
shell('make statusgo-shared-library')
}
}
}
stage('Archive') {
steps {
zip zipFile: "${ARTIFACT}", archive: true, dir: 'build/bin'
}
}
stage('Upload') {
steps {
script {
env.PKG_URL = s5cmd.upload(ARTIFACT)
}
}
}
stage('Cleanup') {
steps {
script {
cleanTmp()
}
}
}
} // stages
post {
success { script { github.notifyPR(true) } }
failure { script { github.notifyPR(false) } }
cleanup { cleanWs() }
} // post
} // pipeline
/* This allows us to use one Jenkinsfile and run
* jobs on different platforms based on job name. */
def getAgentLabel() {
if (params.AGENT_LABEL) { return params.AGENT_LABEL }
/* We extract the name of the job from currentThread because
* env is not available before an agent is picked. */
def tokens = Thread.currentThread().getName().split('/')
def labels = []
/* Check if the job path contains any of the valid labels. */
['linux', 'macos', 'windows', 'x86_64', 'aarch64', 'arm64'].each {
if (tokens.contains(it)) { labels.add(it) }
}
return labels.join(' && ')
}
/* This function extracts the platform from the AGENT_LABEL */
def getPlatformFromLabel(label) {
for (platform in ['linux', 'macos', 'windows']) {
if (label.contains(platform)) {
return platform
}
}
}
def shell(cmd) {
if (env.PLATFORM == 'windows') {
sh "${cmd} SHELL=/bin/sh"
} else {
nix.shell(cmd, pure: false) // Use nix.shell for Linux/macOS
}
}
def cleanTmp() {
if (env.PLATFORM == 'windows') {
sh "rm -rf ${env.WORKSPACE}@tmp"
} else {
dir("${env.WORKSPACE}@tmp") { deleteDir() }
}
}

View File

@@ -89,6 +89,9 @@ pipeline {
 post {
   success { script { github.notifyPR(true) } }
   failure { script { github.notifyPR(false) } }
-  cleanup { sh 'make deep-clean' }
+  cleanup {
+    cleanWs()
+    dir("${env.WORKSPACE}@tmp") { deleteDir() }
+  }
 } // post
 } // pipeline

View File

@@ -1,97 +0,0 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.6'
pipeline {
agent { label 'linux && x86_64 && nix-2.19' }
parameters {
string(
name: 'BRANCH',
defaultValue: 'develop',
description: 'Name of branch to build.'
)
booleanParam(
name: 'RELEASE',
defaultValue: false,
description: 'Enable to create build for release.',
)
}
options {
timestamps()
ansiColor('xterm')
/* Prevent Jenkins jobs from running forever */
timeout(time: 10, unit: 'MINUTES')
disableConcurrentBuilds()
/* manage how many builds we keep */
buildDiscarder(logRotator(
numToKeepStr: '5',
daysToKeepStr: '30',
artifactNumToKeepStr: '1',
))
}
environment {
PLATFORM = 'linux'
TMPDIR = "${WORKSPACE_TMP}"
GOPATH = "${WORKSPACE_TMP}/go"
GOCACHE = "${WORKSPACE_TMP}/gocache"
PATH = "${PATH}:${GOPATH}/bin"
REPO_SRC = "${GOPATH}/src/github.com/status-im/status-go"
VERSION = sh(script: "./_assets/scripts/version.sh", returnStdout: true)
ARTIFACT = utils.pkgFilename(
name: 'status-go',
type: env.PLATFORM,
version: env.VERSION,
ext: 'zip',
)
/* prevent sharing cache dir across different jobs */
GO_GENERATE_FAST_DIR = "${env.WORKSPACE_TMP}/go-generate-fast"
}
stages {
stage('Setup') {
steps { /* Go needs to find status-go in GOPATH. */
sh "mkdir -p \$(dirname ${REPO_SRC})"
sh "ln -s ${WORKSPACE} ${REPO_SRC}"
}
}
stage('Generate') {
steps { script {
nix.shell('make generate', pure: false)
} }
}
/* Sanity-check C bindings */
stage('Build Static Lib') {
steps { script {
nix.shell('make statusgo-library', pure: false)
} }
}
stage('Build Shared Lib') {
steps { script {
nix.shell('make statusgo-shared-library', pure: false)
} }
}
stage('Archive') {
steps {
sh "zip -q -r ${ARTIFACT} build/bin"
archiveArtifacts(ARTIFACT)
}
}
stage('Upload') {
steps { script {
env.PKG_URL = s5cmd.upload(ARTIFACT)
} }
}
} // stages
post {
success { script { github.notifyPR(true) } }
failure { script { github.notifyPR(false) } }
cleanup { sh 'make deep-clean' }
} // post
} // pipeline

View File

@@ -0,0 +1 @@
Jenkinsfile.desktop

View File

@@ -0,0 +1 @@
Jenkinsfile.desktop

View File

@@ -64,7 +64,7 @@ pipeline {
   environment {
     PLATFORM = 'tests'
     DB_CONT  = "status-go-test-db-${env.EXECUTOR_NUMBER.toInteger() + 1}"
-    DB_PORT  = "${5432 + env.EXECUTOR_NUMBER.toInteger()}"
+    DB_PORT  = "${54321 + env.EXECUTOR_NUMBER.toInteger()}"
     TMPDIR   = "${WORKSPACE_TMP}"
     GOPATH   = "${WORKSPACE_TMP}/go"
     GOCACHE  = "${WORKSPACE_TMP}/gocache"
@@ -238,8 +238,8 @@ pipeline {
       }
     }
     cleanup {
-      dir(env.TMPDIR) { deleteDir() }
-      sh "make git-clean"
+      cleanWs()
+      dir("${env.WORKSPACE}@tmp") { deleteDir() }
     }
   } // post
 } // pipeline

View File

@@ -0,0 +1 @@
Jenkinsfile.desktop

View File

@@ -24,29 +24,36 @@ mkdir -p "${merged_coverage_reports_path}"
 mkdir -p "${test_results_path}"
 all_compose_files="-f ${root_path}/docker-compose.anvil.yml -f ${root_path}/docker-compose.test.status-go.yml"
+project_name="status-go-func-tests-$(date +%s)"
 # Run functional tests
 echo -e "${GRN}Running tests${RST}, HEAD: $(git rev-parse HEAD)"
-docker-compose ${all_compose_files} up -d --build --remove-orphans
+docker compose -p ${project_name} ${all_compose_files} up -d --build --remove-orphans
 echo -e "${GRN}Running tests-rpc${RST}"
 # Follow the logs, wait for them to finish
-docker-compose ${all_compose_files} logs -f tests-rpc > "${root_path}/tests-rpc.log"
+docker compose -p ${project_name} ${all_compose_files} logs -f tests-rpc > "${root_path}/tests-rpc.log"
 # Stop containers
 echo -e "${GRN}Stopping docker containers${RST}"
-docker-compose ${all_compose_files} stop
+docker compose -p ${project_name} ${all_compose_files} stop
 # Save logs
 echo -e "${GRN}Saving logs${RST}"
-docker-compose ${all_compose_files} logs status-go > "${root_path}/statusd.log"
-docker-compose ${all_compose_files} logs status-go-no-funds > "${root_path}/statusd-no-funds.log"
+docker compose -p ${project_name} ${all_compose_files} logs status-go > "${root_path}/statusd.log"
+docker compose -p ${project_name} ${all_compose_files} logs status-backend > "${root_path}/status-backend.log"
+if [ "$(uname)" = "Darwin" ]; then
+  separator="-"
+else
+  separator="_"
+fi
 # Retrieve exit code
-exit_code=$(docker inspect tests-functional_tests-rpc_1 -f '{{.State.ExitCode}}');
+exit_code=$(docker inspect ${project_name}${separator}tests-rpc${separator}1 -f '{{.State.ExitCode}}');
 # Cleanup containers
 echo -e "${GRN}Removing docker containers${RST}"
-docker-compose ${all_compose_files} down
+docker compose -p ${project_name} ${all_compose_files} down
 # Collect coverage reports
 echo -e "${GRN}Collecting code coverage reports${RST}"

View File

@@ -14,11 +14,11 @@ import (
 	"time"
 	"github.com/google/uuid"
+	"go.uber.org/zap"
 	gethkeystore "github.com/ethereum/go-ethereum/accounts/keystore"
 	gethcommon "github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/common/hexutil"
-	"github.com/ethereum/go-ethereum/log"
 	"github.com/status-im/status-go/account/generator"
 	"github.com/status-im/status-go/eth-node/crypto"
 	"github.com/status-im/status-go/eth-node/keystore"
@@ -100,6 +100,8 @@ type DefaultManager struct {
 	selectedChatAccount *SelectedExtKey // account that was processed during the last call to SelectAccount()
 	mainAccountAddress  types.Address
 	watchAddresses      []types.Address
+	logger              *zap.Logger
 }
 // GetKeystore is only used in tests
@@ -642,13 +644,13 @@ func (m *DefaultManager) ReEncryptKeyStoreDir(keyDirPath, oldPass, newPass strin
 	err = os.RemoveAll(tempKeyDirPath)
 	if err != nil {
 		// the re-encryption is complete so we don't throw
-		log.Error("unable to delete tempKeyDirPath, manual cleanup required")
+		m.logger.Error("unable to delete tempKeyDirPath, manual cleanup required")
 	}
 	err = os.RemoveAll(backupKeyDirPath)
 	if err != nil {
 		// the re-encryption is complete so we don't throw
-		log.Error("unable to delete backupKeyDirPath, manual cleanup required")
+		m.logger.Error("unable to delete backupKeyDirPath, manual cleanup required")
 	}
 	return nil

View File

@@ -3,6 +3,8 @@ package account
 import (
 	"time"
+	"go.uber.org/zap"
 	"github.com/ethereum/go-ethereum/accounts"
 	"github.com/status-im/status-go/account/generator"
@@ -17,9 +19,12 @@ type GethManager struct {
 }
 // NewGethManager returns new node account manager.
-func NewGethManager() *GethManager {
+func NewGethManager(logger *zap.Logger) *GethManager {
 	m := &GethManager{}
-	m.DefaultManager = &DefaultManager{accountsGenerator: generator.New(m)}
+	m.DefaultManager = &DefaultManager{
+		accountsGenerator: generator.New(m),
+		logger:            logger,
+	}
 	return m
 }

View File

@@ -11,6 +11,7 @@ import (
 	"github.com/status-im/status-go/eth-node/crypto"
 	"github.com/status-im/status-go/eth-node/keystore"
 	"github.com/status-im/status-go/eth-node/types"
+	"github.com/status-im/status-go/protocol/tt"
 	"github.com/status-im/status-go/t/utils"
 	"github.com/stretchr/testify/require"
@@ -21,7 +22,7 @@ const testPassword = "test-password"
 const newTestPassword = "new-test-password"
 func TestVerifyAccountPassword(t *testing.T) {
-	accManager := NewGethManager()
+	accManager := NewGethManager(tt.MustCreateTestLogger())
 	keyStoreDir := t.TempDir()
 	emptyKeyStoreDir := t.TempDir()
@@ -103,7 +104,7 @@ func TestVerifyAccountPasswordWithAccountBeforeEIP55(t *testing.T) {
 	err := utils.ImportTestAccount(keyStoreDir, "test-account3-before-eip55.pk")
 	require.NoError(t, err)
-	accManager := NewGethManager()
+	accManager := NewGethManager(tt.MustCreateTestLogger())
 	address := types.HexToAddress(utils.TestConfig.Account3.WalletAddress)
 	_, err = accManager.VerifyAccountPassword(keyStoreDir, address.Hex(), utils.TestConfig.Account3.Password)
@@ -133,7 +134,7 @@ type testAccount struct {
 // SetupTest is used here for reinitializing the mock before every
 // test function to avoid faulty execution.
 func (s *ManagerTestSuite) SetupTest() {
-	s.accManager = NewGethManager()
+	s.accManager = NewGethManager(tt.MustCreateTestLogger())
 	keyStoreDir := s.T().TempDir()
 	s.Require().NoError(s.accManager.InitKeystore(keyStoreDir))

View File

@@ -32,6 +32,7 @@ import (
 	"github.com/status-im/status-go/node"
 	"github.com/status-im/status-go/params"
 	"github.com/status-im/status-go/protocol/requests"
+	"github.com/status-im/status-go/protocol/tt"
 	"github.com/status-im/status-go/rpc"
 	"github.com/status-im/status-go/services/typeddata"
 	"github.com/status-im/status-go/services/wallet"
@@ -95,7 +96,10 @@ func setupGethStatusBackend() (*GethStatusBackend, func() error, func() error, f
 	if err != nil {
 		return nil, nil, nil, nil, err
 	}
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
+	if err != nil {
+		return nil, nil, nil, nil, err
+	}
 	backend.StatusNode().SetAppDB(db)
 	ma, stop2, err := setupTestMultiDB()
@@ -292,7 +296,8 @@ func TestBackendGettersConcurrently(t *testing.T) {
 func TestBackendConnectionChangesConcurrently(t *testing.T) {
 	connections := [...]string{connection.Wifi, connection.Cellular, connection.Unknown}
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	count := 3
 	var wg sync.WaitGroup
@@ -310,7 +315,8 @@ func TestBackendConnectionChangesConcurrently(t *testing.T) {
 }
 func TestBackendConnectionChangesToOffline(t *testing.T) {
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	b.ConnectionChange(connection.None, false)
 	assert.True(t, b.connectionState.Offline)
@@ -386,7 +392,7 @@ func TestBackendCallRPCConcurrently(t *testing.T) {
 }
 func TestAppStateChange(t *testing.T) {
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	var testCases = []struct {
 		name string
@@ -460,7 +466,7 @@ func TestBlockedRPCMethods(t *testing.T) {
 }
 func TestCallRPCWithStoppedNode(t *testing.T) {
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	resp, err := backend.CallRPC(
 		`{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}`,
@@ -699,7 +705,8 @@ func TestBackendGetVerifiedAccount(t *testing.T) {
 func TestRuntimeLogLevelIsNotWrittenToDatabase(t *testing.T) {
 	utils.Init()
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	chatKey, err := gethcrypto.GenerateKey()
 	require.NoError(t, err)
 	walletKey, err := gethcrypto.GenerateKey()
@@ -767,7 +774,8 @@ func TestRuntimeLogLevelIsNotWrittenToDatabase(t *testing.T) {
 func TestLoginWithKey(t *testing.T) {
 	utils.Init()
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	chatKey, err := gethcrypto.GenerateKey()
 	require.NoError(t, err)
 	walletKey, err := gethcrypto.GenerateKey()
@@ -825,7 +833,8 @@ func TestLoginAccount(t *testing.T) {
 	tmpdir := t.TempDir()
 	nameserver := "8.8.8.8"
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	createAccountRequest := &requests.CreateAccount{
 		DisplayName:        "some-display-name",
 		CustomizationColor: "#ffffff",
@@ -855,6 +864,7 @@ func TestLoginAccount(t *testing.T) {
 	acc, err := b.CreateAccountAndLogin(createAccountRequest)
 	require.NoError(t, err)
 	require.Equal(t, nameserver, b.config.WakuV2Config.Nameserver)
+	require.True(t, acc.HasAcceptedTerms)
 	waitForLogin(c)
 	require.NoError(t, b.Logout())
@@ -882,7 +892,8 @@ func TestLoginAccount(t *testing.T) {
 func TestVerifyDatabasePassword(t *testing.T) {
 	utils.Init()
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	chatKey, err := gethcrypto.GenerateKey()
 	require.NoError(t, err)
 	walletKey, err := gethcrypto.GenerateKey()
@@ -920,7 +931,7 @@ func TestVerifyDatabasePassword(t *testing.T) {
 }
 func TestDeleteMultiaccount(t *testing.T) {
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	rootDataDir := t.TempDir()
@@ -1279,7 +1290,7 @@ func loginDesktopUser(t *testing.T, conf *params.NodeConfig) {
 	username := "TestUser"
 	passwd := "0xC888C9CE9E098D5864D3DED6EBCC140A12142263BACE3A23A36F9905F12BD64A" // #nosec G101
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	require.NoError(t, b.AccountManager().InitKeystore(conf.KeyStoreDir))
 	b.UpdateRootDataDir(conf.DataDir)
@@ -1328,7 +1339,7 @@ func TestChangeDatabasePassword(t *testing.T) {
 	oldPassword := "password"
 	newPassword := "newPassword"
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	backend.UpdateRootDataDir(t.TempDir())
 	// Setup keystore to test decryption of it
@@ -1385,7 +1396,7 @@ func TestCreateWallet(t *testing.T) {
 	password := "some-password2" // nolint: goconst
 	tmpdir := t.TempDir()
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	defer func() {
 		require.NoError(t, b.StopNode())
 	}()
@@ -1450,7 +1461,7 @@ func TestSetFleet(t *testing.T) {
 	password := "some-password2" // nolint: goconst
 	tmpdir := t.TempDir()
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	createAccountRequest := &requests.CreateAccount{
 		DisplayName:        "some-display-name",
 		CustomizationColor: "#ffffff",
@@ -1519,7 +1530,7 @@ func TestWalletConfigOnLoginAccount(t *testing.T) {
 	raribleMainnetAPIKey := "rarible-mainnet-api-key" // nolint: gosec
 	raribleTestnetAPIKey := "rarible-testnet-api-key" // nolint: gosec
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	createAccountRequest := &requests.CreateAccount{
 		DisplayName:        "some-display-name",
 		CustomizationColor: "#ffffff",
@@ -1584,7 +1595,7 @@ func TestTestnetEnabledSettingOnCreateAccount(t *testing.T) {
 	utils.Init()
 	tmpdir := t.TempDir()
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
 	// Creating an account with test networks enabled
 	createAccountRequest1 := &requests.CreateAccount{
@@ -1630,7 +1641,7 @@ func TestRestoreAccountAndLogin(t *testing.T) {
 	utils.Init()
 	tmpdir := t.TempDir()
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	// Test case 1: Valid restore account request
 	restoreRequest := &requests.RestoreAccount{
@@ -1665,7 +1676,7 @@ func TestRestoreAccountAndLoginWithoutDisplayName(t *testing.T) {
 	utils.Init()
 	tmpdir := t.TempDir()
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	// Test case: Valid restore account request without DisplayName
 	restoreRequest := &requests.RestoreAccount{
@@ -1684,6 +1695,30 @@ func TestRestoreAccountAndLoginWithoutDisplayName(t *testing.T) {
 	require.NotEmpty(t, account.Name)
 }
+func TestAcceptTerms(t *testing.T) {
+	tmpdir := t.TempDir()
+	b := NewGethStatusBackend(tt.MustCreateTestLogger())
+	conf, err := params.NewNodeConfig(tmpdir, 1777)
+	require.NoError(t, err)
+	require.NoError(t, b.AccountManager().InitKeystore(conf.KeyStoreDir))
+	b.UpdateRootDataDir(conf.DataDir)
+	require.NoError(t, b.OpenAccounts())
+	nameserver := "8.8.8.8"
+	createAccountRequest := &requests.CreateAccount{
+		DisplayName:        "some-display-name",
+		CustomizationColor: "#ffffff",
+		Password:           "some-password",
+		RootDataDir:        tmpdir,
+		LogFilePath:        tmpdir + "/log",
+		WakuV2Nameserver:   &nameserver,
+		WakuV2Fleet:        "status.staging",
+	}
+	_, err = b.CreateAccountAndLogin(createAccountRequest)
+	require.NoError(t, err)
+	err = b.AcceptTerms()
+	require.NoError(t, err)
+}
 func TestCreateAccountPathsValidation(t *testing.T) {
 	tmpdir := t.TempDir()
@@ -1825,7 +1860,8 @@ func TestRestoreKeycardAccountAndLogin(t *testing.T) {
 	conf, err := params.NewNodeConfig(tmpdir, 1777)
 	require.NoError(t, err)
-	backend := NewGethStatusBackend()
+	backend := NewGethStatusBackend(tt.MustCreateTestLogger())
+	require.NoError(t, err)
 	require.NoError(t, backend.AccountManager().InitKeystore(conf.KeyStoreDir))
 	backend.UpdateRootDataDir(conf.DataDir)

View File

@@ -9,6 +9,7 @@ import (
 	"github.com/stretchr/testify/require"
 	"github.com/status-im/status-go/protocol/requests"
+	"github.com/status-im/status-go/protocol/tt"
 )
 func TestCreateAccountAndLogin(t *testing.T) {
@@ -43,7 +44,7 @@ func TestCreateAccountAndLogin(t *testing.T) {
 	var request requests.CreateAccount
 	err := json.Unmarshal([]byte(requestJSON), &request)
 	require.NoError(t, err)
-	statusBackend := NewGethStatusBackend()
+	statusBackend := NewGethStatusBackend(tt.MustCreateTestLogger())
 	_, err = statusBackend.CreateAccountAndLogin(&request)
 	require.NoError(t, err)
 	t.Logf("TestCreateAccountAndLogin: create account user1 and login successfully")

View File

@@ -23,7 +23,6 @@ import (
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/common/hexutil"
 	ethcrypto "github.com/ethereum/go-ethereum/crypto"
-	"github.com/ethereum/go-ethereum/log"
 	signercore "github.com/ethereum/go-ethereum/signer/core/apitypes"
 	"github.com/status-im/status-go/account"
@@ -97,33 +96,40 @@ type GethStatusBackend struct {
 	connectionState     connection.State
 	appState            appState
 	selectedAccountKeyID string
-	log                  log.Logger
 	allowAllRPC          bool // used only for tests, disables api method restrictions
 	LocalPairingStateManager *statecontrol.ProcessStateManager
 	centralizedMetrics   *centralizedmetrics.MetricService
+	logger               *zap.Logger
 }
 // NewGethStatusBackend create a new GethStatusBackend instance
-func NewGethStatusBackend() *GethStatusBackend {
-	defer log.Info("Status backend initialized", "backend", "geth", "version", params.Version, "commit", params.GitCommit, "IpfsGatewayURL", params.IpfsGatewayURL)
-	backend := &GethStatusBackend{}
+func NewGethStatusBackend(logger *zap.Logger) *GethStatusBackend {
+	logger = logger.Named("GethStatusBackend")
+	backend := &GethStatusBackend{
+		logger: logger,
+	}
 	backend.initialize()
+	logger.Info("Status backend initialized",
+		zap.String("backend geth version", params.Version),
+		zap.String("commit", params.GitCommit),
+		zap.String("IpfsGatewayURL", params.IpfsGatewayURL))
 	return backend
 }
 func (b *GethStatusBackend) initialize() {
-	accountManager := account.NewGethManager()
+	accountManager := account.NewGethManager(b.logger)
 	transactor := transactions.NewTransactor()
 	personalAPI := personal.NewAPI()
-	statusNode := node.New(transactor)
+	statusNode := node.New(transactor, b.logger)
 	b.statusNode = statusNode
 	b.accountManager = accountManager
 	b.transactor = transactor
 	b.personalAPI = personalAPI
 	b.statusNode.SetMultiaccountsDB(b.multiaccountsDB)
-	b.log = log.New("package", "status-go/api.GethStatusBackend")
 	b.LocalPairingStateManager = new(statecontrol.ProcessStateManager)
 	b.LocalPairingStateManager.SetPairing(false)
 }
@@ -182,12 +188,12 @@ func (b *GethStatusBackend) OpenAccounts() error {
 	}
 	db, err := multiaccounts.InitializeDB(filepath.Join(b.rootDataDir, "accounts.sql"))
 	if err != nil {
-		b.log.Error("failed to initialize accounts db", "err", err)
+		b.logger.Error("failed to initialize accounts db", zap.Error(err))
 		return err
 	}
 	b.multiaccountsDB = db
-	b.centralizedMetrics = centralizedmetrics.NewDefaultMetricService(b.multiaccountsDB.DB())
+	b.centralizedMetrics = centralizedmetrics.NewDefaultMetricService(b.multiaccountsDB.DB(), b.logger)
 	err = b.centralizedMetrics.EnsureStarted()
 	if err != nil {
 		return err
@@ -198,7 +204,7 @@ func (b *GethStatusBackend) OpenAccounts() error {
 	err = b.statusNode.StartMediaServerWithoutDB()
 	if err != nil {
-		b.log.Error("failed to start media server without app db", "err", err)
+		b.logger.Error("failed to start media server without app db", zap.Error(err))
 		return err
 	}
@@ -238,6 +244,24 @@ func (b *GethStatusBackend) GetAccounts() ([]multiaccounts.Account, error) {
 	return b.multiaccountsDB.GetAccounts()
 }
+func (b *GethStatusBackend) AcceptTerms() error {
+	b.mu.Lock()
+	defer b.mu.Unlock()
+	if b.multiaccountsDB == nil {
+		return errors.New("accounts db wasn't initialized")
+	}
+	accounts, err := b.multiaccountsDB.GetAccounts()
+	if err != nil {
+		return err
+	}
+	if len(accounts) == 0 {
+		return errors.New("accounts is empty")
+	}
+	return b.multiaccountsDB.UpdateHasAcceptedTerms(accounts[0].KeyUID, true)
+}
 func (b *GethStatusBackend) getAccountByKeyUID(keyUID string) (*multiaccounts.Account, error) {
 	b.mu.Lock()
 	defer b.mu.Unlock()
@@ -329,7 +353,7 @@ func (b *GethStatusBackend) DeleteImportedKey(address, password, keyStoreDir str
 	if strings.Contains(fileInfo.Name(), address) {
 		_, err := b.accountManager.VerifyAccountPassword(keyStoreDir, "0x"+address, password)
 		if err != nil {
-			b.log.Error("failed to verify account", "account", address, "error", err)
+			b.logger.Error("failed to verify account", zap.String("account", address), zap.Error(err))
 			return err
 		}
@@ -409,7 +433,7 @@ func (b *GethStatusBackend) ensureAppDBOpened(account multiaccounts.Account, pas
 	appdatabase.CurrentAppDBKeyUID = account.KeyUID
 	b.appDB, err = appdatabase.InitializeDB(dbFilePath, password, account.KDFIterations)
 	if err != nil {
-		b.log.Error("failed to initialize db", "err", err.Error())
+		b.logger.Error("failed to initialize db", zap.Error(err))
		return err
 	}
 	b.statusNode.SetAppDB(b.appDB)
@@ -456,7 +480,7 @@ func (b *GethStatusBackend) ensureWalletDBOpened(account multiaccounts.Account,
 	b.walletDB, err = walletdatabase.InitializeDB(dbWalletPath, password, account.KDFIterations)
 	if err != nil {
-		b.log.Error("failed to initialize wallet db", "err", err.Error())
+		b.logger.Error("failed to initialize wallet db", zap.Error(err))
 		return err
 	}
 	b.statusNode.SetWalletDB(b.walletDB)
@@ -665,7 +689,7 @@ func (b *GethStatusBackend) loginAccount(request *requests.Login) error {
 	err = b.StartNode(b.config)
 	if err != nil {
-		b.log.Info("failed to start node")
+		b.logger.Info("failed to start node")
 		return errors.Wrap(err, "failed to start node")
 	}
@@ -693,7 +717,7 @@ func (b *GethStatusBackend) loginAccount(request *requests.Login) error {
 	err = b.multiaccountsDB.UpdateAccountTimestamp(acc.KeyUID, time.Now().Unix())
 	if err != nil {
-		b.log.Error("failed to update account")
+		b.logger.Error("failed to update account")
 		return errors.Wrap(err, "failed to update account")
 	}
@@ -721,9 +745,9 @@ func (b *GethStatusBackend) UpdateNodeConfigFleet(acc multiaccounts.Account, pas
 	fleet := accountSettings.GetFleet()
 	if !params.IsFleetSupported(fleet) {
-		b.log.Warn("fleet is not supported, overriding with default value",
-			"fleet", fleet,
-			"defaultFleet", DefaultFleet)
+		b.logger.Warn("fleet is not supported, overriding with default value",
+			zap.String("fleet", fleet),
+			zap.String("defaultFleet", DefaultFleet))
 		fleet = DefaultFleet
 	}
@@ -788,7 +812,7 @@ func (b *GethStatusBackend) startNodeWithAccount(acc multiaccounts.Account, pass
 	err = b.StartNode(b.config)
 	if err != nil {
-		b.log.Info("failed to start node")
+		b.logger.Info("failed to start node")
 		return err
 	}
@@ -817,7 +841,7 @@ func (b *GethStatusBackend) startNodeWithAccount(acc multiaccounts.Account, pass
 	err = b.multiaccountsDB.UpdateAccountTimestamp(acc.KeyUID, time.Now().Unix())
 	if err != nil {
-		b.log.Info("failed to update account")
+		b.logger.Info("failed to update account")
 		return err
 	}
@@ -941,7 +965,7 @@ func (b *GethStatusBackend) ExportUnencryptedDatabase(acc multiaccounts.Account,
 	err = sqlite.DecryptDB(dbPath, directory, password, acc.KDFIterations)
 	if err != nil {
-		b.log.Error("failed to initialize db", "err", err)
+		b.logger.Error("failed to initialize db", zap.Error(err))
 		return err
 	}
 	return nil
@@ -961,7 +985,7 @@ func (b *GethStatusBackend) ImportUnencryptedDatabase(acc multiaccounts.Account,
 	err = sqlite.EncryptDB(databasePath, path, password, acc.KDFIterations, signal.SendReEncryptionStarted, signal.SendReEncryptionFinished)
 	if err != nil {
-		b.log.Error("failed to initialize db", "err", err)
+		b.logger.Error("failed to initialize db", zap.Error(err))
 		return err
 	}
 	return nil
@@ -1040,7 +1064,7 @@ func (b *GethStatusBackend) ChangeDatabasePassword(keyUID string, password strin
 	// Revert the password to original
 	err2 := b.changeAppDBPassword(account, noLogout, newPassword, password)
 	if err2 != nil {
-		log.Error("failed to revert app db password", "err", err2)
+		b.logger.Error("failed to revert app db password", zap.Error(err2))
 	}
 	return err
@@ -1327,7 +1351,7 @@ func (b *GethStatusBackend) RestoreAccountAndLogin(request *requests.RestoreAcco
 	)
 	if err != nil {
-		b.log.Error("start node", err)
+		b.logger.Error("start node", zap.Error(err))
 		return nil, err
 	}
@@ -1392,7 +1416,7 @@ func (b *GethStatusBackend) RestoreKeycardAccountAndLogin(request *requests.Rest
 	)
 	if err != nil {
-		b.log.Error("start node", err)
+		b.logger.Error("start node", zap.Error(err))
 		return nil, errors.Wrap(err, "failed to start node")
 	}
@@ -1580,6 +1604,14 @@ func (b *GethStatusBackend) buildAccount(request *requests.CreateAccount, input
 		acc.KDFIterations = dbsetup.ReducedKDFIterationsNumber
 	}
+	count, err := b.multiaccountsDB.GetAccountsCount()
+	if err != nil {
+		return nil, err
+	}
+	if count == 0 {
+		acc.HasAcceptedTerms = true
+	}
 	if request.ImagePath != "" {
 		imageCropRectangle := request.ImageCropRectangle
 		if imageCropRectangle == nil {
@@ -1736,7 +1768,7 @@ func (b *GethStatusBackend) CreateAccountAndLogin(request *requests.CreateAccoun
 	)
 	if err != nil {
-		b.log.Error("start node", err)
+		b.logger.Error("start node", zap.Error(err))
 		return nil, err
 	}
@@ -2040,7 +2072,7 @@ func (b *GethStatusBackend) loadNodeConfig(inputNodeCfg *params.NodeConfig) erro
 	if _, err = os.Stat(conf.RootDataDir); os.IsNotExist(err) {
 		if err := os.MkdirAll(conf.RootDataDir, os.ModePerm); err != nil {
-			b.log.Warn("failed to create data directory", zap.Error(err))
+			b.logger.Warn("failed to create data directory", zap.Error(err))
 			return err
 		}
 	}
@@ -2079,8 +2111,8 @@ func (b *GethStatusBackend) startNode(config *params.NodeConfig) (err error) {
 		}
 	}()
-	b.log.Info("status-go version details", "version", params.Version, "commit", params.GitCommit)
-	b.log.Debug("starting node with config", "config", config)
+	b.logger.Info("status-go version details", zap.String("version", params.Version), zap.String("commit", params.GitCommit))
+	b.logger.Debug("starting node with config", zap.Stringer("config", config))
 	// Update config with some defaults.
 	if err := config.UpdateWithDefaults(); err != nil {
 		return err
@@ -2089,7 +2121,7 @@ func (b *GethStatusBackend) startNode(config *params.NodeConfig) (err error) {
 	// Updating node config
 	b.config = config
-	b.log.Debug("updated config with defaults", "config", config)
+	b.logger.Debug("updated config with defaults", zap.Stringer("config", config))
 	// Start by validating configuration
 	if err := config.Validate(); err != nil {
@@ -2125,10 +2157,10 @@ func (b *GethStatusBackend) startNode(config *params.NodeConfig) (err error) {
 	b.personalAPI.SetRPC(b.statusNode.RPCClient(), rpc.DefaultCallTimeout)
 	if err = b.registerHandlers(); err != nil {
-		b.log.Error("Handler registration failed", "err", err)
+		b.logger.Error("Handler registration failed", zap.Error(err))
 		return
 	}
-	b.log.Info("Handlers registered")
+	b.logger.Info("Handlers registered")
 	// Handle a case when a node is stopped and resumed.
 	// If there is no account selected, an error is returned.
@@ -2325,17 +2357,17 @@ func (b *GethStatusBackend) getVerifiedWalletAccount(address, password string) (
 	config := b.StatusNode().Config()
 	db, err := accounts.NewDB(b.appDB)
 	if err != nil {
-		b.log.Error("failed to create new *Database instance", "error", err)
+		b.logger.Error("failed to create new *Database instance", zap.Error(err))
 		return nil, err
 	}
 	exists, err := db.AddressExists(types.HexToAddress(address))
 	if err != nil {
-		b.log.Error("failed to query db for a given address", "address", address, "error", err)
+		b.logger.Error("failed to query db for a given address", zap.String("address", address), zap.Error(err))
 		return nil, err
 	}
 	if !exists {
-		b.log.Error("failed to get a selected account", "err", transactions.ErrInvalidTxSender)
+		b.logger.Error("failed to get a selected account", zap.Error(transactions.ErrInvalidTxSender))
 		return nil, transactions.ErrAccountDoesntExist
 	}
@@ -2348,7 +2380,7 @@ func (b *GethStatusBackend) getVerifiedWalletAccount(address, password string) (
 	}
 	if err != nil {
-		b.log.Error("failed to verify account", "account", address, "error", err)
+		b.logger.Error("failed to verify account", zap.String("account", address), zap.Error(err))
 		return nil, err
 	}
@@ -2362,7 +2394,7 @@ func (b *GethStatusBackend) generatePartialAccountKey(db *accounts.Database, add
 	dbPath, err := db.GetPath(types.HexToAddress(address))
 	path := "m/" + dbPath[strings.LastIndex(dbPath, "/")+1:]
 	if err != nil {
-		b.log.Error("failed to get path for given account address", "account", address, "error", err)
+		b.logger.Error("failed to get path for given account address", zap.String("account", address), zap.Error(err))
 		return nil, err
 	}
@@ -2436,7 +2468,7 @@ func (b *GethStatusBackend) ConnectionChange(typ string, expensive bool) {
 		state.Offline = true
 	}
-	b.log.Info("Network state change", "old", b.connectionState, "new", state)
+	b.logger.Info("Network state change", zap.Stringer("old", b.connectionState), zap.Stringer("new", state))
 	if b.connectionState.Offline && !state.Offline {
 		// flush hystrix if we are going again online, since it doesn't behave
@@ -2457,14 +2489,14 @@ func (b *GethStatusBackend) AppStateChange(state string) {
 	var messenger *protocol.Messenger
 	s, err := parseAppState(state)
 	if err != nil {
-		log.Error("AppStateChange failed, ignoring", "error", err)
+		b.logger.Error("AppStateChange failed, ignoring", zap.Error(err))
 		return
 	}
 	b.appState = s
 	if b.statusNode == nil {
-		log.Warn("statusNode nil, not reporting app state change")
+		b.logger.Warn("statusNode nil, not reporting app state change")
 		return
 	}
@@ -2477,7 +2509,7 @@ func (b *GethStatusBackend) AppStateChange(state string) {
 	}
 	if messenger == nil {
-		log.Warn("messenger nil, not reporting app state change")
+		b.logger.Warn("messenger nil, not reporting app state change")
 		return
 	}
@@ -2511,7 +2543,7 @@ func (b *GethStatusBackend) Logout() error {
 	b.mu.Lock()
 	defer b.mu.Unlock()
-	b.log.Debug("logging out")
+	b.logger.Debug("logging out")
 	err := b.cleanupServices()
 	if err != nil {
 		return err
@@ -2540,7 +2572,7 @@ func (b *GethStatusBackend) Logout() error {
 	err = b.statusNode.StartMediaServerWithoutDB()
 	if err != nil {
-		b.log.Error("failed to start media server without app db", "err", err)
+		b.logger.Error("failed to start media server without app db", zap.Error(err))
 		return err
 	}
 	return nil

View File

@@ -6,6 +6,8 @@ import (
 	"strings"
 	"testing"
+	"go.uber.org/zap"
 	d_common "github.com/status-im/status-go/common"
 	"github.com/status-im/status-go/appdatabase"
@@ -47,6 +49,7 @@ const (
 type OldMobileUserUpgradingFromV1ToV2Test struct {
 	suite.Suite
 	tmpdir string
+	logger *zap.Logger
 }
 type PostLoginCheckCallback func(b *GethStatusBackend)
@@ -55,6 +58,10 @@ func (s *OldMobileUserUpgradingFromV1ToV2Test) SetupTest() {
 	utils.Init()
 	s.tmpdir = s.T().TempDir()
 	copyDir(srcFolder, s.tmpdir, s.T())
+	var err error
+	s.logger, err = zap.NewDevelopment()
+	s.Require().NoError(err)
 }
 func TestOldMobileUserUpgradingFromV1ToV2(t *testing.T) {
@@ -62,7 +69,7 @@ func TestOldMobileUserUpgradingFromV1ToV2(t *testing.T) {
 }
 func (s *OldMobileUserUpgradingFromV1ToV2Test) loginMobileUser(check PostLoginCheckCallback) {
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(s.logger)
 	b.UpdateRootDataDir(s.tmpdir)
 	s.Require().NoError(b.OpenAccounts())
 	s.Require().NoError(b.Login(oldMobileUserKeyUID, oldMobileUserPasswd))
@@ -141,6 +148,11 @@ func (s *OldMobileUserUpgradingFromV1ToV2Test) TestLoginAndMigrationsStillWorkWi
 	s.Require().True(len(keyKps[0].Accounts) == 1)
 	info, err = generator.LoadAccount(keyKps[0].Accounts[0].Address.Hex(), oldMobileUserPasswd)
 	s.Require().NoError(err)
+	// The user should manually accept terms, so we make sure we don't set it
+	// automatically by mistake.
+	s.Require().False(info.ToMultiAccount().HasAcceptedTerms)
 	s.Require().Equal(keyKps[0].KeyUID, info.KeyUID)
 	s.Require().Equal(keyKps[0].Accounts[0].KeyUID, info.KeyUID)
 	info, err = generator.ImportPrivateKey("c3ad0b50652318f845565c13761e5369ce75dcbc2a94616e15b829d4b07410fe")
@@ -154,7 +166,7 @@
 // TestAddWalletAccount we should be able to add a wallet account after upgrading from mobile v1
 func (s *OldMobileUserUpgradingFromV1ToV2Test) TestAddWalletAccountAfterUpgradingFromMobileV1() {
-	b := NewGethStatusBackend()
+	b := NewGethStatusBackend(s.logger)
 	b.UpdateRootDataDir(s.tmpdir)
 	s.Require().NoError(b.OpenAccounts())
 	s.Require().NoError(b.Login(oldMobileUserKeyUID, oldMobileUserPasswd))

View File

@ -11,6 +11,7 @@ import (
"github.com/status-im/status-go/multiaccounts/settings" "github.com/status-im/status-go/multiaccounts/settings"
"github.com/status-im/status-go/params" "github.com/status-im/status-go/params"
"github.com/status-im/status-go/protocol/requests" "github.com/status-im/status-go/protocol/requests"
"github.com/status-im/status-go/protocol/tt"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -28,7 +29,7 @@ func setupWalletTest(t *testing.T, password string) (backend *GethStatusBackend,
return return
} }
backend = NewGethStatusBackend() backend = NewGethStatusBackend(tt.MustCreateTestLogger())
backend.UpdateRootDataDir(tmpdir) backend.UpdateRootDataDir(tmpdir)
err = backend.AccountManager().InitKeystore(filepath.Join(tmpdir, "keystore")) err = backend.AccountManager().InitKeystore(filepath.Join(tmpdir, "keystore"))

View File

@ -6,13 +6,15 @@ import (
"encoding/json" "encoding/json"
"math/big" "math/big"
"go.uber.org/zap"
d_common "github.com/status-im/status-go/common" d_common "github.com/status-im/status-go/common"
"github.com/status-im/status-go/logutils"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/status-im/status-go/appdatabase/migrations" "github.com/status-im/status-go/appdatabase/migrations"
migrationsprevnodecfg "github.com/status-im/status-go/appdatabase/migrationsprevnodecfg" migrationsprevnodecfg "github.com/status-im/status-go/appdatabase/migrationsprevnodecfg"
@ -94,7 +96,7 @@ func OptimizeMobileWakuV2SettingsForMobileV1(sqlTx *sql.Tx) error {
if d_common.IsMobilePlatform() { if d_common.IsMobilePlatform() {
_, err := sqlTx.Exec(`UPDATE wakuv2_config SET light_client = ?, enable_store_confirmation_for_messages_sent = ?`, true, false) _, err := sqlTx.Exec(`UPDATE wakuv2_config SET light_client = ?, enable_store_confirmation_for_messages_sent = ?`, true, false)
if err != nil { if err != nil {
log.Error("failed to enable light client and disable store confirmation for mobile v1", "err", err.Error()) logutils.ZapLogger().Error("failed to enable light client and disable store confirmation for mobile v1", zap.Error(err))
return err return err
} }
} }
@ -104,7 +106,7 @@ func OptimizeMobileWakuV2SettingsForMobileV1(sqlTx *sql.Tx) error {
func FixMissingKeyUIDForAccounts(sqlTx *sql.Tx) error { func FixMissingKeyUIDForAccounts(sqlTx *sql.Tx) error {
rows, err := sqlTx.Query(`SELECT address,pubkey FROM accounts WHERE pubkey IS NOT NULL AND type != '' AND type != 'generated'`) rows, err := sqlTx.Query(`SELECT address,pubkey FROM accounts WHERE pubkey IS NOT NULL AND type != '' AND type != 'generated'`)
if err != nil { if err != nil {
log.Error("Migrating accounts: failed to query accounts", "err", err.Error()) logutils.ZapLogger().Error("Migrating accounts: failed to query accounts", zap.Error(err))
return err return err
} }
defer rows.Close() defer rows.Close()
@ -113,19 +115,19 @@ func FixMissingKeyUIDForAccounts(sqlTx *sql.Tx) error {
var pubkey e_types.HexBytes var pubkey e_types.HexBytes
err = rows.Scan(&address, &pubkey) err = rows.Scan(&address, &pubkey)
if err != nil { if err != nil {
log.Error("Migrating accounts: failed to scan records", "err", err.Error()) logutils.ZapLogger().Error("Migrating accounts: failed to scan records", zap.Error(err))
return err return err
} }
pk, err := crypto.UnmarshalPubkey(pubkey) pk, err := crypto.UnmarshalPubkey(pubkey)
if err != nil { if err != nil {
log.Error("Migrating accounts: failed to unmarshal pubkey", "err", err.Error(), "pubkey", string(pubkey)) logutils.ZapLogger().Error("Migrating accounts: failed to unmarshal pubkey", zap.String("pubkey", string(pubkey)), zap.Error(err))
return err return err
} }
pkBytes := sha256.Sum256(crypto.FromECDSAPub(pk)) pkBytes := sha256.Sum256(crypto.FromECDSAPub(pk))
keyUIDHex := hexutil.Encode(pkBytes[:]) keyUIDHex := hexutil.Encode(pkBytes[:])
_, err = sqlTx.Exec(`UPDATE accounts SET key_uid = ? WHERE address = ?`, keyUIDHex, address) _, err = sqlTx.Exec(`UPDATE accounts SET key_uid = ? WHERE address = ?`, keyUIDHex, address)
if err != nil { if err != nil {
log.Error("Migrating accounts: failed to update key_uid for imported accounts", "err", err.Error()) logutils.ZapLogger().Error("Migrating accounts: failed to update key_uid for imported accounts", zap.Error(err))
return err return err
} }
} }
@ -134,23 +136,23 @@ func FixMissingKeyUIDForAccounts(sqlTx *sql.Tx) error {
err = sqlTx.QueryRow(`SELECT wallet_root_address FROM settings WHERE synthetic_id='id'`).Scan(&walletRootAddress) err = sqlTx.QueryRow(`SELECT wallet_root_address FROM settings WHERE synthetic_id='id'`).Scan(&walletRootAddress)
if err == sql.ErrNoRows { if err == sql.ErrNoRows {
// we shouldn't reach here, but if we do, it probably happened from the test // we shouldn't reach here, but if we do, it probably happened from the test
log.Warn("Migrating accounts: no wallet_root_address found in settings") logutils.ZapLogger().Warn("Migrating accounts: no wallet_root_address found in settings")
return nil return nil
} }
if err != nil { if err != nil {
log.Error("Migrating accounts: failed to get wallet_root_address", "err", err.Error()) logutils.ZapLogger().Error("Migrating accounts: failed to get wallet_root_address", zap.Error(err))
return err return err
} }
_, err = sqlTx.Exec(`UPDATE accounts SET key_uid = ?, derived_from = ? WHERE type = '' OR type = 'generated'`, CurrentAppDBKeyUID, walletRootAddress.Hex()) _, err = sqlTx.Exec(`UPDATE accounts SET key_uid = ?, derived_from = ? WHERE type = '' OR type = 'generated'`, CurrentAppDBKeyUID, walletRootAddress.Hex())
if err != nil { if err != nil {
log.Error("Migrating accounts: failed to update key_uid/derived_from", "err", err.Error()) logutils.ZapLogger().Error("Migrating accounts: failed to update key_uid/derived_from", zap.Error(err))
return err return err
} }
// fix the default wallet account color issue https://github.com/status-im/status-mobile/issues/20476 // fix the default wallet account color issue https://github.com/status-im/status-mobile/issues/20476
// we don't care the other type of account's color // we don't care the other type of account's color
_, err = sqlTx.Exec(`UPDATE accounts SET color = 'blue',emoji='🐳' WHERE wallet = 1`) _, err = sqlTx.Exec(`UPDATE accounts SET color = 'blue',emoji='🐳' WHERE wallet = 1`)
if err != nil { if err != nil {
log.Error("Migrating accounts: failed to update default wallet account's color to blue", "err", err.Error()) logutils.ZapLogger().Error("Migrating accounts: failed to update default wallet account's color to blue", zap.Error(err))
return err return err
} }
return nil return nil
@ -192,7 +194,7 @@ func migrateEnsUsernames(sqlTx *sql.Tx) error {
rows, err := sqlTx.Query(`SELECT usernames FROM settings`) rows, err := sqlTx.Query(`SELECT usernames FROM settings`)
if err != nil { if err != nil {
log.Error("Migrating ens usernames: failed to query 'settings.usernames'", "err", err.Error()) logutils.ZapLogger().Error("Migrating ens usernames: failed to query 'settings.usernames'", zap.Error(err))
return err return err
} }
@ -240,7 +242,7 @@ func migrateEnsUsernames(sqlTx *sql.Tx) error {
_, err = sqlTx.Exec(`INSERT INTO ens_usernames (username, chain_id) VALUES (?, ?)`, username, defaultChainID) _, err = sqlTx.Exec(`INSERT INTO ens_usernames (username, chain_id) VALUES (?, ?)`, username, defaultChainID)
if err != nil { if err != nil {
log.Error("Migrating ens usernames: failed to insert username into new database", "ensUsername", username, "err", err.Error()) logutils.ZapLogger().Error("Migrating ens usernames: failed to insert username into new database", zap.String("ensUsername", username), zap.Error(err))
} }
} }

View File

@ -5,7 +5,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/ethereum/go-ethereum/log" "go.uber.org/zap"
"github.com/status-im/status-go/centralizedmetrics/common" "github.com/status-im/status-go/centralizedmetrics/common"
"github.com/status-im/status-go/centralizedmetrics/providers" "github.com/status-im/status-go/centralizedmetrics/providers"
@ -35,20 +35,23 @@ type MetricService struct {
started bool started bool
wg sync.WaitGroup wg sync.WaitGroup
interval time.Duration interval time.Duration
logger *zap.Logger
} }
func NewDefaultMetricService(db *sql.DB) *MetricService { func NewDefaultMetricService(db *sql.DB, logger *zap.Logger) *MetricService {
repository := NewSQLiteMetricRepository(db) repository := NewSQLiteMetricRepository(db)
processor := providers.NewMixpanelMetricProcessor(providers.MixpanelAppID, providers.MixpanelToken, providers.MixpanelBaseURL) processor := providers.NewMixpanelMetricProcessor(providers.MixpanelAppID, providers.MixpanelToken, providers.MixpanelBaseURL, logger)
return NewMetricService(repository, processor, defaultPollInterval) return NewMetricService(repository, processor, defaultPollInterval, logger)
} }
func NewMetricService(repository MetricRepository, processor common.MetricProcessor, interval time.Duration) *MetricService { func NewMetricService(repository MetricRepository, processor common.MetricProcessor, interval time.Duration, logger *zap.Logger) *MetricService {
return &MetricService{ return &MetricService{
repository: repository, repository: repository,
processor: processor, processor: processor,
interval: interval, interval: interval,
done: make(chan bool), done: make(chan bool),
logger: logger.Named("MetricService"),
} }
} }
@ -116,27 +119,27 @@ func (s *MetricService) AddMetric(metric common.Metric) error {
} }
func (s *MetricService) processMetrics() { func (s *MetricService) processMetrics() {
log.Info("processing metrics") s.logger.Info("processing metrics")
metrics, err := s.repository.Poll() metrics, err := s.repository.Poll()
if err != nil { if err != nil {
log.Warn("error polling metrics", "error", err) s.logger.Warn("error polling metrics", zap.Error(err))
return return
} }
log.Info("polled metrics") s.logger.Info("polled metrics")
if len(metrics) == 0 { if len(metrics) == 0 {
return return
} }
log.Info("processing metrics") s.logger.Info("processing metrics")
if err := s.processor.Process(metrics); err != nil { if err := s.processor.Process(metrics); err != nil {
log.Warn("error processing metrics", "error", err) s.logger.Warn("error processing metrics", zap.Error(err))
return return
} }
log.Info("deleting metrics") s.logger.Info("deleting metrics")
if err := s.repository.Delete(metrics); err != nil { if err := s.repository.Delete(metrics); err != nil {
log.Warn("error deleting metrics", "error", err) s.logger.Warn("error deleting metrics", zap.Error(err))
} }
log.Info("done metrics") s.logger.Info("done metrics")
} }

View File

@ -15,11 +15,15 @@ import (
var testMetric = common.Metric{ID: "user-id", EventName: "test-name", EventValue: map[string]interface{}{"test-name": "test-value"}, Platform: "android", AppVersion: "2.30.0"} var testMetric = common.Metric{ID: "user-id", EventName: "test-name", EventValue: map[string]interface{}{"test-name": "test-value"}, Platform: "android", AppVersion: "2.30.0"}
func newMetricService(t *testing.T, repository MetricRepository, processor common.MetricProcessor, interval time.Duration) *MetricService {
return NewMetricService(repository, processor, interval, tt.MustCreateTestLogger())
}
// TestMetricService covers the main functionalities of MetricService // TestMetricService covers the main functionalities of MetricService
func TestMetricService(t *testing.T) { func TestMetricService(t *testing.T) {
repository := &TestMetricRepository{} repository := &TestMetricRepository{}
processor := &TestMetricProcessor{} processor := &TestMetricProcessor{}
service := NewMetricService(repository, processor, 1*time.Second) service := newMetricService(t, repository, processor, 1*time.Second)
// Start the service // Start the service
service.Start() service.Start()
@ -111,7 +115,7 @@ func (p *TestMetricProcessor) Process(metrics []common.Metric) error {
func TestAddMetric(t *testing.T) { func TestAddMetric(t *testing.T) {
repository := &TestMetricRepository{} repository := &TestMetricRepository{}
processor := &TestMetricProcessor{} processor := &TestMetricProcessor{}
service := NewMetricService(repository, processor, 1*time.Second) service := newMetricService(t, repository, processor, 1*time.Second)
err := service.AddMetric(testMetric) err := service.AddMetric(testMetric)
if err != nil { if err != nil {
@ -132,7 +136,7 @@ func TestAddMetric(t *testing.T) {
func TestProcessMetrics(t *testing.T) { func TestProcessMetrics(t *testing.T) {
repository := &TestMetricRepository{} repository := &TestMetricRepository{}
processor := &TestMetricProcessor{} processor := &TestMetricProcessor{}
service := NewMetricService(repository, processor, 1*time.Second) service := newMetricService(t, repository, processor, 1*time.Second)
// Add metrics directly to repository for polling // Add metrics directly to repository for polling
require.NoError(t, repository.Add(common.Metric{ID: "3", EventValue: map[string]interface{}{"price": 6.28}})) require.NoError(t, repository.Add(common.Metric{ID: "3", EventValue: map[string]interface{}{"price": 6.28}}))
@ -154,7 +158,7 @@ func TestProcessMetrics(t *testing.T) {
func TestStartStop(t *testing.T) { func TestStartStop(t *testing.T) {
repository := &TestMetricRepository{} repository := &TestMetricRepository{}
processor := &TestMetricProcessor{} processor := &TestMetricProcessor{}
service := NewMetricService(repository, processor, 1*time.Second) service := newMetricService(t, repository, processor, 1*time.Second)
service.Start() service.Start()
require.True(t, service.started) require.True(t, service.started)
@ -173,7 +177,7 @@ func TestStartStop(t *testing.T) {
func TestServiceWithoutMetrics(t *testing.T) { func TestServiceWithoutMetrics(t *testing.T) {
repository := &TestMetricRepository{} repository := &TestMetricRepository{}
processor := &TestMetricProcessor{} processor := &TestMetricProcessor{}
service := NewMetricService(repository, processor, 1*time.Second) service := newMetricService(t, repository, processor, 1*time.Second)
service.Start() service.Start()
defer service.Stop() defer service.Stop()
@ -187,7 +191,7 @@ func TestServiceWithoutMetrics(t *testing.T) {
func TestServiceEnabled(t *testing.T) { func TestServiceEnabled(t *testing.T) {
repository := &TestMetricRepository{} repository := &TestMetricRepository{}
processor := &TestMetricProcessor{} processor := &TestMetricProcessor{}
service := NewMetricService(repository, processor, 1*time.Second) service := newMetricService(t, repository, processor, 1*time.Second)
err := service.ToggleEnabled(true) err := service.ToggleEnabled(true)
require.NoError(t, err) require.NoError(t, err)
@ -201,7 +205,7 @@ func TestServiceEnabled(t *testing.T) {
func TestServiceEnsureStarted(t *testing.T) { func TestServiceEnsureStarted(t *testing.T) {
repository := &TestMetricRepository{} repository := &TestMetricRepository{}
processor := &TestMetricProcessor{} processor := &TestMetricProcessor{}
service := NewMetricService(repository, processor, 1*time.Second) service := newMetricService(t, repository, processor, 1*time.Second)
err := service.EnsureStarted() err := service.EnsureStarted()
require.NoError(t, err) require.NoError(t, err)

View File

@ -5,10 +5,11 @@ import (
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
"io"
"net/http" "net/http"
"time" "time"
"github.com/ethereum/go-ethereum/log" "go.uber.org/zap"
"github.com/status-im/status-go/centralizedmetrics/common" "github.com/status-im/status-go/centralizedmetrics/common"
) )
@ -23,14 +24,17 @@ type AppsflyerMetricProcessor struct {
appID string appID string
secret string secret string
baseURL string baseURL string
logger *zap.Logger
} }
// NewAppsflyerMetricProcessor is a constructor for AppsflyerMetricProcessor // NewAppsflyerMetricProcessor is a constructor for AppsflyerMetricProcessor
func NewAppsflyerMetricProcessor(appID, secret, baseURL string) *AppsflyerMetricProcessor { func NewAppsflyerMetricProcessor(appID, secret, baseURL string, logger *zap.Logger) *AppsflyerMetricProcessor {
return &AppsflyerMetricProcessor{ return &AppsflyerMetricProcessor{
appID: appID, appID: appID,
secret: secret, secret: secret,
baseURL: baseURL, baseURL: baseURL,
logger: logger,
} }
} }
@ -85,7 +89,8 @@ func (p *AppsflyerMetricProcessor) sendToAppsflyer(metric common.Metric) error {
defer resp.Body.Close() defer resp.Body.Close()
if resp.StatusCode != http.StatusOK { if resp.StatusCode != http.StatusOK {
log.Warn("failed to send metric", "status-code", resp.StatusCode, "body", resp.Body) body, err := io.ReadAll(resp.Body)
p.logger.Warn("failed to send metric", zap.Int("status-code", resp.StatusCode), zap.String("body", string(body)), zap.Error(err))
return errors.New("failed to send metric to Appsflyer") return errors.New("failed to send metric to Appsflyer")
} }

View File

@ -9,6 +9,7 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"github.com/status-im/status-go/centralizedmetrics/common" "github.com/status-im/status-go/centralizedmetrics/common"
"github.com/status-im/status-go/protocol/tt"
) )
func TestAppsflyerMetricProcessor(t *testing.T) { func TestAppsflyerMetricProcessor(t *testing.T) {
@ -42,7 +43,7 @@ func TestAppsflyerMetricProcessor(t *testing.T) {
defer testServer.Close() defer testServer.Close()
// Initialize the AppsflyerMetricProcessor with the test server URL // Initialize the AppsflyerMetricProcessor with the test server URL
processor := NewAppsflyerMetricProcessor("testAppID", "testSecret", testServer.URL) processor := NewAppsflyerMetricProcessor("testAppID", "testSecret", testServer.URL, tt.MustCreateTestLogger())
// Example metrics // Example metrics
metrics := []common.Metric{ metrics := []common.Metric{

View File

@ -8,7 +8,7 @@ import (
"io" "io"
"net/http" "net/http"
"github.com/ethereum/go-ethereum/log" "go.uber.org/zap"
"github.com/status-im/status-go/centralizedmetrics/common" "github.com/status-im/status-go/centralizedmetrics/common"
) )
@ -23,14 +23,17 @@ type MixpanelMetricProcessor struct {
appID string appID string
secret string secret string
baseURL string baseURL string
logger *zap.Logger
} }
// NewMixpanelMetricProcessor is a constructor for MixpanelMetricProcessor // NewMixpanelMetricProcessor is a constructor for MixpanelMetricProcessor
func NewMixpanelMetricProcessor(appID, secret, baseURL string) *MixpanelMetricProcessor { func NewMixpanelMetricProcessor(appID, secret, baseURL string, logger *zap.Logger) *MixpanelMetricProcessor {
return &MixpanelMetricProcessor{ return &MixpanelMetricProcessor{
appID: appID, appID: appID,
secret: secret, secret: secret,
baseURL: baseURL, baseURL: baseURL,
logger: logger,
} }
} }
@ -71,7 +74,7 @@ func (amp *MixpanelMetricProcessor) sendToMixpanel(metrics []common.Metric) erro
return err return err
} }
log.Info("sending metrics to", "url", url, "metric", mixPanelMetrics, "secret", amp.GetToken()) amp.logger.Info("sending metrics to", zap.String("url", url), zap.Any("metric", mixPanelMetrics), zap.String("secret", amp.GetToken()))
req, err := http.NewRequest("POST", url, bytes.NewBuffer(payload)) req, err := http.NewRequest("POST", url, bytes.NewBuffer(payload))
if err != nil { if err != nil {
@ -90,8 +93,7 @@ func (amp *MixpanelMetricProcessor) sendToMixpanel(metrics []common.Metric) erro
if resp.StatusCode != http.StatusOK { if resp.StatusCode != http.StatusOK {
body, err := io.ReadAll(resp.Body) body, err := io.ReadAll(resp.Body)
fmt.Println(resp.StatusCode, string(body), err) amp.logger.Warn("failed to send metric", zap.Int("status-code", resp.StatusCode), zap.String("body", string(body)), zap.Error(err))
log.Warn("failed to send metric", "status-code", resp.StatusCode, "body", resp.Body)
return errors.New("failed to send metric to Mixpanel") return errors.New("failed to send metric to Mixpanel")
} }

View File

@ -9,6 +9,7 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"github.com/status-im/status-go/centralizedmetrics/common" "github.com/status-im/status-go/centralizedmetrics/common"
"github.com/status-im/status-go/protocol/tt"
) )
func TestMixpanelMetricProcessor(t *testing.T) { func TestMixpanelMetricProcessor(t *testing.T) {
@ -55,7 +56,7 @@ func TestMixpanelMetricProcessor(t *testing.T) {
defer testServer.Close() defer testServer.Close()
// Initialize the MixpanelMetricProcessor with the test server URL // Initialize the MixpanelMetricProcessor with the test server URL
processor := NewMixpanelMetricProcessor("testAppID", "testSecret", testServer.URL) processor := NewMixpanelMetricProcessor("testAppID", "testSecret", testServer.URL, tt.MustCreateTestLogger())
// Example metrics // Example metrics
metrics := []common.Metric{ metrics := []common.Metric{

View File

@ -6,8 +6,9 @@ import (
"time" "time"
"github.com/afex/hystrix-go/hystrix" "github.com/afex/hystrix-go/hystrix"
"go.uber.org/zap"
"github.com/ethereum/go-ethereum/log" "github.com/status-im/status-go/logutils"
) )
type FallbackFunc func() ([]any, error) type FallbackFunc func() ([]any, error)
@ -177,7 +178,7 @@ func (cb *CircuitBreaker) Execute(cmd *Command) CommandResult {
return nil return nil
} }
if err != nil { if err != nil {
log.Warn("hystrix error", "error", err, "provider", circuitName) logutils.ZapLogger().Warn("hystrix error", zap.String("provider", circuitName), zap.Error(err))
} }
return err return err
}, nil) }, nil)

View File

@ -0,0 +1,245 @@
package analyzer
import (
"context"
"fmt"
"go/ast"
"os"
"go.uber.org/zap"
goparser "go/parser"
gotoken "go/token"
"strings"
"github.com/pkg/errors"
"golang.org/x/tools/go/analysis"
"golang.org/x/tools/go/analysis/passes/inspect"
"golang.org/x/tools/go/ast/inspector"
"github.com/status-im/status-go/cmd/lint-panics/gopls"
"github.com/status-im/status-go/cmd/lint-panics/utils"
)
const Pattern = "LogOnPanic"
type Analyzer struct {
logger *zap.Logger
lsp LSP
cfg *Config
}
type LSP interface {
Definition(context.Context, string, int, int) (string, int, error)
}
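// New parses the linter's flags, starts a gopls client rooted at the configured
// directory, and wraps the processor in an analysis.Analyzer that reports
// goroutines missing a deferred LogOnPanic call.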
func New(ctx context.Context, logger *zap.Logger) (*analysis.Analyzer, error) {
cfg := Config{}
flags, err := cfg.ParseFlags()
if err != nil {
return nil, err
}
logger.Info("creating analyzer", zap.String("root", cfg.RootDir))
goplsClient := gopls.NewGoplsClient(ctx, logger, cfg.RootDir)
processor := newAnalyzer(logger, goplsClient, &cfg)
analyzer := &analysis.Analyzer{
Name: "logpanics",
Doc: fmt.Sprintf("reports missing defer call to %s", Pattern),
Flags: flags,
Requires: []*analysis.Analyzer{inspect.Analyzer},
Run: func(pass *analysis.Pass) (interface{}, error) {
return processor.Run(ctx, pass)
},
}
return analyzer, nil
}
func newAnalyzer(logger *zap.Logger, lsp LSP, cfg *Config) *Analyzer {
return &Analyzer{
logger: logger.Named("processor"),
lsp: lsp,
cfg: cfg.WithAbsolutePaths(),
}
}
func (p *Analyzer) Run(ctx context.Context, pass *analysis.Pass) (interface{}, error) {
inspected, ok := pass.ResultOf[inspect.Analyzer].(*inspector.Inspector)
if !ok {
return nil, errors.New("analyzer is not type *inspector.Inspector")
}
// Create a node filter for goroutines (GoStmt represents a 'go' statement)
nodeFilter := []ast.Node{
(*ast.GoStmt)(nil),
}
// Inspect go statements
inspected.Preorder(nodeFilter, func(n ast.Node) {
p.ProcessNode(ctx, pass, n)
})
return nil, nil
}
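// ProcessNode inspects a single 'go' statement and dispatches on the callee kind:
// anonymous function literals are checked in place, while named functions and
// methods are first resolved to their definitions through gopls.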
func (p *Analyzer) ProcessNode(ctx context.Context, pass *analysis.Pass, n ast.Node) {
goStmt, ok := n.(*ast.GoStmt)
if !ok {
panic("unexpected node type")
}
switch fun := goStmt.Call.Fun.(type) {
case *ast.FuncLit: // anonymous function
pos := pass.Fset.Position(fun.Pos())
logger := p.logger.With(
utils.ZapURI(pos.Filename, pos.Line),
zap.Int("column", pos.Column),
)
logger.Debug("found anonymous goroutine")
if err := p.checkGoroutine(fun.Body); err != nil {
p.logLinterError(pass, fun.Pos(), fun.Pos(), err)
}
case *ast.SelectorExpr: // method call
pos := pass.Fset.Position(fun.Sel.Pos())
p.logger.Info("found method call as goroutine",
zap.String("methodName", fun.Sel.Name),
utils.ZapURI(pos.Filename, pos.Line),
zap.Int("column", pos.Column),
)
defPos, err := p.checkGoroutineDefinition(ctx, pos, pass)
if err != nil {
p.logLinterError(pass, defPos, fun.Sel.Pos(), err)
}
case *ast.Ident: // function call
pos := pass.Fset.Position(fun.Pos())
p.logger.Info("found function call as goroutine",
zap.String("functionName", fun.Name),
utils.ZapURI(pos.Filename, pos.Line),
zap.Int("column", pos.Column),
)
defPos, err := p.checkGoroutineDefinition(ctx, pos, pass)
if err != nil {
p.logLinterError(pass, defPos, fun.Pos(), err)
}
default:
p.logger.Error("unexpected goroutine type",
zap.String("type", fmt.Sprintf("%T", fun)),
)
}
}
func (p *Analyzer) parseFile(path string, pass *analysis.Pass) (*ast.File, error) {
logger := p.logger.With(zap.String("path", path))
src, err := os.ReadFile(path)
if err != nil {
logger.Error("failed to open file", zap.Error(err))
}
file, err := goparser.ParseFile(pass.Fset, path, src, 0)
if err != nil {
logger.Error("failed to parse file", zap.Error(err))
return nil, err
}
return file, nil
}
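// checkGoroutine accepts an empty body (it can never panic) and otherwise requires
// the goroutine's first statement to be a deferred selector call matching Pattern,
// i.e. `defer xxx.LogOnPanic()`.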
func (p *Analyzer) checkGoroutine(body *ast.BlockStmt) error {
if body == nil {
p.logger.Warn("missing function body")
return nil
}
if len(body.List) == 0 {
// empty goroutine is weird, but it never panics, so not a linter error
return nil
}
deferStatement, ok := body.List[0].(*ast.DeferStmt)
if !ok {
return errors.New("first statement is not defer")
}
selectorExpr, ok := deferStatement.Call.Fun.(*ast.SelectorExpr)
if !ok {
return errors.New("first statement call is not a selector")
}
firstLineFunName := selectorExpr.Sel.Name
if firstLineFunName != Pattern {
return errors.Errorf("first statement is not %s", Pattern)
}
return nil
}
func (p *Analyzer) getFunctionBody(node ast.Node, lineNumber int, pass *analysis.Pass) (body *ast.BlockStmt, pos gotoken.Pos) {
ast.Inspect(node, func(n ast.Node) bool {
// Check if the node is a function declaration
funcDecl, ok := n.(*ast.FuncDecl)
if !ok {
return true
}
if pass.Fset.Position(n.Pos()).Line != lineNumber {
return true
}
body = funcDecl.Body
pos = n.Pos()
return false
})
return body, pos
}
func (p *Analyzer) checkGoroutineDefinition(ctx context.Context, pos gotoken.Position, pass *analysis.Pass) (gotoken.Pos, error) {
defFilePath, defLineNumber, err := p.lsp.Definition(ctx, pos.Filename, pos.Line, pos.Column)
if err != nil {
p.logger.Error("failed to find function definition", zap.Error(err))
return 0, err
}
file, err := p.parseFile(defFilePath, pass)
if err != nil {
p.logger.Error("failed to parse file", zap.Error(err))
return 0, err
}
body, defPosition := p.getFunctionBody(file, defLineNumber, pass)
return defPosition, p.checkGoroutine(body)
}
func (p *Analyzer) logLinterError(pass *analysis.Pass, errPos gotoken.Pos, callPos gotoken.Pos, err error) {
errPosition := pass.Fset.Position(errPos)
callPosition := pass.Fset.Position(callPos)
if p.skip(errPosition.Filename) || p.skip(callPosition.Filename) {
return
}
message := fmt.Sprintf("missing %s()", Pattern)
p.logger.Warn(message,
utils.ZapURI(errPosition.Filename, errPosition.Line),
zap.String("details", err.Error()))
if callPos == errPos {
pass.Reportf(errPos, "missing defer call to %s", Pattern)
} else {
pass.Reportf(callPos, "missing defer call to %s", Pattern)
}
}
func (p *Analyzer) skip(filepath string) bool {
return p.cfg.SkipDir != "" && strings.HasPrefix(filepath, p.cfg.SkipDir)
}

View File

@ -0,0 +1,28 @@
package analyzer
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/require"
"go.uber.org/zap"
"golang.org/x/tools/go/analysis/analysistest"
"github.com/status-im/status-go/cmd/lint-panics/utils"
)
func TestMethods(t *testing.T) {
t.Parallel()
logger := utils.BuildLogger(zap.DebugLevel)
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
defer cancel()
a, err := New(ctx, logger)
require.NoError(t, err)
analysistest.Run(t, analysistest.TestData(), a, "functions")
}

View File

@ -0,0 +1,60 @@
package analyzer
import (
"flag"
"io"
"os"
"path"
"strings"
)
type Config struct {
RootDir string
SkipDir string
}
var workdir string
func init() {
var err error
workdir, err = os.Getwd()
if err != nil {
panic(err)
}
}
func (c *Config) ParseFlags() (flag.FlagSet, error) {
flags := flag.NewFlagSet("lint-panics", flag.ContinueOnError)
flags.SetOutput(io.Discard) // Otherwise errors are printed to stderr
flags.StringVar(&c.RootDir, "root", workdir, "root directory to run gopls")
flags.StringVar(&c.SkipDir, "skip", "", "skip paths with this suffix")
// We parse the flags here to have `rootDir` before the call to `singlechecker.Main(analyzer)`
// For the same reason, we discard the output and skip the undefined-flag error.
err := flags.Parse(os.Args[1:])
if err == nil {
return *flags, nil
}
if strings.Contains(err.Error(), "flag provided but not defined") {
err = nil
} else if strings.Contains(err.Error(), "help requested") {
err = nil
}
return *flags, err
}
func (c *Config) WithAbsolutePaths() *Config {
out := *c
if !path.IsAbs(out.RootDir) {
out.RootDir = path.Join(workdir, out.RootDir)
}
if out.SkipDir != "" && !path.IsAbs(out.SkipDir) {
out.SkipDir = path.Join(out.RootDir, out.SkipDir)
}
return &out
}

View File

@ -0,0 +1,5 @@
package common
func LogOnPanic() {
// do nothing
}

View File

@ -0,0 +1,24 @@
package functions
import (
"common"
"fmt"
)
func init() {
go func() {
defer common.LogOnPanic()
}()
go func() {
}()
go func() { // want "missing defer call to LogOnPanic"
fmt.Println("anon")
}()
go func() { // want "missing defer call to LogOnPanic"
common.LogOnPanic()
}()
}

View File

@ -0,0 +1,29 @@
package functions
import (
"common"
"fmt"
)
func init() {
go ok()
go empty()
go noLogOnPanic() // want "missing defer call to LogOnPanic"
go notDefer() // want "missing defer call to LogOnPanic"
}
func ok() {
defer common.LogOnPanic()
}
func empty() {
}
func noLogOnPanic() {
defer fmt.Println("Bar")
}
func notDefer() {
common.LogOnPanic()
}

View File

@ -0,0 +1,33 @@
package functions
import (
"common"
"fmt"
)
type Test struct {
}
func init() {
t := Test{}
go t.ok()
go t.empty()
go t.noLogOnPanic() // want "missing defer call to LogOnPanic"
go t.notDefer() // want "missing defer call to LogOnPanic"
}
func (p *Test) ok() {
defer common.LogOnPanic()
}
func (p *Test) empty() {
}
func (p *Test) noLogOnPanic() {
defer fmt.Println("FooNoLogOnPanic")
}
func (p *Test) notDefer() {
common.LogOnPanic()
}

View File

@ -0,0 +1,21 @@
package functions
import (
"common"
)
func init() {
runAsync(ok)
runAsyncOk(ok)
}
func runAsync(fn func()) {
go fn() // want "missing defer call to LogOnPanic"
}
func runAsyncOk(fn func()) {
go func() {
defer common.LogOnPanic()
fn()
}()
}

View File

@ -0,0 +1,81 @@
package gopls
import (
"context"
"go.lsp.dev/protocol"
"go.uber.org/zap"
)
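// DummyClient is a no-op protocol.Client: every callback from the gopls server is
// logged at debug level and answered with an empty result, which is sufficient for
// the one-shot Definition queries issued by the linter.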
type DummyClient struct {
logger *zap.Logger
}
func NewDummyClient(logger *zap.Logger) *DummyClient {
if logger == nil {
logger = zap.NewNop()
}
return &DummyClient{
logger: logger,
}
}
func (d *DummyClient) Progress(ctx context.Context, params *protocol.ProgressParams) (err error) {
d.logger.Debug("client: Progress", zap.Any("params", params))
return
}
func (d *DummyClient) WorkDoneProgressCreate(ctx context.Context, params *protocol.WorkDoneProgressCreateParams) (err error) {
d.logger.Debug("client: WorkDoneProgressCreate")
return nil
}
func (d *DummyClient) LogMessage(ctx context.Context, params *protocol.LogMessageParams) (err error) {
d.logger.Debug("client: LogMessage", zap.Any("message", params))
return nil
}
func (d *DummyClient) PublishDiagnostics(ctx context.Context, params *protocol.PublishDiagnosticsParams) (err error) {
d.logger.Debug("client: PublishDiagnostics")
return nil
}
func (d *DummyClient) ShowMessage(ctx context.Context, params *protocol.ShowMessageParams) (err error) {
d.logger.Debug("client: ShowMessage", zap.Any("message", params))
return nil
}
func (d *DummyClient) ShowMessageRequest(ctx context.Context, params *protocol.ShowMessageRequestParams) (result *protocol.MessageActionItem, err error) {
d.logger.Debug("client: ShowMessageRequest", zap.Any("message", params))
return nil, nil
}
func (d *DummyClient) Telemetry(ctx context.Context, params interface{}) (err error) {
d.logger.Debug("client: Telemetry")
return nil
}
func (d *DummyClient) RegisterCapability(ctx context.Context, params *protocol.RegistrationParams) (err error) {
d.logger.Debug("client: RegisterCapability")
return nil
}
func (d *DummyClient) UnregisterCapability(ctx context.Context, params *protocol.UnregistrationParams) (err error) {
d.logger.Debug("client: UnregisterCapability")
return nil
}
func (d *DummyClient) ApplyEdit(ctx context.Context, params *protocol.ApplyWorkspaceEditParams) (result bool, err error) {
d.logger.Debug("client: ApplyEdit")
return false, nil
}
func (d *DummyClient) Configuration(ctx context.Context, params *protocol.ConfigurationParams) (result []interface{}, err error) {
d.logger.Debug("client: Configuration")
return nil, nil
}
func (d *DummyClient) WorkspaceFolders(ctx context.Context) (result []protocol.WorkspaceFolder, err error) {
d.logger.Debug("client: WorkspaceFolders")
return nil, nil
}

View File

@ -0,0 +1,155 @@
package gopls
import (
"os/exec"
"github.com/pkg/errors"
"context"
"go.lsp.dev/jsonrpc2"
"go.lsp.dev/protocol"
"time"
"go.lsp.dev/uri"
"go.uber.org/zap"
)
type Connection struct {
logger *zap.Logger
server protocol.Server
cmd *exec.Cmd
conn jsonrpc2.Conn
}
func NewGoplsClient(ctx context.Context, logger *zap.Logger, rootDir string) *Connection {
var err error
logger.Debug("initializing gopls client")
gopls := &Connection{
logger: logger,
}
client := NewDummyClient(logger)
// Step 1: Create a JSON-RPC connection using stdin and stdout
gopls.cmd = exec.Command("gopls", "serve")
stdin, err := gopls.cmd.StdinPipe()
if err != nil {
logger.Error("Failed to get stdin pipe", zap.Error(err))
panic(err)
}
stdout, err := gopls.cmd.StdoutPipe()
if err != nil {
logger.Error("Failed to get stdout pipe", zap.Error(err))
panic(err)
}
err = gopls.cmd.Start()
if err != nil {
logger.Error("Failed to start gopls", zap.Error(err))
panic(err)
}
stream := jsonrpc2.NewStream(&IOStream{
stdin: stdin,
stdout: stdout,
})
// Step 2: Create a client for the running gopls server
ctx, gopls.conn, gopls.server = protocol.NewClient(ctx, client, stream, logger)
// Step 3: Initialize the gopls server
initParams := protocol.InitializeParams{
RootURI: uri.From("file", "", rootDir, "", ""),
InitializationOptions: map[string]interface{}{
"symbolMatcher": "FastFuzzy",
},
}
_, err = gopls.server.Initialize(ctx, &initParams)
if err != nil {
logger.Error("Error during initialize", zap.Error(err))
panic(err)
}
// Step 4: Send 'initialized' notification
err = gopls.server.Initialized(ctx, &protocol.InitializedParams{})
if err != nil {
logger.Error("Error during initialized", zap.Error(err))
panic(err)
}
return gopls
}
func (gopls *Connection) Definition(ctx context.Context, filePath string, lineNumber int, charPosition int) (string, int, error) {
// NOTE: gopls uses 0-based line and column numbers
defFile, defLine, err := gopls.definition(ctx, filePath, lineNumber-1, charPosition-1)
return defFile, defLine + 1, err
}
func (gopls *Connection) definition(ctx context.Context, filePath string, lineNumber int, charPosition int) (string, int, error) {
// Define the file URI and position where the function/method is invoked
fileURI := protocol.DocumentURI("file://" + filePath) // URI of the file where the function/method is invoked
line := lineNumber // Line number where the function is called
character := charPosition // Character (column) where the function is called
// Send the definition request
params := &protocol.DefinitionParams{
TextDocumentPositionParams: protocol.TextDocumentPositionParams{
TextDocument: protocol.TextDocumentIdentifier{
URI: fileURI,
},
Position: protocol.Position{
Line: uint32(line),
Character: uint32(character),
},
},
}
// Create context with a timeout to avoid hanging
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
locations, err := gopls.server.Definition(ctx, params)
if err != nil {
return "", 0, errors.Wrap(err, "failed to fetch definition")
}
if len(locations) == 0 {
return "", 0, errors.New("no definition found")
}
location := locations[0]
return location.URI.Filename(), int(location.Range.Start.Line), nil
}
func (gopls *Connection) DidOpen(ctx context.Context, path string, content string, logger *zap.Logger) {
err := gopls.server.DidOpen(ctx, &protocol.DidOpenTextDocumentParams{
TextDocument: protocol.TextDocumentItem{
URI: protocol.DocumentURI(path),
LanguageID: "go",
Version: 1,
Text: content,
},
})
if err != nil {
logger.Error("failed to call DidOpen", zap.Error(err))
}
}
func (gopls *Connection) DidClose(ctx context.Context, path string, logger *zap.Logger) {
err := gopls.server.DidClose(ctx, &protocol.DidCloseTextDocumentParams{
TextDocument: protocol.TextDocumentIdentifier{
URI: protocol.DocumentURI(path),
},
})
if err != nil {
logger.Error("failed to call DidClose", zap.Error(err))
}
}

View File

@ -0,0 +1,29 @@
package gopls
import "io"
// IOStream combines stdin and stdout into one interface.
type IOStream struct {
stdin io.WriteCloser
stdout io.ReadCloser
}
// Write writes data to stdin.
func (c *IOStream) Write(p []byte) (n int, err error) {
return c.stdin.Write(p)
}
// Read reads data from stdout.
func (c *IOStream) Read(p []byte) (n int, err error) {
return c.stdout.Read(p)
}
// Close closes both stdin and stdout.
func (c *IOStream) Close() error {
err1 := c.stdin.Close()
err2 := c.stdout.Close()
if err1 != nil {
return err1
}
return err2
}

cmd/lint-panics/main.go Normal file
View File

@ -0,0 +1,35 @@
package main
import (
"context"
"os"
"time"
"go.uber.org/zap"
"golang.org/x/tools/go/analysis/singlechecker"
"github.com/status-im/status-go/cmd/lint-panics/analyzer"
"github.com/status-im/status-go/cmd/lint-panics/utils"
)
/*
Run with `-root=<directory>` to specify the root directory to run gopls. Defaults to the current working directory.
Set `-skip=<directory>` to skip errors in certain directories. If relative, it is relative to the root directory.
If provided, `-root` and `-skip` arguments MUST go first, before any other args.
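
Example invocation (illustrative only; adjust the root, skip directory, and package pattern to your checkout):

	go run ./cmd/lint-panics -root=$(pwd) -skip=vendor ./...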
*/
func main() {
logger := utils.BuildLogger(zap.ErrorLevel)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
a, err := analyzer.New(ctx, logger)
if err != nil {
logger.Error("failed to create analyzer", zap.Error(err))
os.Exit(1)
}
singlechecker.Main(a)
}

View File

@ -0,0 +1,39 @@
package utils
import (
"strconv"
"fmt"
"os"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
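// URI renders a file path and line number as "path:line"; ZapURI wraps that value
// in a zap field keyed "uri" for the linter's log output.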
func URI(path string, line int) string {
return path + ":" + strconv.Itoa(line)
}
func ZapURI(path string, line int) zap.Field {
return zap.Field{
Type: zapcore.StringType,
Key: "uri",
String: URI(path, line),
}
}
func BuildLogger(level zapcore.Level) *zap.Logger {
// Initialize logger with colors
loggerConfig := zap.NewDevelopmentConfig()
loggerConfig.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder
loggerConfig.Level = zap.NewAtomicLevelAt(level)
loggerConfig.Development = false
loggerConfig.DisableStacktrace = true
logger, err := loggerConfig.Build()
if err != nil {
fmt.Printf("failed to initialize logger: %s", err.Error())
os.Exit(1)
}
return logger.Named("main")
}

View File

@ -156,7 +156,7 @@ func startClientNode() (*api.GethStatusBackend, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
clientBackend := api.NewGethStatusBackend() clientBackend := api.NewGethStatusBackend(logutils.ZapLogger())
err = clientBackend.AccountManager().InitKeystore(config.KeyStoreDir) err = clientBackend.AccountManager().InitKeystore(config.KeyStoreDir)
if err != nil { if err != nil {
return nil, err return nil, err

View File

@ -123,7 +123,7 @@ func main() {
return return
} }
backend := api.NewGethStatusBackend() backend := api.NewGethStatusBackend(logutils.ZapLogger())
err = ImportAccount(*seedPhrase, backend) err = ImportAccount(*seedPhrase, backend)
if err != nil { if err != nil {
logger.Error("failed import account", "err", err) logger.Error("failed import account", "err", err)

View File

@ -132,7 +132,7 @@ func main() {
return return
} }
backend := api.NewGethStatusBackend() backend := api.NewGethStatusBackend(logutils.ZapLogger())
err = ImportAccount(*seedPhrase, backend) err = ImportAccount(*seedPhrase, backend)
if err != nil { if err != nil {
logger.Error("failed import account", "err", err) logger.Error("failed import account", "err", err)

View File

@ -127,7 +127,7 @@ func main() {
profiling.NewProfiler(*pprofPort).Go() profiling.NewProfiler(*pprofPort).Go()
} }
backend := api.NewGethStatusBackend() backend := api.NewGethStatusBackend(logutils.ZapLogger())
err = ImportAccount(*seedPhrase, backend) err = ImportAccount(*seedPhrase, backend)
if err != nil { if err != nil {
logger.Error("failed import account", "err", err) logger.Error("failed import account", "err", err)

View File

@ -55,7 +55,7 @@ func start(p StartParams, logger *zap.SugaredLogger) (*StatusCLI, error) {
setupLogger(p.Name) setupLogger(p.Name)
logger.Info("starting messenger") logger.Info("starting messenger")
backend := api.NewGethStatusBackend() backend := api.NewGethStatusBackend(logutils.ZapLogger())
if p.KeyUID != "" { if p.KeyUID != "" {
if err := getAccountAndLogin(backend, p.Name, rootDataDir, password, p.KeyUID); err != nil { if err := getAccountAndLogin(backend, p.Name, rootDataDir, password, p.KeyUID); err != nil {
return nil, err return nil, err
@ -81,7 +81,7 @@ func start(p StartParams, logger *zap.SugaredLogger) (*StatusCLI, error) {
} }
waku := backend.StatusNode().WakuV2Service() waku := backend.StatusNode().WakuV2Service()
telemetryClient := telemetry.NewClient(telemetryLogger, p.TelemetryURL, backend.SelectedAccountKeyID(), p.Name, "cli", telemetry.WithPeerID(waku.PeerID().String())) telemetryClient := telemetry.NewClient(telemetryLogger, p.TelemetryURL, backend.SelectedAccountKeyID(), p.Name, "cli", telemetry.WithPeerID(waku.PeerID().String()))
go telemetryClient.Start(context.Background()) telemetryClient.Start(context.Background())
backend.StatusNode().WakuV2Service().SetStatusTelemetryClient(telemetryClient) backend.StatusNode().WakuV2Service().SetStatusTelemetryClient(telemetryClient)
} }
wakuAPI := wakuv2ext.NewPublicAPI(wakuService) wakuAPI := wakuv2ext.NewPublicAPI(wakuService)

View File

@ -187,7 +187,7 @@ func main() {
}() }()
} }
backend := api.NewGethStatusBackend() backend := api.NewGethStatusBackend(logutils.ZapLogger())
if config.NodeKey == "" { if config.NodeKey == "" {
logger.Error("node key needs to be set if running a push notification server") logger.Error("node key needs to be set if running a push notification server")
return return

View File

@ -4,7 +4,7 @@ import (
"database/sql" "database/sql"
"errors" "errors"
"github.com/ethereum/go-ethereum/log" "github.com/status-im/status-go/logutils"
) )
const InMemoryPath = ":memory:" const InMemoryPath = ":memory:"
@ -22,8 +22,7 @@ type DatabaseInitializer interface {
// GetDBFilename takes an instance of sql.DB and returns the filename of the "main" database // GetDBFilename takes an instance of sql.DB and returns the filename of the "main" database
func GetDBFilename(db *sql.DB) (string, error) { func GetDBFilename(db *sql.DB) (string, error) {
if db == nil { if db == nil {
logger := log.New() logutils.ZapLogger().Warn("GetDBFilename was passed a nil pointer sql.DB")
logger.Warn("GetDBFilename was passed a nil pointer sql.DB")
return "", nil return "", nil
} }

View File

@ -5,11 +5,12 @@ import (
"errors" "errors"
"reflect" "reflect"
"regexp" "regexp"
"runtime/debug"
"strings" "strings"
"github.com/ethereum/go-ethereum/log" "go.uber.org/zap"
"github.com/status-im/status-go/eth-node/crypto" "github.com/status-im/status-go/eth-node/crypto"
"github.com/status-im/status-go/logutils"
"github.com/status-im/status-go/protocol/identity/alias" "github.com/status-im/status-go/protocol/identity/alias"
"github.com/status-im/status-go/protocol/protobuf" "github.com/status-im/status-go/protocol/protobuf"
) )
@ -89,7 +90,7 @@ func IsNil(i interface{}) bool {
func LogOnPanic() { func LogOnPanic() {
if err := recover(); err != nil { if err := recover(); err != nil {
log.Error("panic in goroutine", "error", err, "stacktrace", string(debug.Stack())) logutils.ZapLogger().Error("panic in goroutine", zap.Any("error", err), zap.Stack("stacktrace"))
panic(err) panic(err)
} }
} }

View File

@ -9,8 +9,9 @@ import (
"github.com/syndtr/goleveldb/leveldb/opt" "github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage" "github.com/syndtr/goleveldb/leveldb/storage"
"github.com/syndtr/goleveldb/leveldb/util" "github.com/syndtr/goleveldb/leveldb/util"
"go.uber.org/zap"
"github.com/ethereum/go-ethereum/log" "github.com/status-im/status-go/logutils"
) )
type storagePrefix byte type storagePrefix byte
@ -84,7 +85,7 @@ func Create(path, dbName string) (*leveldb.DB, error) {
func Open(path string, opts *opt.Options) (db *leveldb.DB, err error) { func Open(path string, opts *opt.Options) (db *leveldb.DB, err error) {
db, err = leveldb.OpenFile(path, opts) db, err = leveldb.OpenFile(path, opts)
if _, iscorrupted := err.(*errors.ErrCorrupted); iscorrupted { if _, iscorrupted := err.(*errors.ErrCorrupted); iscorrupted {
log.Info("database is corrupted trying to recover", "path", path) logutils.ZapLogger().Info("database is corrupted trying to recover", zap.String("path", path))
db, err = leveldb.RecoverFile(path, nil) db, err = leveldb.RecoverFile(path, nil)
} }
return return

View File

@ -6,8 +6,10 @@ import (
"sync" "sync"
"time" "time"
"github.com/ethereum/go-ethereum/log" "go.uber.org/zap"
"github.com/ethereum/go-ethereum/p2p/discv5" "github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/status-im/status-go/logutils"
) )
// NewDiscV5 creates instances of discovery v5 facade. // NewDiscV5 creates instances of discovery v5 facade.
@ -40,7 +42,7 @@ func (d *DiscV5) Running() bool {
func (d *DiscV5) Start() error { func (d *DiscV5) Start() error {
d.mu.Lock() d.mu.Lock()
defer d.mu.Unlock() defer d.mu.Unlock()
log.Debug("Starting discovery", "listen address", d.laddr) logutils.ZapLogger().Debug("Starting discovery", zap.String("listen address", d.laddr))
addr, err := net.ResolveUDPAddr("udp", d.laddr) addr, err := net.ResolveUDPAddr("udp", d.laddr)
if err != nil { if err != nil {
return err return err

go.mod
View File

@ -99,6 +99,9 @@ require (
github.com/wk8/go-ordered-map/v2 v2.1.7 github.com/wk8/go-ordered-map/v2 v2.1.7
github.com/yeqown/go-qrcode/v2 v2.2.1 github.com/yeqown/go-qrcode/v2 v2.2.1
github.com/yeqown/go-qrcode/writer/standard v1.2.1 github.com/yeqown/go-qrcode/writer/standard v1.2.1
go.lsp.dev/jsonrpc2 v0.10.0
go.lsp.dev/protocol v0.12.0
go.lsp.dev/uri v0.3.0
go.uber.org/mock v0.4.0 go.uber.org/mock v0.4.0
go.uber.org/multierr v1.11.0 go.uber.org/multierr v1.11.0
golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa
@ -253,6 +256,8 @@ require (
github.com/russolsen/ohyeah v0.0.0-20160324131710-f4938c005315 // indirect github.com/russolsen/ohyeah v0.0.0-20160324131710-f4938c005315 // indirect
github.com/russolsen/same v0.0.0-20160222130632-f089df61f51d // indirect github.com/russolsen/same v0.0.0-20160222130632-f089df61f51d // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/segmentio/asm v1.1.3 // indirect
github.com/segmentio/encoding v0.3.4 // indirect
github.com/shirou/gopsutil v3.21.11+incompatible // indirect github.com/shirou/gopsutil v3.21.11+incompatible // indirect
github.com/shopspring/decimal v1.2.0 // indirect github.com/shopspring/decimal v1.2.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect github.com/spaolacci/murmur3 v1.1.0 // indirect
@ -275,6 +280,7 @@ require (
github.com/yeqown/reedsolomon v1.0.0 // indirect github.com/yeqown/reedsolomon v1.0.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.etcd.io/bbolt v1.3.6 // indirect go.etcd.io/bbolt v1.3.6 // indirect
go.lsp.dev/pkg v0.0.0-20210717090340-384b27a52fb2 // indirect
go.uber.org/atomic v1.11.0 // indirect go.uber.org/atomic v1.11.0 // indirect
go.uber.org/dig v1.18.0 // indirect go.uber.org/dig v1.18.0 // indirect
go.uber.org/fx v1.22.2 // indirect go.uber.org/fx v1.22.2 // indirect

go.sum
View File

@ -1936,6 +1936,10 @@ github.com/sclevine/spec v1.2.0/go.mod h1:W4J29eT/Kzv7/b9IWLB055Z+qvVC9vt0Arko24
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc= github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo= github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/seccomp/libseccomp-golang v0.9.2-0.20210429002308-3879420cc921/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg= github.com/seccomp/libseccomp-golang v0.9.2-0.20210429002308-3879420cc921/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/segmentio/asm v1.1.3 h1:WM03sfUOENvvKexOLp+pCqgb/WDjsi7EK8gIsICtzhc=
github.com/segmentio/asm v1.1.3/go.mod h1:Ld3L4ZXGNcSLRg4JBsZ3//1+f/TjYl0Mzen/DQy1EJg=
github.com/segmentio/encoding v0.3.4 h1:WM4IBnxH8B9TakiM2QD5LyNl9JSndh88QbHqVC+Pauc=
github.com/segmentio/encoding v0.3.4/go.mod h1:n0JeuIqEQrQoPDGsjo8UNd1iA0U8d8+oHAA4E3G3OxM=
github.com/segmentio/kafka-go v0.1.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo= github.com/segmentio/kafka-go v0.1.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo=
github.com/segmentio/kafka-go v0.2.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo= github.com/segmentio/kafka-go v0.2.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
@ -2221,6 +2225,14 @@ go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lL
go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE= go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE=
go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc= go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc=
go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVdCRJoS4= go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVdCRJoS4=
go.lsp.dev/jsonrpc2 v0.10.0 h1:Pr/YcXJoEOTMc/b6OTmcR1DPJ3mSWl/SWiU1Cct6VmI=
go.lsp.dev/jsonrpc2 v0.10.0/go.mod h1:fmEzIdXPi/rf6d4uFcayi8HpFP1nBF99ERP1htC72Ac=
go.lsp.dev/pkg v0.0.0-20210717090340-384b27a52fb2 h1:hCzQgh6UcwbKgNSRurYWSqh8MufqRRPODRBblutn4TE=
go.lsp.dev/pkg v0.0.0-20210717090340-384b27a52fb2/go.mod h1:gtSHRuYfbCT0qnbLnovpie/WEmqyJ7T4n6VXiFMBtcw=
go.lsp.dev/protocol v0.12.0 h1:tNprUI9klQW5FAFVM4Sa+AbPFuVQByWhP1ttNUAjIWg=
go.lsp.dev/protocol v0.12.0/go.mod h1:Qb11/HgZQ72qQbeyPfJbu3hZBH23s1sr4st8czGeDMQ=
go.lsp.dev/uri v0.3.0 h1:KcZJmh6nFIBeJzTugn5JTU6OOyG0lDOo3R9KwTxTYbo=
go.lsp.dev/uri v0.3.0/go.mod h1:P5sbO1IQR+qySTWOCnhnK7phBx+W3zbLqSMDJNTw88I=
go.mongodb.org/mongo-driver v1.1.0/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= go.mongodb.org/mongo-driver v1.1.0/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.7.0/go.mod h1:Q4oFMbo1+MSNqICAdYMlC/zSTrwCogR4R8NzkI+yfU8= go.mongodb.org/mongo-driver v1.7.0/go.mod h1:Q4oFMbo1+MSNqICAdYMlC/zSTrwCogR4R8NzkI+yfU8=
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk= go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
@ -2677,6 +2689,7 @@ golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211023085530-d6a326fbbf70/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211023085530-d6a326fbbf70/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211110154304-99a53858aa08/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211205182925-97ca703d548d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211205182925-97ca703d548d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=

View File

@ -4,6 +4,7 @@ import (
"context" "context"
"sync" "sync"
status_common "github.com/status-im/status-go/common"
"github.com/status-im/status-go/healthmanager/aggregator" "github.com/status-im/status-go/healthmanager/aggregator"
"github.com/status-im/status-go/healthmanager/rpcstatus" "github.com/status-im/status-go/healthmanager/rpcstatus"
) )
@ -72,6 +73,7 @@ func (b *BlockchainHealthManager) RegisterProvidersHealthManager(ctx context.Con
statusCh := phm.Subscribe() statusCh := phm.Subscribe()
b.wg.Add(1) b.wg.Add(1)
go func(phm *ProvidersHealthManager, statusCh chan struct{}, providerCtx context.Context) { go func(phm *ProvidersHealthManager, statusCh chan struct{}, providerCtx context.Context) {
defer status_common.LogOnPanic()
defer func() { defer func() {
phm.Unsubscribe(statusCh) phm.Unsubscribe(statusCh)
b.wg.Done() b.wg.Done()

View File

@@ -15,9 +15,10 @@ import (
    "time"
    "unicode/utf8"
+   "go.uber.org/zap"
    "golang.org/x/image/webp"
-   "github.com/ethereum/go-ethereum/log"
+   "github.com/status-im/status-go/logutils"
)
var (
@@ -66,7 +67,7 @@ func DecodeFromURL(path string) (image.Image, error) {
    defer func() {
        if err := res.Body.Close(); err != nil {
-           log.Error("failed to close profile pic http request body", "err", err)
+           logutils.ZapLogger().Error("failed to close profile pic http request body", zap.Error(err))
        }
    }()

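The hunks that follow repeat one mechanical change: calls to `github.com/ethereum/go-ethereum/log` with loose key/value pairs are replaced by the shared zap logger from `logutils` with typed fields. A small sketch of the call shape, with an illustrative function and message that are not taken from the diff:

```
package example

import (
    "go.uber.org/zap"

    "github.com/status-im/status-go/logutils"
)

// closeBody shows the migrated call: the old form was
// log.Error("failed to close response body", "err", err); the new form
// passes a typed zap field instead of an untyped key/value pair.
func closeBody(closeFn func() error) {
    if err := closeFn(); err != nil {
        logutils.ZapLogger().Error("failed to close response body", zap.Error(err))
    }
}
```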

@@ -15,7 +15,7 @@ import (
    "go.uber.org/zap"
    xdraw "golang.org/x/image/draw"
-   "github.com/ethereum/go-ethereum/log"
+   "github.com/status-im/status-go/logutils"
)
type Circle struct {
@@ -48,7 +48,10 @@ func Resize(size ResizeDimension, img image.Image) image.Image {
        width, height = uint(size), 0
    }
-   log.Info("resizing", "size", size, "width", width, "height", height)
+   logutils.ZapLogger().Info("resizing",
+       zap.Uint("size", uint(size)),
+       zap.Uint("width", width),
+       zap.Uint("height", height))
    return resize.Resize(width, height, img, resize.Bilinear)
}
@@ -264,14 +267,14 @@ func SuperimposeLogoOnQRImage(imageBytes []byte, qrFilepath []byte) []byte {
    img1, _, err := image.Decode(bytes.NewReader(imageBytes))
    if err != nil {
-       log.Error("error decoding logo Image", zap.Error(err))
+       logutils.ZapLogger().Error("error decoding logo Image", zap.Error(err))
        return nil
    }
    img2, _, err := image.Decode(bytes.NewReader(qrFilepath))
    if err != nil {
-       log.Error("error decoding QR Image", zap.Error(err))
+       logutils.ZapLogger().Error("error decoding QR Image", zap.Error(err))
        return nil
    }
    // Create a new image with the dimensions of the first image
@@ -290,7 +293,7 @@ func SuperimposeLogoOnQRImage(imageBytes []byte, qrFilepath []byte) []byte {
    err = png.Encode(&b, result)
    if err != nil {
-       log.Error("error encoding final result Image to Buffer", zap.Error(err))
+       logutils.ZapLogger().Error("error encoding final result Image to Buffer", zap.Error(err))
        return nil
    }


@@ -12,10 +12,11 @@ import (
    "github.com/ipfs/go-cid"
    "github.com/wealdtech/go-multicodec"
+   "go.uber.org/zap"
    "github.com/ethereum/go-ethereum/common/hexutil"
-   "github.com/ethereum/go-ethereum/log"
    "github.com/status-im/status-go/common"
+   "github.com/status-im/status-go/logutils"
    "github.com/status-im/status-go/params"
)
@@ -214,12 +215,12 @@ func (d *Downloader) download(cid string, download bool) ([]byte, error) {
    defer func() {
        if err := resp.Body.Close(); err != nil {
-           log.Error("failed to close the stickerpack request body", "err", err)
+           logutils.ZapLogger().Error("failed to close the stickerpack request body", zap.Error(err))
        }
    }()
    if resp.StatusCode < 200 || resp.StatusCode > 299 {
-       log.Error("could not load data for", "cid", cid, "code", resp.StatusCode)
+       logutils.ZapLogger().Error("could not load data for", zap.String("cid", cid), zap.Int("code", resp.StatusCode))
        return nil, errors.New("could not load ipfs data")
    }


@@ -2,6 +2,7 @@ package logutils
import (
    "fmt"
+   "time"
    "go.uber.org/zap"
)
@@ -13,3 +14,11 @@ func WakuMessageTimestamp(key string, value *int64) zap.Field {
    }
    return zap.String(key, valueStr)
}
+func UnixTimeMs(key string, t time.Time) zap.Field {
+   return zap.String(key, fmt.Sprintf("%d", t.UnixMilli()))
+}
+func UnixTimeNano(key string, t time.Time) zap.Field {
+   return zap.String(key, fmt.Sprintf("%d", t.UnixNano()))
+}

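The two helpers added above render a time.Time as a plain millisecond or nanosecond string instead of zap's default time encoding. A hedged usage sketch; the message and field names are illustrative:

```
package example

import (
    "time"

    "github.com/status-im/status-go/logutils"
)

// logSendTime attaches both helper fields to a single entry; the values are
// the decimal UnixMilli/UnixNano strings produced by the helpers above.
func logSendTime(sentAt time.Time) {
    logutils.ZapLogger().Debug("message sent",
        logutils.UnixTimeMs("sentAtMs", sentAt),
        logutils.UnixTimeNano("sentAtNano", sentAt),
    )
}
```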

@@ -1,6 +1,7 @@
package logutils
import (
+   "go.uber.org/zap/zapcore"
    "gopkg.in/natefinch/lumberjack.v2"
    "github.com/ethereum/go-ethereum/log"
@@ -28,3 +29,13 @@ func FileHandlerWithRotation(opts FileOptions, format log.Format) log.Handler {
    }
    return log.StreamHandler(logger, format)
}
+// ZapSyncerWithRotation creates a zapcore.WriteSyncer with a configured rotation
+func ZapSyncerWithRotation(opts FileOptions) zapcore.WriteSyncer {
+   return zapcore.AddSync(&lumberjack.Logger{
+       Filename:   opts.Filename,
+       MaxSize:    opts.MaxSize,
+       MaxBackups: opts.MaxBackups,
+       Compress:   opts.Compress,
+   })
+}

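`ZapSyncerWithRotation` adapts a `lumberjack.Logger` into a `zapcore.WriteSyncer`, so the existing `FileOptions` rotation settings can feed a zap core directly. A sketch of wiring it into a logger, assuming the `logutils` package above; the encoder choice and option values are illustrative:

```
package example

import (
    "go.uber.org/zap"
    "go.uber.org/zap/zapcore"

    "github.com/status-im/status-go/logutils"
)

// newRotatingFileLogger builds a zap.Logger that writes JSON entries to a
// size-rotated file via the syncer introduced above.
func newRotatingFileLogger(path string) *zap.Logger {
    core := zapcore.NewCore(
        zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
        logutils.ZapSyncerWithRotation(logutils.FileOptions{
            Filename:   path,
            MaxSize:    10, // megabytes per file before rotation (lumberjack semantics)
            MaxBackups: 3,
            Compress:   true,
        }),
        zap.InfoLevel,
    )
    return zap.New(core)
}
```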

@@ -2,58 +2,49 @@ package requestlog
import (
    "errors"
-   "sync/atomic"
-   "github.com/ethereum/go-ethereum/log"
+   "go.uber.org/zap"
+   "go.uber.org/zap/zapcore"
    "github.com/status-im/status-go/logutils"
+   "github.com/status-im/status-go/protocol/zaputil"
)
var (
-   // requestLogger is the request logger object
-   requestLogger log.Logger
-   // isRequestLoggingEnabled controls whether request logging is enabled
-   isRequestLoggingEnabled atomic.Bool
+   requestLogger *zap.Logger
)
-// NewRequestLogger creates a new request logger object
-func NewRequestLogger(ctx ...interface{}) log.Logger {
-   requestLogger = log.New(ctx...)
-   return requestLogger
-}
-// EnableRequestLogging enables or disables RPC logging
-func EnableRequestLogging(enable bool) {
-   if enable {
-       isRequestLoggingEnabled.Store(true)
-   } else {
-       isRequestLoggingEnabled.Store(false)
-   }
-}
-// IsRequestLoggingEnabled returns whether RPC logging is enabled
-func IsRequestLoggingEnabled() bool {
-   return isRequestLoggingEnabled.Load()
-}
// GetRequestLogger returns the RPC logger object
-func GetRequestLogger() log.Logger {
+func GetRequestLogger() *zap.Logger {
    return requestLogger
}
-func ConfigureAndEnableRequestLogging(file string) error {
-   log.Info("initialising request logger", "log file", file)
-   requestLogger := NewRequestLogger()
-   if file == "" {
-       return errors.New("log file path is required")
+func CreateRequestLogger(file string) (*zap.Logger, error) {
+   if len(file) == 0 {
+       return nil, errors.New("file is required")
    }
    fileOpts := logutils.FileOptions{
        Filename:   file,
        MaxBackups: 1,
    }
-   handler := logutils.FileHandlerWithRotation(fileOpts, log.LogfmtFormat())
-   filteredHandler := log.LvlFilterHandler(log.LvlDebug, handler)
-   requestLogger.SetHandler(filteredHandler)
-   EnableRequestLogging(true)
+   core := zapcore.NewCore(
+       zaputil.NewConsoleHexEncoder(zap.NewDevelopmentEncoderConfig()),
+       zapcore.AddSync(logutils.ZapSyncerWithRotation(fileOpts)),
+       zap.DebugLevel,
+   )
+   return zap.New(core).Named("RequestLogger"), nil
+}
+func ConfigureAndEnableRequestLogging(file string) error {
+   logger, err := CreateRequestLogger(file)
+   if err != nil {
+       return err
+   }
+   requestLogger = logger
    return nil
}

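After this refactor, "request logging is enabled" simply means the stored `*zap.Logger` is non-nil: `CreateRequestLogger` builds a debug-level logger that writes to a rotated file, and `ConfigureAndEnableRequestLogging` installs it behind `GetRequestLogger`. A hedged usage sketch; the log path is supplied by the caller:

```
package example

import (
    "github.com/status-im/status-go/logutils/requestlog"
)

// enableRequestLog installs the request logger; afterwards
// requestlog.GetRequestLogger() returns a non-nil *zap.Logger, which is the
// new "enabled" signal used elsewhere in this change set.
func enableRequestLog(path string) error {
    if err := requestlog.ConfigureAndEnableRequestLogging(path); err != nil {
        return err
    }
    requestlog.GetRequestLogger().Debug("request logging enabled")
    return nil
}
```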

@@ -4,8 +4,10 @@ import (
    "sync"
    "time"
-   "github.com/ethereum/go-ethereum/log"
+   "go.uber.org/zap"
    "github.com/status-im/status-go/common"
+   "github.com/status-im/status-go/logutils"
)
const (
@@ -38,7 +40,7 @@ func newDBCleaner(db DB, retention time.Duration) *dbCleaner {
// Start starts a loop that cleans up old messages.
func (c *dbCleaner) Start() {
-   log.Info("Starting cleaning envelopes", "period", c.period, "retention", c.retention)
+   logutils.ZapLogger().Info("Starting cleaning envelopes", zap.Duration("period", c.period), zap.Duration("retention", c.retention))
    cancel := make(chan struct{})
@@ -71,9 +73,9 @@ func (c *dbCleaner) schedule(period time.Duration, cancel <-chan struct{}) {
        case <-t.C:
            count, err := c.PruneEntriesOlderThan(time.Now().Add(-c.retention))
            if err != nil {
-               log.Error("failed to prune data", "err", err)
+               logutils.ZapLogger().Error("failed to prune data", zap.Error(err))
            }
-           log.Info("Prunned some some messages successfully", "count", count)
+           logutils.ZapLogger().Info("Prunned some some messages successfully", zap.Int("count", count))
        case <-cancel:
            return
        }


@@ -26,14 +26,15 @@ import (
    "time"
    prom "github.com/prometheus/client_golang/prometheus"
+   "go.uber.org/zap"
    "github.com/ethereum/go-ethereum/common"
-   "github.com/ethereum/go-ethereum/log"
    "github.com/ethereum/go-ethereum/rlp"
    gocommon "github.com/status-im/status-go/common"
    gethbridge "github.com/status-im/status-go/eth-node/bridge/geth"
    "github.com/status-im/status-go/eth-node/crypto"
    "github.com/status-im/status-go/eth-node/types"
+   "github.com/status-im/status-go/logutils"
    "github.com/status-im/status-go/params"
    "github.com/status-im/status-go/waku"
    wakucommon "github.com/status-im/status-go/waku/common"
@@ -144,11 +145,11 @@ func (s *WakuMailServer) DeliverMail(peerID []byte, req *wakucommon.Envelope) {
    payload, err := s.decodeRequest(peerID, req)
    if err != nil {
        deliveryFailuresCounter.WithLabelValues("validation").Inc()
-       log.Error(
+       logutils.ZapLogger().Error(
            "[mailserver:DeliverMail] request failed validaton",
-           "peerID", types.BytesToHash(peerID),
-           "requestID", req.Hash().String(),
-           "err", err,
+           zap.Stringer("peerID", types.BytesToHash(peerID)),
+           zap.Stringer("requestID", req.Hash()),
+           zap.Error(err),
        )
        s.ms.sendHistoricMessageErrorResponse(types.BytesToHash(peerID), types.Hash(req.Hash()), err)
        return
@@ -277,12 +278,12 @@ func (s *WakuMailServer) decodeRequest(peerID []byte, request *wakucommon.Envelo
    decrypted := s.openEnvelope(request)
    if decrypted == nil {
-       log.Warn("Failed to decrypt p2p request")
+       logutils.ZapLogger().Warn("Failed to decrypt p2p request")
        return payload, errors.New("failed to decrypt p2p request")
    }
    if err := checkMsgSignature(decrypted.Src, peerID); err != nil {
-       log.Warn("Check message signature failed", "err", err.Error())
+       logutils.ZapLogger().Warn("Check message signature failed", zap.Error(err))
        return payload, fmt.Errorf("check message signature failed: %v", err)
    }
@@ -295,7 +296,7 @@ func (s *WakuMailServer) decodeRequest(peerID []byte, request *wakucommon.Envelo
    }
    if payload.Upper < payload.Lower {
-       log.Error("Query range is invalid: lower > upper", "lower", payload.Lower, "upper", payload.Upper)
+       logutils.ZapLogger().Error("Query range is invalid: lower > upper", zap.Uint32("lower", payload.Lower), zap.Uint32("upper", payload.Upper))
        return payload, errors.New("query range is invalid: lower > upper")
    }
@@ -400,13 +401,13 @@ func newMailServer(cfg Config, adapter adapter, service service) (*mailServer, e
    // Open database in the last step in order not to init with error
    // and leave the database open by accident.
    if cfg.PostgresEnabled {
-       log.Info("Connecting to postgres database")
+       logutils.ZapLogger().Info("Connecting to postgres database")
        database, err := NewPostgresDB(cfg.PostgresURI)
        if err != nil {
            return nil, fmt.Errorf("open DB: %s", err)
        }
        s.db = database
-       log.Info("Connected to postgres database")
+       logutils.ZapLogger().Info("Connected to postgres database")
    } else {
        // Defaults to LevelDB
        database, err := NewLevelDB(cfg.DataDir)
@@ -439,7 +440,7 @@ func (s *mailServer) setupCleaner(retention time.Duration) {
func (s *mailServer) Archive(env types.Envelope) {
    err := s.db.SaveEnvelope(env)
    if err != nil {
-       log.Error("Could not save envelope", "hash", env.Hash().String())
+       logutils.ZapLogger().Error("Could not save envelope", zap.Stringer("hash", env.Hash()))
    }
}
@@ -448,34 +449,34 @@ func (s *mailServer) DeliverMail(peerID, reqID types.Hash, req MessagesRequestPa
    defer timer.ObserveDuration()
    deliveryAttemptsCounter.Inc()
-   log.Info(
+   logutils.ZapLogger().Info(
        "[mailserver:DeliverMail] delivering mail",
-       "peerID", peerID.String(),
-       "requestID", reqID.String(),
+       zap.Stringer("peerID", peerID),
+       zap.Stringer("requestID", reqID),
    )
    req.SetDefaults()
-   log.Info(
+   logutils.ZapLogger().Info(
        "[mailserver:DeliverMail] processing request",
-       "peerID", peerID.String(),
-       "requestID", reqID.String(),
-       "lower", req.Lower,
-       "upper", req.Upper,
-       "bloom", req.Bloom,
-       "topics", req.Topics,
-       "limit", req.Limit,
-       "cursor", req.Cursor,
-       "batch", req.Batch,
+       zap.Stringer("peerID", peerID),
+       zap.Stringer("requestID", reqID),
+       zap.Uint32("lower", req.Lower),
+       zap.Uint32("upper", req.Upper),
+       zap.Binary("bloom", req.Bloom),
+       zap.Any("topics", req.Topics),
+       zap.Uint32("limit", req.Limit),
+       zap.Binary("cursor", req.Cursor),
+       zap.Bool("batch", req.Batch),
    )
    if err := req.Validate(); err != nil {
        syncFailuresCounter.WithLabelValues("req_invalid").Inc()
-       log.Error(
+       logutils.ZapLogger().Error(
            "[mailserver:DeliverMail] request invalid",
-           "peerID", peerID.String(),
-           "requestID", reqID.String(),
-           "err", err,
+           zap.Stringer("peerID", peerID),
+           zap.Stringer("requestID", reqID),
+           zap.Error(err),
        )
        s.sendHistoricMessageErrorResponse(peerID, reqID, fmt.Errorf("request is invalid: %v", err))
        return
@@ -483,10 +484,10 @@ func (s *mailServer) DeliverMail(peerID, reqID types.Hash, req MessagesRequestPa
    if s.exceedsPeerRequests(peerID) {
        deliveryFailuresCounter.WithLabelValues("peer_req_limit").Inc()
-       log.Error(
+       logutils.ZapLogger().Error(
            "[mailserver:DeliverMail] peer exceeded the limit",
-           "peerID", peerID.String(),
-           "requestID", reqID.String(),
+           zap.Stringer("peerID", peerID),
+           zap.Stringer("requestID", reqID),
        )
        s.sendHistoricMessageErrorResponse(peerID, reqID, fmt.Errorf("rate limit exceeded"))
        return
@@ -498,11 +499,11 @@ func (s *mailServer) DeliverMail(peerID, reqID types.Hash, req MessagesRequestPa
    iter, err := s.createIterator(req)
    if err != nil {
-       log.Error(
+       logutils.ZapLogger().Error(
            "[mailserver:DeliverMail] request failed",
-           "peerID", peerID.String(),
-           "requestID", reqID.String(),
-           "err", err,
+           zap.Stringer("peerID", peerID),
+           zap.Stringer("requestID", reqID),
+           zap.Error(err),
        )
        return
    }
@@ -524,11 +525,11 @@ func (s *mailServer) DeliverMail(peerID, reqID types.Hash, req MessagesRequestPa
            counter++
        }
        close(errCh)
-       log.Info(
+       logutils.ZapLogger().Info(
            "[mailserver:DeliverMail] finished sending bundles",
-           "peerID", peerID,
-           "requestID", reqID.String(),
-           "counter", counter,
+           zap.Stringer("peerID", peerID),
+           zap.Stringer("requestID", reqID),
+           zap.Int("counter", counter),
        )
    }()
@@ -546,11 +547,11 @@ func (s *mailServer) DeliverMail(peerID, reqID types.Hash, req MessagesRequestPa
    // Wait for the goroutine to finish the work. It may return an error.
    if err := <-errCh; err != nil {
        deliveryFailuresCounter.WithLabelValues("process").Inc()
-       log.Error(
+       logutils.ZapLogger().Error(
            "[mailserver:DeliverMail] error while processing",
-           "err", err,
-           "peerID", peerID,
-           "requestID", reqID,
+           zap.Stringer("peerID", peerID),
+           zap.Stringer("requestID", reqID),
+           zap.Error(err),
        )
        s.sendHistoricMessageErrorResponse(peerID, reqID, err)
        return
@@ -559,29 +560,29 @@ func (s *mailServer) DeliverMail(peerID, reqID types.Hash, req MessagesRequestPa
    // Processing of the request could be finished earlier due to iterator error.
    if err := iter.Error(); err != nil {
        deliveryFailuresCounter.WithLabelValues("iterator").Inc()
-       log.Error(
+       logutils.ZapLogger().Error(
            "[mailserver:DeliverMail] iterator failed",
-           "err", err,
-           "peerID", peerID,
-           "requestID", reqID,
+           zap.Stringer("peerID", peerID),
+           zap.Stringer("requestID", reqID),
+           zap.Error(err),
        )
        s.sendHistoricMessageErrorResponse(peerID, reqID, err)
        return
    }
-   log.Info(
+   logutils.ZapLogger().Info(
        "[mailserver:DeliverMail] sending historic message response",
-       "peerID", peerID,
-       "requestID", reqID,
-       "last", lastEnvelopeHash,
-       "next", nextPageCursor,
+       zap.Stringer("peerID", peerID),
+       zap.Stringer("requestID", reqID),
+       zap.Stringer("last", lastEnvelopeHash),
+       zap.Binary("next", nextPageCursor),
    )
    s.sendHistoricMessageResponse(peerID, reqID, lastEnvelopeHash, nextPageCursor)
}
func (s *mailServer) SyncMail(peerID types.Hash, req MessagesRequestPayload) error {
-   log.Info("Started syncing envelopes", "peer", peerID.String(), "req", req)
+   logutils.ZapLogger().Info("Started syncing envelopes", zap.Stringer("peer", peerID), zap.Any("req", req))
    requestID := fmt.Sprintf("%d-%d", time.Now().UnixNano(), rand.Intn(1000)) // nolint: gosec
@@ -590,7 +591,7 @@ func (s *mailServer) SyncMail(peerID types.Hash, req MessagesRequestPayload) err
    // Check rate limiting for a requesting peer.
    if s.exceedsPeerRequests(peerID) {
        syncFailuresCounter.WithLabelValues("req_per_sec_limit").Inc()
-       log.Error("Peer exceeded request per seconds limit", "peerID", peerID.String())
+       logutils.ZapLogger().Error("Peer exceeded request per seconds limit", zap.Stringer("peerID", peerID))
        return fmt.Errorf("requests per seconds limit exceeded")
    }
@@ -656,7 +657,7 @@ func (s *mailServer) SyncMail(peerID types.Hash, req MessagesRequestPayload) err
        return fmt.Errorf("LevelDB iterator failed: %v", err)
    }
-   log.Info("Finished syncing envelopes", "peer", peerID.String())
+   logutils.ZapLogger().Info("Finished syncing envelopes", zap.Stringer("peer", peerID))
    err = s.service.SendSyncResponse(
        peerID.Bytes(),
@@ -674,7 +675,7 @@ func (s *mailServer) SyncMail(peerID types.Hash, req MessagesRequestPayload) err
func (s *mailServer) Close() {
    if s.db != nil {
        if err := s.db.Close(); err != nil {
-           log.Error("closing database failed", "err", err)
+           logutils.ZapLogger().Error("closing database failed", zap.Error(err))
        }
    }
    if s.rateLimiter != nil {
@@ -698,7 +699,7 @@ func (s *mailServer) exceedsPeerRequests(peerID types.Hash) bool {
        return false
    }
-   log.Info("peerID exceeded the number of requests per second", "peerID", peerID.String())
+   logutils.ZapLogger().Info("peerID exceeded the number of requests per second", zap.Stringer("peerID", peerID))
    return true
}
@@ -746,10 +747,10 @@ func (s *mailServer) processRequestInBundles(
        lastEnvelopeHash types.Hash
    )
-   log.Info(
+   logutils.ZapLogger().Info(
        "[mailserver:processRequestInBundles] processing request",
-       "requestID", requestID,
-       "limit", limit,
+       zap.String("requestID", requestID),
+       zap.Int("limit", limit),
    )
    var topicsMap map[types.TopicType]bool
@@ -779,10 +780,10 @@ func (s *mailServer) processRequestInBundles(
            err = errors.New("either topics or bloom must be specified")
        }
        if err != nil {
-           log.Error(
+           logutils.ZapLogger().Error(
                "[mailserver:processRequestInBundles]Failed to get envelope from iterator",
-               "err", err,
-               "requestID", requestID,
+               zap.String("requestID", requestID),
+               zap.Error(err),
            )
            continue
        }
@@ -793,9 +794,10 @@ func (s *mailServer) processRequestInBundles(
        key, err := iter.DBKey()
        if err != nil {
-           log.Error(
+           logutils.ZapLogger().Error(
                "[mailserver:processRequestInBundles] failed getting key",
-               "requestID", requestID,
+               zap.String("requestID", requestID),
+               zap.Error(err),
            )
            break
@@ -839,13 +841,13 @@ func (s *mailServer) processRequestInBundles(
            processedEnvelopesSize += int64(bundleSize)
        }
-       log.Info(
+       logutils.ZapLogger().Info(
            "[mailserver:processRequestInBundles] publishing envelopes",
-           "requestID", requestID,
-           "batchesCount", len(batches),
-           "envelopeCount", processedEnvelopes,
-           "processedEnvelopesSize", processedEnvelopesSize,
-           "cursor", nextCursor,
+           zap.String("requestID", requestID),
+           zap.Int("batchesCount", len(batches)),
+           zap.Int("envelopeCount", processedEnvelopes),
+           zap.Int64("processedEnvelopesSize", processedEnvelopesSize),
+           zap.Binary("cursor", nextCursor),
        )
        // Publish
@@ -858,15 +860,15 @@ batchLoop:
        // the consumer of `output` channel exits prematurely.
        // In such a case, we should stop pushing batches and exit.
        case <-cancel:
-           log.Info(
+           logutils.ZapLogger().Info(
                "[mailserver:processRequestInBundles] failed to push all batches",
-               "requestID", requestID,
+               zap.String("requestID", requestID),
            )
            break batchLoop
        case <-time.After(timeout):
-           log.Error(
+           logutils.ZapLogger().Error(
                "[mailserver:processRequestInBundles] timed out pushing a batch",
-               "requestID", requestID,
+               zap.String("requestID", requestID),
            )
            break batchLoop
        }
@@ -875,9 +877,9 @@ batchLoop:
    envelopesCounter.Inc()
    sentEnvelopeBatchSizeMeter.Observe(float64(processedEnvelopesSize))
-   log.Info(
+   logutils.ZapLogger().Info(
        "[mailserver:processRequestInBundles] envelopes published",
-       "requestID", requestID,
+       zap.String("requestID", requestID),
    )
    close(output)
@@ -906,11 +908,11 @@ func (s *mailServer) sendHistoricMessageResponse(peerID, reqID, lastEnvelopeHash
    err := s.service.SendHistoricMessageResponse(peerID.Bytes(), payload)
    if err != nil {
        deliveryFailuresCounter.WithLabelValues("historic_msg_resp").Inc()
-       log.Error(
+       logutils.ZapLogger().Error(
            "[mailserver:DeliverMail] error sending historic message response",
-           "err", err,
-           "peerID", peerID,
-           "requestID", reqID,
+           zap.Stringer("peerID", peerID),
+           zap.Stringer("requestID", reqID),
+           zap.Error(err),
        )
    }
}
@@ -921,7 +923,7 @@ func (s *mailServer) sendHistoricMessageErrorResponse(peerID, reqID types.Hash,
    // if we can't report an error, probably something is wrong with p2p connection,
    // so we just print a log entry to document this sad fact
    if err != nil {
-       log.Error("Error while reporting error response", "err", err, "peerID", peerID.String())
+       logutils.ZapLogger().Error("Error while reporting error response", zap.Stringer("peerID", peerID), zap.Error(err))
    }
}

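One detail in the mailserver hunks: values that previously had `.String()` called inline (peer IDs, request IDs, envelope hashes) are now passed with `zap.Stringer`, which defers the `String()` call until the entry is actually encoded. A minimal sketch of the idea, with a toy type standing in for `types.Hash` (which implements `fmt.Stringer`):

```
package example

import (
    "fmt"

    "go.uber.org/zap"

    "github.com/status-im/status-go/logutils"
)

type requestID [4]byte

// String makes requestID an fmt.Stringer, like types.Hash in the real code.
func (r requestID) String() string { return fmt.Sprintf("%x", r[:]) }

// logRequest passes the value lazily: String() only runs if the entry is written.
func logRequest(id requestID) {
    logutils.ZapLogger().Info("processing request", zap.Stringer("requestID", id))
}
```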

@@ -1,19 +1,19 @@
package mailserver
import (
-   "fmt"
    "time"
    "github.com/syndtr/goleveldb/leveldb"
    "github.com/syndtr/goleveldb/leveldb/errors"
    "github.com/syndtr/goleveldb/leveldb/iterator"
    "github.com/syndtr/goleveldb/leveldb/util"
+   "go.uber.org/zap"
-   "github.com/ethereum/go-ethereum/log"
    "github.com/ethereum/go-ethereum/rlp"
    "github.com/status-im/status-go/common"
    "github.com/status-im/status-go/eth-node/types"
+   "github.com/status-im/status-go/logutils"
    waku "github.com/status-im/status-go/waku/common"
)
@@ -84,7 +84,7 @@ func NewLevelDB(dataDir string) (*LevelDB, error) {
    // Open opens an existing leveldb database
    db, err := leveldb.OpenFile(dataDir, nil)
    if _, corrupted := err.(*errors.ErrCorrupted); corrupted {
-       log.Info("database is corrupted trying to recover", "path", dataDir)
+       logutils.ZapLogger().Info("database is corrupted trying to recover", zap.String("path", dataDir))
        db, err = leveldb.RecoverFile(dataDir, nil)
    }
@@ -119,7 +119,7 @@ func (db *LevelDB) GetEnvelope(key *DBKey) ([]byte, error) {
func (db *LevelDB) updateArchivedEnvelopesCount() {
    if count, err := db.envelopesCount(); err != nil {
-       log.Warn("db query for envelopes count failed", "err", err)
+       logutils.ZapLogger().Warn("db query for envelopes count failed", zap.Error(err))
    } else {
        archivedEnvelopesGauge.WithLabelValues(db.name).Set(float64(count))
    }
@@ -210,13 +210,13 @@ func (db *LevelDB) SaveEnvelope(env types.Envelope) error {
    key := NewDBKey(env.Expiry()-env.TTL(), env.Topic(), env.Hash())
    rawEnvelope, err := rlp.EncodeToBytes(env.Unwrap())
    if err != nil {
-       log.Error(fmt.Sprintf("rlp.EncodeToBytes failed: %s", err))
+       logutils.ZapLogger().Error("rlp.EncodeToBytes failed", zap.Error(err))
        archivedErrorsCounter.WithLabelValues(db.name).Inc()
        return err
    }
    if err = db.ldb.Put(key.Bytes(), rawEnvelope, nil); err != nil {
-       log.Error(fmt.Sprintf("Writing to DB failed: %s", err))
+       logutils.ZapLogger().Error("writing to DB failed", zap.Error(err))
        archivedErrorsCounter.WithLabelValues(db.name).Inc()
    }
    archivedEnvelopesGauge.WithLabelValues(db.name).Inc()
@@ -238,7 +238,9 @@ func recoverLevelDBPanics(calleMethodName string) {
    // Recover from possible goleveldb panics
    if r := recover(); r != nil {
        if errString, ok := r.(string); ok {
-           log.Error(fmt.Sprintf("recovered from panic in %s: %s", calleMethodName, errString))
+           logutils.ZapLogger().Error("recovered from panic",
+               zap.String("calleMethodName", calleMethodName),
+               zap.String("errString", errString))
        }
    }
}

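`recoverLevelDBPanics` is designed to be deferred around goleveldb calls, which may panic with a string on corrupted data; the change above only swaps the log call inside it. A hedged sketch of how such a guard is typically deferred (the helper body mirrors the diff; the wrapped write is hypothetical):

```
package example

import (
    "go.uber.org/zap"

    "github.com/status-im/status-go/logutils"
)

// recoverStoragePanics mirrors recoverLevelDBPanics: it turns a string panic
// from the wrapped storage call into a structured error entry.
func recoverStoragePanics(calleMethodName string) {
    if r := recover(); r != nil {
        if errString, ok := r.(string); ok {
            logutils.ZapLogger().Error("recovered from panic",
                zap.String("calleMethodName", calleMethodName),
                zap.String("errString", errString))
        }
    }
}

// saveEnvelope guards a write that may panic inside the storage layer.
func saveEnvelope(put func() error) error {
    defer recoverStoragePanics("saveEnvelope")
    return put()
}
```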

@@ -7,6 +7,7 @@ import (
    "time"
    "github.com/lib/pq"
+   "go.uber.org/zap"
    // Import postgres driver
    _ "github.com/lib/pq"
@@ -15,9 +16,9 @@ import (
    bindata "github.com/status-im/migrate/v4/source/go_bindata"
    "github.com/status-im/status-go/common"
+   "github.com/status-im/status-go/logutils"
    "github.com/status-im/status-go/mailserver/migrations"
-   "github.com/ethereum/go-ethereum/log"
    "github.com/ethereum/go-ethereum/rlp"
    "github.com/status-im/status-go/eth-node/types"
@@ -84,7 +85,7 @@ func (i *PostgresDB) envelopesCount() (int, error) {
func (i *PostgresDB) updateArchivedEnvelopesCount() {
    if count, err := i.envelopesCount(); err != nil {
-       log.Warn("db query for envelopes count failed", "err", err)
+       logutils.ZapLogger().Warn("db query for envelopes count failed", zap.Error(err))
    } else {
        archivedEnvelopesGauge.WithLabelValues(i.name).Set(float64(count))
    }
@@ -262,7 +263,7 @@ func (i *PostgresDB) SaveEnvelope(env types.Envelope) error {
    key := NewDBKey(env.Expiry()-env.TTL(), topic, env.Hash())
    rawEnvelope, err := rlp.EncodeToBytes(env.Unwrap())
    if err != nil {
-       log.Error(fmt.Sprintf("rlp.EncodeToBytes failed: %s", err))
+       logutils.ZapLogger().Error("rlp.EncodeToBytes failed", zap.Error(err))
        archivedErrorsCounter.WithLabelValues(i.name).Inc()
        return err
    }


@@ -5,12 +5,16 @@ import (
    "net/http"
    "time"
-   "github.com/ethereum/go-ethereum/log"
+   "go.uber.org/zap"
    "github.com/ethereum/go-ethereum/metrics"
    gethprom "github.com/ethereum/go-ethereum/metrics/prometheus"
+   "github.com/status-im/status-go/logutils"
    prom "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
+   "github.com/status-im/status-go/common"
)
// Server runs and controls a HTTP pprof interface.
@@ -36,7 +40,7 @@ func healthHandler() http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        _, err := w.Write([]byte("OK"))
        if err != nil {
-           log.Error("health handler error", "err", err)
+           logutils.ZapLogger().Error("health handler error", zap.Error(err))
        }
    })
}
@@ -55,5 +59,6 @@ func Handler(reg metrics.Registry) http.Handler {
// Listen starts the HTTP server in the background.
func (p *Server) Listen() {
-   log.Info("metrics server stopped", "err", p.server.ListenAndServe())
+   defer common.LogOnPanic()
+   logutils.ZapLogger().Info("metrics server stopped", zap.Error(p.server.ListenAndServe()))
}


@@ -4,6 +4,8 @@ import (
    "errors"
    "strings"
+   "go.uber.org/zap"
    "github.com/ethereum/go-ethereum/node"
    "github.com/ethereum/go-ethereum/p2p"
@@ -71,7 +73,7 @@ func calculatePeerCounts(server *p2p.Server) {
    for _, p := range peers {
        labels, err := labelsFromNodeName(p.Fullname())
        if err != nil {
-           logger.Warn("failed parsing peer name", "error", err, "name", p.Name())
+           logger.Warn("failed parsing peer name", zap.String("name", p.Name()), zap.Error(err))
            continue
        }
        nodePeersGauge.With(labels).Inc()


@@ -4,14 +4,16 @@ import (
    "context"
    "errors"
-   "github.com/ethereum/go-ethereum/log"
+   "go.uber.org/zap"
    "github.com/ethereum/go-ethereum/node"
    "github.com/ethereum/go-ethereum/p2p"
    "github.com/status-im/status-go/common"
+   "github.com/status-im/status-go/logutils"
)
// All general log messages in this package should be routed through this logger.
-var logger = log.New("package", "status-go/metrics/node")
+var logger = logutils.ZapLogger().Named("metrics.node")
// SubscribeServerEvents subscribes to server and listens to
// PeerEventTypeAdd and PeerEventTypeDrop events.
@@ -50,13 +52,13 @@ func SubscribeServerEvents(ctx context.Context, node *node.Node) error {
            go func() {
                defer common.LogOnPanic()
                if err := updateNodeMetrics(node, event.Type); err != nil {
-                   logger.Error("failed to update node metrics", "err", err)
+                   logger.Error("failed to update node metrics", zap.Error(err))
                }
            }()
        }
    case err := <-subscription.Err():
        if err != nil {
-           logger.Error("Subscription failed", "err", err)
+           logger.Error("Subscription failed", zap.Error(err))
        }
        return err
    case <-ctx.Done():

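Package-level geth loggers created with `log.New("package", ...)` become named children of the shared zap logger, so every entry from the package carries that name. A sketch of the same pattern for a hypothetical package:

```
package example

import (
    "go.uber.org/zap"

    "github.com/status-im/status-go/logutils"
)

// logger is package-scoped; the "metrics.example" name is attached to every entry.
var logger = logutils.ZapLogger().Named("metrics.example")

func reportPeerCount(count int) {
    logger.Info("peer count updated", zap.Int("count", count))
}
```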

@@ -1,16 +1,16 @@
-package statusgo
+package callog
import (
    "fmt"
    "reflect"
    "regexp"
    "runtime"
-   "runtime/debug"
    "strings"
    "time"
-   "github.com/ethereum/go-ethereum/log"
+   "go.uber.org/zap"
-   "github.com/status-im/status-go/logutils/requestlog"
+   "github.com/status-im/status-go/logutils"
)
var sensitiveKeys = []string{
@@ -46,7 +46,7 @@ func getShortFunctionName(fn any) string {
    return parts[len(parts)-1]
}
-// call executes the given function and logs request details if logging is enabled
+// Call executes the given function and logs request details if logging is enabled
//
// Parameters:
// - fn: The function to be executed
@@ -58,21 +58,21 @@
// Functionality:
// 1. Sets up panic recovery to log and re-panic
// 2. Records start time if request logging is enabled
-// 3. Uses reflection to call the given function
+// 3. Uses reflection to Call the given function
// 4. If request logging is enabled, logs method name, parameters, response, and execution duration
// 5. Removes sensitive information before logging
-func call(fn any, params ...any) any {
+func Call(logger *zap.Logger, fn any, params ...any) any {
    defer func() {
        if r := recover(); r != nil {
-           // we're not sure if request logging is enabled here, so we log it use default logger
-           log.Error("panic found in call", "error", r, "stacktrace", string(debug.Stack()))
+           logutils.ZapLogger().Error("panic found in call", zap.Any("error", r), zap.Stack("stacktrace"))
            panic(r)
        }
    }()
    var startTime time.Time
-   if requestlog.IsRequestLoggingEnabled() {
+   requestLoggingEnabled := logger != nil
+   if requestLoggingEnabled {
        startTime = time.Now()
    }
@@ -95,19 +95,25 @@ func call(fn any, params ...any) any {
        resp = results[0].Interface()
    }
-   if requestlog.IsRequestLoggingEnabled() {
+   if requestLoggingEnabled {
        duration := time.Since(startTime)
        methodName := getShortFunctionName(fn)
        paramsString := removeSensitiveInfo(fmt.Sprintf("%+v", params))
        respString := removeSensitiveInfo(fmt.Sprintf("%+v", resp))
-       requestlog.GetRequestLogger().Debug(methodName, "params", paramsString, "resp", respString, "duration", duration)
+       logger.Debug("call",
+           zap.String("method", methodName),
+           zap.String("params", paramsString),
+           zap.String("resp", respString),
+           zap.Duration("duration", duration),
+       )
    }
    return resp
}
-func callWithResponse(fn any, params ...any) string {
-   resp := call(fn, params...)
+func CallWithResponse(logger *zap.Logger, fn any, params ...any) string {
+   resp := Call(logger, fn, params...)
    if resp == nil {
        return ""
    }

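With the move to `mobile/callog`, the caller decides whether request logging happens by passing a logger: a nil logger skips the timing and logging work, while a non-nil one records method name, sanitized params, response, and duration at debug level. A hedged sketch of calling the exported helpers; the wrapped function is hypothetical:

```
package example

import (
    "go.uber.org/zap"

    "github.com/status-im/status-go/mobile/callog"
)

func addExclamation(s string) string { return s + "!" }

// demo returns the wrapped function's string result; pass nil to skip logging.
func demo(requestLogger *zap.Logger) string {
    return callog.CallWithResponse(requestLogger, addExclamation, "hello")
}
```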

@@ -1,19 +1,16 @@
-package statusgo
+package callog
import (
-   "encoding/json"
    "fmt"
+   "os"
    "strings"
    "testing"
-   "github.com/ethereum/go-ethereum/log"
    "github.com/stretchr/testify/require"
    "github.com/status-im/status-go/logutils/requestlog"
-   "github.com/status-im/status-go/multiaccounts"
-   "github.com/status-im/status-go/multiaccounts/settings"
-   "github.com/status-im/status-go/signal"
+   "github.com/ethereum/go-ethereum/log"
)
func TestRemoveSensitiveInfo(t *testing.T) {
@@ -60,17 +57,14 @@
}
func TestCall(t *testing.T) {
-   // Enable request logging
-   requestlog.EnableRequestLogging(true)
-   // Create a mock logger to capture log output
-   var logOutput string
-   mockLogger := log.New()
-   mockLogger.SetHandler(log.FuncHandler(func(r *log.Record) error {
-       logOutput += r.Msg + fmt.Sprintf("%s", r.Ctx...)
-       return nil
-   }))
-   requestlog.NewRequestLogger().SetHandler(mockLogger.GetHandler())
+   // Create a temporary file for logging
+   tempLogFile, err := os.CreateTemp(t.TempDir(), "TestCall*.log")
+   require.NoError(t, err)
+   // Enable request logging
+   logger, err := requestlog.CreateRequestLogger(tempLogFile.Name())
+   require.NoError(t, err)
+   require.NotNil(t, logger)
    // Test case 1: Normal execution
    testFunc := func(param string) string {
@@ -79,13 +73,18 @@ func TestCall(t *testing.T) {
    testParam := "test input"
    expectedResult := "test result: test input"
-   result := callWithResponse(testFunc, testParam)
+   result := CallWithResponse(logger, testFunc, testParam)
    // Check the result
    if result != expectedResult {
        t.Errorf("Expected result %s, got %s", expectedResult, result)
    }
+   // Read the log file
+   logData, err := os.ReadFile(tempLogFile.Name())
+   require.NoError(t, err)
+   logOutput := string(logData)
    // Check if the log contains expected information
    expectedLogParts := []string{getShortFunctionName(testFunc), "params", testParam, "resp", expectedResult}
    for _, part := range expectedLogParts {
@@ -94,19 +93,27 @@ func TestCall(t *testing.T) {
        }
    }
+   // Create a mock logger to capture log output
+   mockLogger := log.New()
+   mockLogger.SetHandler(log.FuncHandler(func(r *log.Record) error {
+       logOutput += r.Msg + fmt.Sprintf("%s", r.Ctx...)
+       return nil
+   }))
    // Test case 2: Panic -> recovery -> re-panic
    oldRootHandler := log.Root().GetHandler()
    defer log.Root().SetHandler(oldRootHandler)
    log.Root().SetHandler(mockLogger.GetHandler())
    // Clear log output for next test
    logOutput = ""
    e := "test panic"
    panicFunc := func() {
        panic(e)
    }
    require.PanicsWithValue(t, e, func() {
-       call(panicFunc)
+       Call(logger, panicFunc)
    })
    // Check if the panic was logged
@@ -121,35 +128,11 @@ func TestCall(t *testing.T) {
    }
}
+func initializeApplication(requestJSON string) string {
+   return ""
+}
func TestGetFunctionName(t *testing.T) {
    fn := getShortFunctionName(initializeApplication)
    require.Equal(t, "initializeApplication", fn)
}
-type testSignalHandler struct {
-   receivedSignal string
-}
-func (t *testSignalHandler) HandleSignal(data string) {
-   t.receivedSignal = data
-}
-func TestSetMobileSignalHandler(t *testing.T) {
-   // Setup
-   handler := &testSignalHandler{}
-   SetMobileSignalHandler(handler)
-   t.Cleanup(signal.ResetMobileSignalHandler)
-   // Test data
-   testAccount := &multiaccounts.Account{Name: "test"}
-   testSettings := &settings.Settings{KeyUID: "0x1"}
-   testEnsUsernames := json.RawMessage(`{"test": "test"}`)
-   // Action
-   signal.SendLoggedIn(testAccount, testSettings, testEnsUsernames, nil)
-   // Assertions
-   require.Contains(t, handler.receivedSignal, `"key-uid":"0x1"`, "Signal should contain the correct KeyUID")
-   require.Contains(t, handler.receivedSignal, `"name":"test"`, "Signal should contain the correct account name")
-   require.Contains(t, handler.receivedSignal, `"ensUsernames":{"test":"test"}`, "Signal should contain the correct ENS usernames")
-}


@@ -22,7 +22,7 @@ func TestInitLogging(t *testing.T) {
    require.Equal(t, `{"error":""}`, response)
    _, err := os.Stat(gethLogFile)
    require.NoError(t, err)
-   require.True(t, requestlog.IsRequestLoggingEnabled())
+   require.NotNil(t, requestlog.GetRequestLogger())
    // requests log file should not be created yet
    _, err = os.Stat(requestsLogFile)


@ -7,9 +7,9 @@ import (
"fmt" "fmt"
"unsafe" "unsafe"
"go.uber.org/zap"
validator "gopkg.in/go-playground/validator.v9" validator "gopkg.in/go-playground/validator.v9"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/signer/core/apitypes" "github.com/ethereum/go-ethereum/signer/core/apitypes"
"github.com/status-im/zxcvbn-go" "github.com/status-im/zxcvbn-go"
@ -47,8 +47,18 @@ import (
"github.com/status-im/status-go/services/typeddata" "github.com/status-im/status-go/services/typeddata"
"github.com/status-im/status-go/signal" "github.com/status-im/status-go/signal"
"github.com/status-im/status-go/transactions" "github.com/status-im/status-go/transactions"
"github.com/status-im/status-go/mobile/callog"
) )
func call(fn any, params ...any) any {
return callog.Call(requestlog.GetRequestLogger(), fn, params...)
}
func callWithResponse(fn any, params ...any) string {
return callog.CallWithResponse(requestlog.GetRequestLogger(), fn, params...)
}
type InitializeApplicationResponse struct { type InitializeApplicationResponse struct {
Accounts []multiaccounts.Account `json:"accounts"` Accounts []multiaccounts.Account `json:"accounts"`
CentralizedMetricsInfo *centralizedmetrics.MetricsInfo `json:"centralizedMetricsInfo"` CentralizedMetricsInfo *centralizedmetrics.MetricsInfo `json:"centralizedMetricsInfo"`
@ -366,19 +376,19 @@ func login(accountData, password, configJSON string) error {
} }
api.RunAsync(func() error { api.RunAsync(func() error {
log.Debug("start a node with account", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("start a node with account", zap.String("key-uid", account.KeyUID))
err := statusBackend.UpdateNodeConfigFleet(account, password, &conf) err := statusBackend.UpdateNodeConfigFleet(account, password, &conf)
if err != nil { if err != nil {
log.Error("failed to update node config fleet", "key-uid", account.KeyUID, "error", err) logutils.ZapLogger().Error("failed to update node config fleet", zap.String("key-uid", account.KeyUID), zap.Error(err))
return statusBackend.LoggedIn(account.KeyUID, err) return statusBackend.LoggedIn(account.KeyUID, err)
} }
err = statusBackend.StartNodeWithAccount(account, password, &conf, nil) err = statusBackend.StartNodeWithAccount(account, password, &conf, nil)
if err != nil { if err != nil {
log.Error("failed to start a node", "key-uid", account.KeyUID, "error", err) logutils.ZapLogger().Error("failed to start a node", zap.String("key-uid", account.KeyUID), zap.Error(err))
return err return err
} }
log.Debug("started a node with", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("started a node with", zap.String("key-uid", account.KeyUID))
return nil return nil
}) })
@ -431,18 +441,27 @@ func createAccountAndLogin(requestJSON string) string {
} }
api.RunAsync(func() error { api.RunAsync(func() error {
log.Debug("starting a node and creating config") logutils.ZapLogger().Debug("starting a node and creating config")
_, err := statusBackend.CreateAccountAndLogin(&request) _, err := statusBackend.CreateAccountAndLogin(&request)
if err != nil { if err != nil {
log.Error("failed to create account", "error", err) logutils.ZapLogger().Error("failed to create account", zap.Error(err))
return err return err
} }
log.Debug("started a node, and created account") logutils.ZapLogger().Debug("started a node, and created account")
return nil return nil
}) })
return makeJSONResponse(nil) return makeJSONResponse(nil)
} }
func AcceptTerms() string {
return callWithResponse(acceptTerms)
}
func acceptTerms() string {
err := statusBackend.AcceptTerms()
return makeJSONResponse(err)
}
func LoginAccount(requestJSON string) string { func LoginAccount(requestJSON string) string {
return callWithResponse(loginAccount, requestJSON) return callWithResponse(loginAccount, requestJSON)
} }
@ -462,10 +481,10 @@ func loginAccount(requestJSON string) string {
api.RunAsync(func() error { api.RunAsync(func() error {
err := statusBackend.LoginAccount(&request) err := statusBackend.LoginAccount(&request)
if err != nil { if err != nil {
log.Error("loginAccount failed", "error", err) logutils.ZapLogger().Error("loginAccount failed", zap.Error(err))
return err return err
} }
log.Debug("loginAccount started node") logutils.ZapLogger().Debug("loginAccount started node")
return nil return nil
}) })
return makeJSONResponse(nil) return makeJSONResponse(nil)
@ -488,7 +507,7 @@ func restoreAccountAndLogin(requestJSON string) string {
} }
api.RunAsync(func() error { api.RunAsync(func() error {
log.Debug("starting a node and restoring account") logutils.ZapLogger().Debug("starting a node and restoring account")
if request.Keycard != nil { if request.Keycard != nil {
_, err = statusBackend.RestoreKeycardAccountAndLogin(&request) _, err = statusBackend.RestoreKeycardAccountAndLogin(&request)
@ -497,10 +516,10 @@ func restoreAccountAndLogin(requestJSON string) string {
} }
if err != nil { if err != nil {
log.Error("failed to restore account", "error", err) logutils.ZapLogger().Error("failed to restore account", zap.Error(err))
return err return err
} }
log.Debug("started a node, and restored account") logutils.ZapLogger().Debug("started a node, and restored account")
return nil return nil
}) })
@ -537,13 +556,13 @@ func SaveAccountAndLogin(accountData, password, settingsJSON, configJSON, subacc
} }
api.RunAsync(func() error { api.RunAsync(func() error {
log.Debug("starting a node, and saving account with configuration", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("starting a node, and saving account with configuration", zap.String("key-uid", account.KeyUID))
err := statusBackend.StartNodeWithAccountAndInitialConfig(account, password, settings, &conf, subaccs, nil) err := statusBackend.StartNodeWithAccountAndInitialConfig(account, password, settings, &conf, subaccs, nil)
if err != nil { if err != nil {
log.Error("failed to start node and save account", "key-uid", account.KeyUID, "error", err) logutils.ZapLogger().Error("failed to start node and save account", zap.String("key-uid", account.KeyUID), zap.Error(err))
return err return err
} }
log.Debug("started a node, and saved account", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("started a node, and saved account", zap.String("key-uid", account.KeyUID))
return nil return nil
}) })
return makeJSONResponse(nil) return makeJSONResponse(nil)
@ -625,13 +644,13 @@ func SaveAccountAndLoginWithKeycard(accountData, password, settingsJSON, configJ
} }
api.RunAsync(func() error { api.RunAsync(func() error {
log.Debug("starting a node, and saving account with configuration", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("starting a node, and saving account with configuration", zap.String("key-uid", account.KeyUID))
err := statusBackend.SaveAccountAndStartNodeWithKey(account, password, settings, &conf, subaccs, keyHex) err := statusBackend.SaveAccountAndStartNodeWithKey(account, password, settings, &conf, subaccs, keyHex)
if err != nil { if err != nil {
log.Error("failed to start node and save account", "key-uid", account.KeyUID, "error", err) logutils.ZapLogger().Error("failed to start node and save account", zap.String("key-uid", account.KeyUID), zap.Error(err))
return err return err
} }
log.Debug("started a node, and saved account", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("started a node, and saved account", zap.String("key-uid", account.KeyUID))
return nil return nil
}) })
return makeJSONResponse(nil) return makeJSONResponse(nil)
@ -652,13 +671,13 @@ func LoginWithKeycard(accountData, password, keyHex string, configJSON string) s
return makeJSONResponse(err) return makeJSONResponse(err)
} }
api.RunAsync(func() error { api.RunAsync(func() error {
log.Debug("start a node with account", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("start a node with account", zap.String("key-uid", account.KeyUID))
err := statusBackend.StartNodeWithKey(account, password, keyHex, &conf) err := statusBackend.StartNodeWithKey(account, password, keyHex, &conf)
if err != nil { if err != nil {
log.Error("failed to start a node", "key-uid", account.KeyUID, "error", err) logutils.ZapLogger().Error("failed to start a node", zap.String("key-uid", account.KeyUID), zap.Error(err))
return err return err
} }
log.Debug("started a node with", "key-uid", account.KeyUID) logutils.ZapLogger().Debug("started a node with", zap.String("key-uid", account.KeyUID))
return nil return nil
}) })
return makeJSONResponse(nil) return makeJSONResponse(nil)
@ -946,7 +965,7 @@ func writeHeapProfile(dataDir string) string { //nolint: deadcode
func makeJSONResponse(err error) string { func makeJSONResponse(err error) string {
errString := "" errString := ""
if err != nil { if err != nil {
log.Error("error in makeJSONResponse", "error", err) logutils.ZapLogger().Error("error in makeJSONResponse", zap.Error(err))
errString = err.Error() errString = err.Error()
} }
@ -1641,7 +1660,7 @@ func EncodeTransfer(to string, value string) string {
func encodeTransfer(to string, value string) string { func encodeTransfer(to string, value string) string {
result, err := abi_spec.EncodeTransfer(to, value) result, err := abi_spec.EncodeTransfer(to, value)
if err != nil { if err != nil {
log.Error("failed to encode transfer", "to", to, "value", value, "error", err) logutils.ZapLogger().Error("failed to encode transfer", zap.String("to", to), zap.String("value", value), zap.Error(err))
return "" return ""
} }
return result return result
@ -1654,7 +1673,7 @@ func EncodeFunctionCall(method string, paramsJSON string) string {
func encodeFunctionCall(method string, paramsJSON string) string { func encodeFunctionCall(method string, paramsJSON string) string {
result, err := abi_spec.Encode(method, paramsJSON) result, err := abi_spec.Encode(method, paramsJSON)
if err != nil { if err != nil {
log.Error("failed to encode function call", "method", method, "paramsJSON", paramsJSON, "error", err) logutils.ZapLogger().Error("failed to encode function call", zap.String("method", method), zap.String("paramsJSON", paramsJSON), zap.Error(err))
return "" return ""
} }
return result return result
@ -1671,17 +1690,17 @@ func decodeParameters(decodeParamJSON string) string {
}{} }{}
err := json.Unmarshal([]byte(decodeParamJSON), &decodeParam) err := json.Unmarshal([]byte(decodeParamJSON), &decodeParam)
if err != nil { if err != nil {
log.Error("failed to unmarshal json when decoding parameters", "decodeParamJSON", decodeParamJSON, "error", err) logutils.ZapLogger().Error("failed to unmarshal json when decoding parameters", zap.String("decodeParamJSON", decodeParamJSON), zap.Error(err))
return "" return ""
} }
result, err := abi_spec.Decode(decodeParam.BytesString, decodeParam.Types) result, err := abi_spec.Decode(decodeParam.BytesString, decodeParam.Types)
if err != nil { if err != nil {
log.Error("failed to decode parameters", "decodeParamJSON", decodeParamJSON, "error", err) logutils.ZapLogger().Error("failed to decode parameters", zap.String("decodeParamJSON", decodeParamJSON), zap.Error(err))
return "" return ""
} }
bytes, err := json.Marshal(result) bytes, err := json.Marshal(result)
if err != nil { if err != nil {
log.Error("failed to marshal result", "result", result, "decodeParamJSON", decodeParamJSON, "error", err) logutils.ZapLogger().Error("failed to marshal result", zap.Any("result", result), zap.String("decodeParamJSON", decodeParamJSON), zap.Error(err))
return "" return ""
} }
return string(bytes) return string(bytes)
@ -1714,7 +1733,7 @@ func Utf8ToHex(str string) string {
func utf8ToHex(str string) string { func utf8ToHex(str string) string {
hexString, err := abi_spec.Utf8ToHex(str) hexString, err := abi_spec.Utf8ToHex(str)
if err != nil { if err != nil {
log.Error("failed to convert utf8 to hex", "str", str, "error", err) logutils.ZapLogger().Error("failed to convert utf8 to hex", zap.String("str", str), zap.Error(err))
} }
return hexString return hexString
} }
@ -1726,7 +1745,7 @@ func HexToUtf8(hexString string) string {
func hexToUtf8(hexString string) string { func hexToUtf8(hexString string) string {
str, err := abi_spec.HexToUtf8(hexString) str, err := abi_spec.HexToUtf8(hexString)
if err != nil { if err != nil {
log.Error("failed to convert hex to utf8", "hexString", hexString, "error", err) logutils.ZapLogger().Error("failed to convert hex to utf8", zap.String("hexString", hexString), zap.Error(err))
} }
return str return str
} }
@ -1738,7 +1757,7 @@ func CheckAddressChecksum(address string) string {
func checkAddressChecksum(address string) string { func checkAddressChecksum(address string) string {
valid, err := abi_spec.CheckAddressChecksum(address) valid, err := abi_spec.CheckAddressChecksum(address)
if err != nil { if err != nil {
log.Error("failed to invoke check address checksum", "address", address, "error", err) logutils.ZapLogger().Error("failed to invoke check address checksum", zap.String("address", address), zap.Error(err))
} }
result, _ := json.Marshal(valid) result, _ := json.Marshal(valid)
return string(result) return string(result)
@ -1751,7 +1770,7 @@ func IsAddress(address string) string {
func isAddress(address string) string { func isAddress(address string) string {
valid, err := abi_spec.IsAddress(address) valid, err := abi_spec.IsAddress(address)
if err != nil { if err != nil {
log.Error("failed to invoke IsAddress", "address", address, "error", err) logutils.ZapLogger().Error("failed to invoke IsAddress", zap.String("address", address), zap.Error(err))
} }
result, _ := json.Marshal(valid) result, _ := json.Marshal(valid)
return string(result) return string(result)
@ -1764,7 +1783,7 @@ func ToChecksumAddress(address string) string {
func toChecksumAddress(address string) string { func toChecksumAddress(address string) string {
address, err := abi_spec.ToChecksumAddress(address) address, err := abi_spec.ToChecksumAddress(address)
if err != nil { if err != nil {
log.Error("failed to convert to checksum address", "address", address, "error", err) logutils.ZapLogger().Error("failed to convert to checksum address", zap.String("address", address), zap.Error(err))
} }
return address return address
} }
@ -1796,7 +1815,7 @@ func InitLogging(logSettingsJSON string) string {
} }
if err = logutils.OverrideRootLogWithConfig(logSettings.LogSettings, false); err == nil { if err = logutils.OverrideRootLogWithConfig(logSettings.LogSettings, false); err == nil {
log.Info("logging initialised", "logSettings", logSettingsJSON) logutils.ZapLogger().Info("logging initialised", zap.String("logSettings", logSettingsJSON))
} }
if logSettings.LogRequestGo { if logSettings.LogRequestGo {
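
The hunks above all apply the same mechanical pattern: go-ethereum's variadic key/value logger is replaced by the shared zap logger from `logutils`, with each key/value pair becoming a typed zap field. A minimal sketch of the before/after shape (the helper function itself is illustrative, not part of the diff):

```go
package example

import (
	"go.uber.org/zap"

	"github.com/status-im/status-go/logutils"
)

// logEncodeError is an illustrative helper showing the call-site conversion.
func logEncodeError(to, value string, err error) {
	// Before (geth log): log.Error("failed to encode transfer", "to", to, "value", value, "error", err)
	// After: typed fields on the process-wide zap logger.
	logutils.ZapLogger().Error("failed to encode transfer",
		zap.String("to", to),
		zap.String("value", value),
		zap.Error(err),
	)
}
```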


@ -2,6 +2,7 @@ package statusgo
import ( import (
"github.com/status-im/status-go/api" "github.com/status-im/status-go/api"
"github.com/status-im/status-go/logutils"
) )
var statusBackend = api.NewGethStatusBackend() var statusBackend = api.NewGethStatusBackend(logutils.ZapLogger())

mobile/status_test.go (new file)

@ -0,0 +1,40 @@
package statusgo
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/require"
"github.com/status-im/status-go/multiaccounts"
"github.com/status-im/status-go/multiaccounts/settings"
"github.com/status-im/status-go/signal"
)
type testSignalHandler struct {
receivedSignal string
}
func (t *testSignalHandler) HandleSignal(data string) {
t.receivedSignal = data
}
func TestSetMobileSignalHandler(t *testing.T) {
// Setup
handler := &testSignalHandler{}
SetMobileSignalHandler(handler)
t.Cleanup(signal.ResetMobileSignalHandler)
// Test data
testAccount := &multiaccounts.Account{Name: "test"}
testSettings := &settings.Settings{KeyUID: "0x1"}
testEnsUsernames := json.RawMessage(`{"test": "test"}`)
// Action
signal.SendLoggedIn(testAccount, testSettings, testEnsUsernames, nil)
// Assertions
require.Contains(t, handler.receivedSignal, `"key-uid":"0x1"`, "Signal should contain the correct KeyUID")
require.Contains(t, handler.receivedSignal, `"name":"test"`, "Signal should contain the correct account name")
require.Contains(t, handler.receivedSignal, `"ensUsernames":{"test":"test"}`, "Signal should contain the correct ENS usernames")
}


@ -5,9 +5,9 @@ import (
"database/sql" "database/sql"
"encoding/json" "encoding/json"
"github.com/ethereum/go-ethereum/log"
"github.com/status-im/status-go/common/dbsetup" "github.com/status-im/status-go/common/dbsetup"
"github.com/status-im/status-go/images" "github.com/status-im/status-go/images"
"github.com/status-im/status-go/logutils"
"github.com/status-im/status-go/multiaccounts/common" "github.com/status-im/status-go/multiaccounts/common"
"github.com/status-im/status-go/multiaccounts/migrations" "github.com/status-im/status-go/multiaccounts/migrations"
"github.com/status-im/status-go/protocol/protobuf" "github.com/status-im/status-go/protocol/protobuf"
@ -29,6 +29,9 @@ type Account struct {
Images []images.IdentityImage `json:"images"` Images []images.IdentityImage `json:"images"`
KDFIterations int `json:"kdfIterations,omitempty"` KDFIterations int `json:"kdfIterations,omitempty"`
CustomizationColorClock uint64 `json:"-"` CustomizationColorClock uint64 `json:"-"`
// HasAcceptedTerms will be set to true when the first account is created.
HasAcceptedTerms bool `json:"hasAcceptedTerms"`
} }
func (a *Account) RefersToKeycard() bool { func (a *Account) RefersToKeycard() bool {
@ -145,7 +148,7 @@ func (db *Database) GetAccountKDFIterationsNumber(keyUID string) (kdfIterationsN
} }
func (db *Database) GetAccounts() (rst []Account, err error) { func (db *Database) GetAccounts() (rst []Account, err error) {
rows, err := db.db.Query("SELECT a.name, a.loginTimestamp, a.identicon, a.colorHash, a.colorId, a.customizationColor, a.customizationColorClock, a.keycardPairing, a.keyUid, a.kdfIterations, ii.name, ii.image_payload, ii.width, ii.height, ii.file_size, ii.resize_target, ii.clock FROM accounts AS a LEFT JOIN identity_images AS ii ON ii.key_uid = a.keyUid ORDER BY loginTimestamp DESC") rows, err := db.db.Query("SELECT a.name, a.loginTimestamp, a.identicon, a.colorHash, a.colorId, a.customizationColor, a.customizationColorClock, a.keycardPairing, a.keyUid, a.kdfIterations, a.hasAcceptedTerms, ii.name, ii.image_payload, ii.width, ii.height, ii.file_size, ii.resize_target, ii.clock FROM accounts AS a LEFT JOIN identity_images AS ii ON ii.key_uid = a.keyUid ORDER BY loginTimestamp DESC")
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -179,6 +182,7 @@ func (db *Database) GetAccounts() (rst []Account, err error) {
&acc.KeycardPairing, &acc.KeycardPairing,
&acc.KeyUID, &acc.KeyUID,
&acc.KDFIterations, &acc.KDFIterations,
&acc.HasAcceptedTerms,
&iiName, &iiName,
&ii.Payload, &ii.Payload,
&iiWidth, &iiWidth,
@ -236,8 +240,14 @@ func (db *Database) GetAccounts() (rst []Account, err error) {
return rst, nil return rst, nil
} }
func (db *Database) GetAccountsCount() (int, error) {
var count int
err := db.db.QueryRow("SELECT COUNT(1) FROM accounts").Scan(&count)
return count, err
}
func (db *Database) GetAccount(keyUID string) (*Account, error) { func (db *Database) GetAccount(keyUID string) (*Account, error) {
rows, err := db.db.Query("SELECT a.name, a.loginTimestamp, a.identicon, a.colorHash, a.colorId, a.customizationColor, a.customizationColorClock, a.keycardPairing, a.keyUid, a.kdfIterations, ii.key_uid, ii.name, ii.image_payload, ii.width, ii.height, ii.file_size, ii.resize_target, ii.clock FROM accounts AS a LEFT JOIN identity_images AS ii ON ii.key_uid = a.keyUid WHERE a.keyUid = ? ORDER BY loginTimestamp DESC", keyUID) rows, err := db.db.Query("SELECT a.name, a.loginTimestamp, a.identicon, a.colorHash, a.colorId, a.customizationColor, a.customizationColorClock, a.keycardPairing, a.keyUid, a.kdfIterations, a.hasAcceptedTerms, ii.key_uid, ii.name, ii.image_payload, ii.width, ii.height, ii.file_size, ii.resize_target, ii.clock FROM accounts AS a LEFT JOIN identity_images AS ii ON ii.key_uid = a.keyUid WHERE a.keyUid = ? ORDER BY loginTimestamp DESC", keyUID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -273,6 +283,7 @@ func (db *Database) GetAccount(keyUID string) (*Account, error) {
&acc.KeycardPairing, &acc.KeycardPairing,
&acc.KeyUID, &acc.KeyUID,
&acc.KDFIterations, &acc.KDFIterations,
&acc.HasAcceptedTerms,
&iiKeyUID, &iiKeyUID,
&iiName, &iiName,
&ii.Payload, &ii.Payload,
@ -323,7 +334,7 @@ func (db *Database) SaveAccount(account Account) error {
account.KDFIterations = dbsetup.ReducedKDFIterationsNumber account.KDFIterations = dbsetup.ReducedKDFIterationsNumber
} }
_, err = db.db.Exec("INSERT OR REPLACE INTO accounts (name, identicon, colorHash, colorId, customizationColor, customizationColorClock, keycardPairing, keyUid, kdfIterations, loginTimestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)", account.Name, account.Identicon, colorHash, account.ColorID, account.CustomizationColor, account.CustomizationColorClock, account.KeycardPairing, account.KeyUID, account.KDFIterations, account.Timestamp) _, err = db.db.Exec("INSERT OR REPLACE INTO accounts (name, identicon, colorHash, colorId, customizationColor, customizationColorClock, keycardPairing, keyUid, kdfIterations, loginTimestamp, hasAcceptedTerms) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)", account.Name, account.Identicon, colorHash, account.ColorID, account.CustomizationColor, account.CustomizationColorClock, account.KeycardPairing, account.KeyUID, account.KDFIterations, account.Timestamp, account.HasAcceptedTerms)
if err != nil { if err != nil {
return err return err
} }
@ -340,6 +351,11 @@ func (db *Database) UpdateDisplayName(keyUID string, displayName string) error {
return err return err
} }
func (db *Database) UpdateHasAcceptedTerms(keyUID string, hasAcceptedTerms bool) error {
_, err := db.db.Exec("UPDATE accounts SET hasAcceptedTerms = ? WHERE keyUid = ?", hasAcceptedTerms, keyUID)
return err
}
func (db *Database) UpdateAccount(account Account) error { func (db *Database) UpdateAccount(account Account) error {
colorHash, err := json.Marshal(account.ColorHash) colorHash, err := json.Marshal(account.ColorHash)
if err != nil { if err != nil {
@ -350,7 +366,7 @@ func (db *Database) UpdateAccount(account Account) error {
account.KDFIterations = dbsetup.ReducedKDFIterationsNumber account.KDFIterations = dbsetup.ReducedKDFIterationsNumber
} }
_, err = db.db.Exec("UPDATE accounts SET name = ?, identicon = ?, colorHash = ?, colorId = ?, customizationColor = ?, customizationColorClock = ?, keycardPairing = ?, kdfIterations = ? WHERE keyUid = ?", account.Name, account.Identicon, colorHash, account.ColorID, account.CustomizationColor, account.CustomizationColorClock, account.KeycardPairing, account.KDFIterations, account.KeyUID) _, err = db.db.Exec("UPDATE accounts SET name = ?, identicon = ?, colorHash = ?, colorId = ?, customizationColor = ?, customizationColorClock = ?, keycardPairing = ?, kdfIterations = ?, hasAcceptedTerms = ? WHERE keyUid = ?", account.Name, account.Identicon, colorHash, account.ColorID, account.CustomizationColor, account.CustomizationColorClock, account.KeycardPairing, account.KDFIterations, account.HasAcceptedTerms, account.KeyUID)
return err return err
} }
@ -468,7 +484,7 @@ func (db *Database) publishOnIdentityImageSubscriptions(change *IdentityImageSub
select { select {
case s <- change: case s <- change:
default: default:
log.Warn("subscription channel full, dropping message") logutils.ZapLogger().Warn("subscription channel full, dropping message")
} }
} }
} }
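
Taken together, the new column, `GetAccountsCount`, and `UpdateHasAcceptedTerms` let callers set the flag automatically for the first account and persist later acceptance explicitly. A hedged sketch of how a caller might combine them (the helper itself is hypothetical, not part of this diff):

```go
package example

import "github.com/status-im/status-go/multiaccounts"

// persistTermsAcceptance is a hypothetical caller-side helper: the first
// account ever saved gets HasAcceptedTerms = true (matching the field comment
// above), while existing accounts are updated via UpdateHasAcceptedTerms when
// the user explicitly accepts the terms.
func persistTermsAcceptance(db *multiaccounts.Database, acc multiaccounts.Account, accepted bool) error {
	count, err := db.GetAccountsCount()
	if err != nil {
		return err
	}
	if count == 0 {
		acc.HasAcceptedTerms = true
		return db.SaveAccount(acc)
	}
	return db.UpdateHasAcceptedTerms(acc.KeyUID, accepted)
}
```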


@ -4,6 +4,7 @@ import (
"encoding/json" "encoding/json"
"io/ioutil" "io/ioutil"
"os" "os"
"strings"
"testing" "testing"
"github.com/status-im/status-go/common/dbsetup" "github.com/status-im/status-go/common/dbsetup"
@ -39,10 +40,17 @@ func TestAccounts(t *testing.T) {
func TestAccountsUpdate(t *testing.T) { func TestAccountsUpdate(t *testing.T) {
db, stop := setupTestDB(t) db, stop := setupTestDB(t)
defer stop() defer stop()
expected := Account{KeyUID: "string", CustomizationColor: common.CustomizationColorBlue, ColorHash: ColorHash{{4, 3}, {4, 0}, {4, 3}, {4, 0}}, ColorID: 10, KDFIterations: dbsetup.ReducedKDFIterationsNumber} expected := Account{
KeyUID: "string",
CustomizationColor: common.CustomizationColorBlue,
ColorHash: ColorHash{{4, 3}, {4, 0}, {4, 3}, {4, 0}},
ColorID: 10,
KDFIterations: dbsetup.ReducedKDFIterationsNumber,
}
require.NoError(t, db.SaveAccount(expected)) require.NoError(t, db.SaveAccount(expected))
expected.Name = "chars" expected.Name = "chars"
expected.CustomizationColor = common.CustomizationColorMagenta expected.CustomizationColor = common.CustomizationColorMagenta
expected.HasAcceptedTerms = true
require.NoError(t, db.UpdateAccount(expected)) require.NoError(t, db.UpdateAccount(expected))
rst, err := db.GetAccounts() rst, err := db.GetAccounts()
require.NoError(t, err) require.NoError(t, err)
@ -50,6 +58,53 @@ func TestAccountsUpdate(t *testing.T) {
require.Equal(t, expected, rst[0]) require.Equal(t, expected, rst[0])
} }
func TestUpdateHasAcceptedTerms(t *testing.T) {
db, stop := setupTestDB(t)
defer stop()
keyUID := "string"
expected := Account{
KeyUID: keyUID,
KDFIterations: dbsetup.ReducedKDFIterationsNumber,
}
require.NoError(t, db.SaveAccount(expected))
accounts, err := db.GetAccounts()
require.NoError(t, err)
require.Equal(t, []Account{expected}, accounts)
// Update from false -> true
require.NoError(t, db.UpdateHasAcceptedTerms(keyUID, true))
account, err := db.GetAccount(keyUID)
require.NoError(t, err)
expected.HasAcceptedTerms = true
require.Equal(t, &expected, account)
// Update from true -> false
require.NoError(t, db.UpdateHasAcceptedTerms(keyUID, false))
account, err = db.GetAccount(keyUID)
require.NoError(t, err)
expected.HasAcceptedTerms = false
require.Equal(t, &expected, account)
}
func TestDatabase_GetAccountsCount(t *testing.T) {
db, stop := setupTestDB(t)
defer stop()
count, err := db.GetAccountsCount()
require.NoError(t, err)
require.Equal(t, 0, count)
account := Account{
KeyUID: keyUID,
KDFIterations: dbsetup.ReducedKDFIterationsNumber,
}
require.NoError(t, db.SaveAccount(account))
count, err = db.GetAccountsCount()
require.NoError(t, err)
require.Equal(t, 1, count)
}
func TestLoginUpdate(t *testing.T) { func TestLoginUpdate(t *testing.T) {
db, stop := setupTestDB(t) db, stop := setupTestDB(t)
defer stop() defer stop()
@ -148,20 +203,26 @@ func TestDatabase_DeleteIdentityImage(t *testing.T) {
require.Empty(t, oii) require.Empty(t, oii)
} }
func removeAllWhitespace(s string) string {
tmp := strings.ReplaceAll(s, " ", "")
tmp = strings.ReplaceAll(tmp, "\n", "")
tmp = strings.ReplaceAll(tmp, "\t", "")
return tmp
}
func TestDatabase_GetAccountsWithIdentityImages(t *testing.T) { func TestDatabase_GetAccountsWithIdentityImages(t *testing.T) {
db, stop := setupTestDB(t) db, stop := setupTestDB(t)
defer stop() defer stop()
testAccs := []Account{ testAccs := []Account{
{Name: "string", KeyUID: keyUID, Identicon: "data"}, {Name: "string", KeyUID: keyUID, Identicon: "data", HasAcceptedTerms: true},
{Name: "string", KeyUID: keyUID2}, {Name: "string", KeyUID: keyUID2},
{Name: "string", KeyUID: keyUID2 + "2"}, {Name: "string", KeyUID: keyUID2 + "2"},
{Name: "string", KeyUID: keyUID2 + "3"}, {Name: "string", KeyUID: keyUID2 + "3"},
} }
expected := `[{"name":"string","timestamp":100,"identicon":"data","colorHash":null,"colorId":0,"keycard-pairing":"","key-uid":"0xdeadbeef","images":[{"keyUid":"0xdeadbeef","type":"large","uri":"data:image/png;base64,iVBORw0KGgoAAAANSUg=","width":240,"height":300,"fileSize":1024,"resizeTarget":240,"clock":0},{"keyUid":"0xdeadbeef","type":"thumbnail","uri":"data:image/jpeg;base64,/9j/2wCEAFA3PEY8MlA=","width":80,"height":80,"fileSize":256,"resizeTarget":80,"clock":0}],"kdfIterations":3200},{"name":"string","timestamp":10,"identicon":"","colorHash":null,"colorId":0,"keycard-pairing":"","key-uid":"0x1337beef","images":null,"kdfIterations":3200},{"name":"string","timestamp":0,"identicon":"","colorHash":null,"colorId":0,"keycard-pairing":"","key-uid":"0x1337beef2","images":null,"kdfIterations":3200},{"name":"string","timestamp":0,"identicon":"","colorHash":null,"colorId":0,"keycard-pairing":"","key-uid":"0x1337beef3","images":[{"keyUid":"0x1337beef3","type":"large","uri":"data:image/png;base64,iVBORw0KGgoAAAANSUg=","width":240,"height":300,"fileSize":1024,"resizeTarget":240,"clock":0},{"keyUid":"0x1337beef3","type":"thumbnail","uri":"data:image/jpeg;base64,/9j/2wCEAFA3PEY8MlA=","width":80,"height":80,"fileSize":256,"resizeTarget":80,"clock":0}],"kdfIterations":3200}]`
for _, a := range testAccs { for _, a := range testAccs {
require.NoError(t, db.SaveAccount(a)) require.NoError(t, db.SaveAccount(a), a.KeyUID)
} }
seedTestDBWithIdentityImages(t, db, keyUID) seedTestDBWithIdentityImages(t, db, keyUID)
@ -178,14 +239,116 @@ func TestDatabase_GetAccountsWithIdentityImages(t *testing.T) {
accJSON, err := json.Marshal(accs) accJSON, err := json.Marshal(accs)
require.NoError(t, err) require.NoError(t, err)
require.Exactly(t, expected, string(accJSON)) expected := `
[
{
"name": "string",
"timestamp": 100,
"identicon": "data",
"colorHash": null,
"colorId": 0,
"keycard-pairing": "",
"key-uid": "0xdeadbeef",
"images": [
{
"keyUid": "0xdeadbeef",
"type": "large",
"uri": "data:image/png;base64,iVBORw0KGgoAAAANSUg=",
"width": 240,
"height": 300,
"fileSize": 1024,
"resizeTarget": 240,
"clock": 0
},
{
"keyUid": "0xdeadbeef",
"type": "thumbnail",
"uri": "data:image/jpeg;base64,/9j/2wCEAFA3PEY8MlA=",
"width": 80,
"height": 80,
"fileSize": 256,
"resizeTarget": 80,
"clock": 0
}
],
"kdfIterations": 3200,
"hasAcceptedTerms": true
},
{
"name": "string",
"timestamp": 10,
"identicon": "",
"colorHash": null,
"colorId": 0,
"keycard-pairing": "",
"key-uid": "0x1337beef",
"images": null,
"kdfIterations": 3200,
"hasAcceptedTerms": false
},
{
"name": "string",
"timestamp": 0,
"identicon": "",
"colorHash": null,
"colorId": 0,
"keycard-pairing": "",
"key-uid": "0x1337beef2",
"images": null,
"kdfIterations": 3200,
"hasAcceptedTerms": false
},
{
"name": "string",
"timestamp": 0,
"identicon": "",
"colorHash": null,
"colorId": 0,
"keycard-pairing": "",
"key-uid": "0x1337beef3",
"images": [
{
"keyUid": "0x1337beef3",
"type": "large",
"uri": "data:image/png;base64,iVBORw0KGgoAAAANSUg=",
"width": 240,
"height": 300,
"fileSize": 1024,
"resizeTarget": 240,
"clock": 0
},
{
"keyUid": "0x1337beef3",
"type": "thumbnail",
"uri": "data:image/jpeg;base64,/9j/2wCEAFA3PEY8MlA=",
"width": 80,
"height": 80,
"fileSize": 256,
"resizeTarget": 80,
"clock": 0
}
],
"kdfIterations": 3200,
"hasAcceptedTerms": false
}
]
`
require.Exactly(t, removeAllWhitespace(expected), string(accJSON))
} }
func TestDatabase_GetAccount(t *testing.T) { func TestDatabase_GetAccount(t *testing.T) {
db, stop := setupTestDB(t) db, stop := setupTestDB(t)
defer stop() defer stop()
expected := Account{Name: "string", KeyUID: keyUID, ColorHash: ColorHash{{4, 3}, {4, 0}, {4, 3}, {4, 0}}, ColorID: 10, KDFIterations: dbsetup.ReducedKDFIterationsNumber} expected := Account{
Name: "string",
KeyUID: keyUID,
ColorHash: ColorHash{{4, 3}, {4, 0}, {4, 3}, {4, 0}},
ColorID: 10,
KDFIterations: dbsetup.ReducedKDFIterationsNumber,
HasAcceptedTerms: true,
}
require.NoError(t, db.SaveAccount(expected)) require.NoError(t, db.SaveAccount(expected))
account, err := db.GetAccount(expected.KeyUID) account, err := db.GetAccount(expected.KeyUID)


@ -0,0 +1 @@
ALTER TABLE accounts ADD COLUMN hasAcceptedTerms BOOLEAN NOT NULL DEFAULT FALSE;


@ -8,10 +8,9 @@ import (
"sync" "sync"
"time" "time"
"github.com/ethereum/go-ethereum/log"
"github.com/status-im/status-go/common/dbsetup" "github.com/status-im/status-go/common/dbsetup"
"github.com/status-im/status-go/eth-node/types" "github.com/status-im/status-go/eth-node/types"
"github.com/status-im/status-go/logutils"
"github.com/status-im/status-go/multiaccounts/errors" "github.com/status-im/status-go/multiaccounts/errors"
"github.com/status-im/status-go/nodecfg" "github.com/status-im/status-go/nodecfg"
"github.com/status-im/status-go/params" "github.com/status-im/status-go/params"
@ -836,7 +835,7 @@ func (db *Database) postChangesToSubscribers(change *SyncSettingField) {
select { select {
case s <- change: case s <- change:
default: default:
log.Warn("settings changes subscription channel full, dropping message") logutils.ZapLogger().Warn("settings changes subscription channel full, dropping message")
} }
} }
} }


@ -16,6 +16,7 @@ in stdenv.mkDerivation rec {
url = "https://cli.codecov.io/v${version}/${platform}/codecov"; url = "https://cli.codecov.io/v${version}/${platform}/codecov";
hash = lib.getAttr builtins.currentSystem { hash = lib.getAttr builtins.currentSystem {
aarch64-darwin = "sha256-CB1D8/zYF23Jes9sd6rJiadDg7nwwee9xWSYqSByAlU="; aarch64-darwin = "sha256-CB1D8/zYF23Jes9sd6rJiadDg7nwwee9xWSYqSByAlU=";
x86_64-darwin = "sha256-CB1D8/zYF23Jes9sd6rJiadDg7nwwee9xWSYqSByAlU=";
x86_64-linux = "sha256-65AgCcuAD977zikcE1eVP4Dik4L0PHqYzOO1fStNjOw="; x86_64-linux = "sha256-65AgCcuAD977zikcE1eVP4Dik4L0PHqYzOO1fStNjOw=";
aarch64-linux = "sha256-hALtVSXY40uTIaAtwWr7EXh7zclhK63r7a341Tn+q/g="; aarch64-linux = "sha256-hALtVSXY40uTIaAtwWr7EXh7zclhK63r7a341Tn+q/g=";
}; };


@ -16,7 +16,12 @@ let
inherit xcodeWrapper; inherit xcodeWrapper;
withAndroidPkgs = !isMacM1; withAndroidPkgs = !isMacM1;
}; };
in pkgs.mkShell { /* Override the default SDK to enable darwin-x86_64 builds */
appleSdk11Stdenv = pkgs.overrideSDK pkgs.stdenv "11.0";
sdk11mkShell = pkgs.mkShell.override { stdenv = appleSdk11Stdenv; };
mkShell = if stdenv.isDarwin then sdk11mkShell else pkgs.mkShell;
in mkShell {
name = "status-go-shell"; name = "status-go-shell";
buildInputs = with pkgs; [ buildInputs = with pkgs; [


@ -12,10 +12,10 @@ import (
"sync" "sync"
"github.com/syndtr/goleveldb/leveldb" "github.com/syndtr/goleveldb/leveldb"
"go.uber.org/zap"
"github.com/ethereum/go-ethereum/accounts" "github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/event" "github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node" "github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p" "github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
@ -95,7 +95,7 @@ type StatusNode struct {
peerPool *peers.PeerPool peerPool *peers.PeerPool
db *leveldb.DB // used as a cache for PeerPool db *leveldb.DB // used as a cache for PeerPool
log log.Logger logger *zap.Logger
gethAccountManager *account.GethManager gethAccountManager *account.GethManager
accountsManager *accounts.Manager accountsManager *accounts.Manager
@ -141,11 +141,12 @@ type StatusNode struct {
} }
// New makes new instance of StatusNode. // New makes new instance of StatusNode.
func New(transactor *transactions.Transactor) *StatusNode { func New(transactor *transactions.Transactor, logger *zap.Logger) *StatusNode {
logger = logger.Named("StatusNode")
return &StatusNode{ return &StatusNode{
gethAccountManager: account.NewGethManager(), gethAccountManager: account.NewGethManager(logger),
transactor: transactor, transactor: transactor,
log: log.New("package", "status-go/node.StatusNode"), logger: logger,
publicMethods: make(map[string]bool), publicMethods: make(map[string]bool),
} }
} }
@ -204,7 +205,7 @@ type StartOptions struct {
// The server can only handle requests that don't require appdb or IPFS downloader // The server can only handle requests that don't require appdb or IPFS downloader
func (n *StatusNode) StartMediaServerWithoutDB() error { func (n *StatusNode) StartMediaServerWithoutDB() error {
if n.isRunning() { if n.isRunning() {
n.log.Debug("node is already running, no need to StartMediaServerWithoutDB") n.logger.Debug("node is already running, no need to StartMediaServerWithoutDB")
return nil return nil
} }
@ -235,13 +236,13 @@ func (n *StatusNode) StartWithOptions(config *params.NodeConfig, options StartOp
defer n.mu.Unlock() defer n.mu.Unlock()
if n.isRunning() { if n.isRunning() {
n.log.Debug("node is already running") n.logger.Debug("node is already running")
return ErrNodeRunning return ErrNodeRunning
} }
n.accountsManager = options.AccountsManager n.accountsManager = options.AccountsManager
n.log.Debug("starting with options", "ClusterConfig", config.ClusterConfig) n.logger.Debug("starting with options", zap.Stringer("ClusterConfig", &config.ClusterConfig))
db, err := db.Create(config.DataDir, params.StatusDatabase) db, err := db.Create(config.DataDir, params.StatusDatabase)
if err != nil { if err != nil {
@ -259,7 +260,7 @@ func (n *StatusNode) StartWithOptions(config *params.NodeConfig, options StartOp
if err != nil { if err != nil {
if dberr := db.Close(); dberr != nil { if dberr := db.Close(); dberr != nil {
n.log.Error("error while closing leveldb after node crash", "error", dberr) n.logger.Error("error while closing leveldb after node crash", zap.Error(dberr))
} }
n.db = nil n.db = nil
return err return err
@ -364,7 +365,7 @@ func (n *StatusNode) discoverNode() (*enode.Node, error) {
return discNode, nil return discNode, nil
} }
n.log.Info("Using AdvertiseAddr for rendezvous", "addr", n.config.AdvertiseAddr) n.logger.Info("Using AdvertiseAddr for rendezvous", zap.String("addr", n.config.AdvertiseAddr))
r := discNode.Record() r := discNode.Record()
r.Set(enr.IP(net.ParseIP(n.config.AdvertiseAddr))) r.Set(enr.IP(net.ParseIP(n.config.AdvertiseAddr)))
@ -406,11 +407,10 @@ func (n *StatusNode) startDiscovery() error {
} else { } else {
n.discovery = discoveries[0] n.discovery = discoveries[0]
} }
log.Debug( n.logger.Debug("using discovery",
"using discovery", zap.Any("instance", reflect.TypeOf(n.discovery)),
"instance", reflect.TypeOf(n.discovery), zap.Any("registerTopics", n.config.RegisterTopics),
"registerTopics", n.config.RegisterTopics, zap.Any("requireTopics", n.config.RequireTopics),
"requireTopics", n.config.RequireTopics,
) )
n.register = peers.NewRegister(n.discovery, n.config.RegisterTopics...) n.register = peers.NewRegister(n.discovery, n.config.RegisterTopics...)
options := peers.NewDefaultOptions() options := peers.NewDefaultOptions()
@ -449,7 +449,7 @@ func (n *StatusNode) Stop() error {
func (n *StatusNode) stop() error { func (n *StatusNode) stop() error {
if n.isDiscoveryRunning() { if n.isDiscoveryRunning() {
if err := n.stopDiscovery(); err != nil { if err := n.stopDiscovery(); err != nil {
n.log.Error("Error stopping the discovery components", "error", err) n.logger.Error("Error stopping the discovery components", zap.Error(err))
} }
n.register = nil n.register = nil
n.peerPool = nil n.peerPool = nil
@ -478,7 +478,7 @@ func (n *StatusNode) stop() error {
if n.db != nil { if n.db != nil {
if err = n.db.Close(); err != nil { if err = n.db.Close(); err != nil {
n.log.Error("Error closing the leveldb of status node", "error", err) n.logger.Error("Error closing the leveldb of status node", zap.Error(err))
return err return err
} }
n.db = nil n.db = nil
@ -509,7 +509,7 @@ func (n *StatusNode) stop() error {
n.publicMethods = make(map[string]bool) n.publicMethods = make(map[string]bool)
n.pendingTracker = nil n.pendingTracker = nil
n.appGeneralSrvc = nil n.appGeneralSrvc = nil
n.log.Debug("status node stopped") n.logger.Debug("status node stopped")
return nil return nil
} }
@ -538,7 +538,7 @@ func (n *StatusNode) ResetChainData(config *params.NodeConfig) error {
} }
err := os.RemoveAll(chainDataDir) err := os.RemoveAll(chainDataDir)
if err == nil { if err == nil {
n.log.Info("Chain data has been removed", "dir", chainDataDir) n.logger.Info("Chain data has been removed", zap.String("dir", chainDataDir))
} }
return err return err
} }
@ -558,16 +558,16 @@ func (n *StatusNode) isRunning() bool {
// populateStaticPeers connects current node with our publicly available LES/SHH/Swarm cluster // populateStaticPeers connects current node with our publicly available LES/SHH/Swarm cluster
func (n *StatusNode) populateStaticPeers() error { func (n *StatusNode) populateStaticPeers() error {
if !n.config.ClusterConfig.Enabled { if !n.config.ClusterConfig.Enabled {
n.log.Info("Static peers are disabled") n.logger.Info("Static peers are disabled")
return nil return nil
} }
for _, enode := range n.config.ClusterConfig.StaticNodes { for _, enode := range n.config.ClusterConfig.StaticNodes {
if err := n.addPeer(enode); err != nil { if err := n.addPeer(enode); err != nil {
n.log.Error("Static peer addition failed", "error", err) n.logger.Error("Static peer addition failed", zap.Error(err))
return err return err
} }
n.log.Info("Static peer added", "enode", enode) n.logger.Info("Static peer added", zap.String("enode", enode))
} }
return nil return nil
@ -575,16 +575,16 @@ func (n *StatusNode) populateStaticPeers() error {
func (n *StatusNode) removeStaticPeers() error { func (n *StatusNode) removeStaticPeers() error {
if !n.config.ClusterConfig.Enabled { if !n.config.ClusterConfig.Enabled {
n.log.Info("Static peers are disabled") n.logger.Info("Static peers are disabled")
return nil return nil
} }
for _, enode := range n.config.ClusterConfig.StaticNodes { for _, enode := range n.config.ClusterConfig.StaticNodes {
if err := n.removePeer(enode); err != nil { if err := n.removePeer(enode); err != nil {
n.log.Error("Static peer deletion failed", "error", err) n.logger.Error("Static peer deletion failed", zap.Error(err))
return err return err
} }
n.log.Info("Static peer deleted", "enode", enode) n.logger.Info("Static peer deleted", zap.String("enode", enode))
} }
return nil return nil
} }
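
With the logger now injected instead of created internally, callers construct the node with whatever zap logger they already have; the constructor attaches the `StatusNode` name itself. A minimal sketch, assuming the updated `New(transactor, logger)` signature shown above (the transactor is nil here, as in the package's own tests):

```go
package example

import (
	"go.uber.org/zap"

	statusnode "github.com/status-im/status-go/node"
)

// newStatusNode builds a StatusNode with an injected development logger.
// The constructor adds the "StatusNode" name internally, so callers pass
// their root logger as-is.
func newStatusNode() (*statusnode.StatusNode, error) {
	logger, err := zap.NewDevelopment()
	if err != nil {
		return nil, err
	}
	return statusnode.New(nil, logger), nil
}
```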


@ -7,9 +7,9 @@ import (
"path/filepath" "path/filepath"
"github.com/syndtr/goleveldb/leveldb" "github.com/syndtr/goleveldb/leveldb"
"go.uber.org/zap"
"github.com/ethereum/go-ethereum/accounts" "github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node" "github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p" "github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/discv5" "github.com/ethereum/go-ethereum/p2p/discv5"
@ -17,6 +17,7 @@ import (
"github.com/ethereum/go-ethereum/p2p/nat" "github.com/ethereum/go-ethereum/p2p/nat"
"github.com/status-im/status-go/eth-node/crypto" "github.com/status-im/status-go/eth-node/crypto"
"github.com/status-im/status-go/logutils"
"github.com/status-im/status-go/params" "github.com/status-im/status-go/params"
) )
@ -33,7 +34,7 @@ var (
) )
// All general log messages in this package should be routed through this logger. // All general log messages in this package should be routed through this logger.
var logger = log.New("package", "status-go/node") var logger = logutils.ZapLogger().Named("node")
// MakeNode creates a geth node entity // MakeNode creates a geth node entity
func MakeNode(config *params.NodeConfig, accs *accounts.Manager, db *leveldb.DB) (*node.Node, error) { func MakeNode(config *params.NodeConfig, accs *accounts.Manager, db *leveldb.DB) (*node.Node, error) {
@ -146,7 +147,7 @@ func parseNodes(enodes []string) []*enode.Node {
if err == nil { if err == nil {
nodes = append(nodes, parsedPeer) nodes = append(nodes, parsedPeer)
} else { } else {
logger.Error("Failed to parse enode", "enode", item, "err", err) logger.Error("Failed to parse enode", zap.String("enode", item), zap.Error(err))
} }
} }
@ -162,7 +163,7 @@ func parseNodesV5(enodes []string) []*discv5.Node {
if err == nil { if err == nil {
nodes = append(nodes, parsedPeer) nodes = append(nodes, parsedPeer)
} else { } else {
logger.Error("Failed to parse enode", "enode", enode, "err", err) logger.Error("Failed to parse enode", zap.String("enode", enode), zap.Error(err))
} }
} }
return nodes return nodes
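
The package now derives a named child from the process-wide zap logger instead of constructing a geth logger keyed by `"package"`. A minimal sketch of the same convention for any package (the package name below is a placeholder, not one used by status-go):

```go
package example

import (
	"go.uber.org/zap"

	"github.com/status-im/status-go/logutils"
)

// One named child per package, derived once from the shared root logger.
// "mypackage" is a placeholder name.
var logger = logutils.ZapLogger().Named("mypackage")

func reportParseFailure(raw string, err error) {
	logger.Error("Failed to parse enode", zap.String("enode", raw), zap.Error(err))
}
```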


@ -8,12 +8,15 @@ import (
"testing" "testing"
"time" "time"
"go.uber.org/zap"
gethnode "github.com/ethereum/go-ethereum/node" gethnode "github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p" "github.com/ethereum/go-ethereum/p2p"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"github.com/status-im/status-go/params" "github.com/status-im/status-go/params"
"github.com/status-im/status-go/protocol/tt"
"github.com/status-im/status-go/t/helpers" "github.com/status-im/status-go/t/helpers"
"github.com/status-im/status-go/t/utils" "github.com/status-im/status-go/t/utils"
) )
@ -21,7 +24,7 @@ import (
func TestStatusNodeStart(t *testing.T) { func TestStatusNodeStart(t *testing.T) {
config, err := utils.MakeTestNodeConfigWithDataDir("", "", params.StatusChainNetworkID) config, err := utils.MakeTestNodeConfigWithDataDir("", "", params.StatusChainNetworkID)
require.NoError(t, err) require.NoError(t, err)
n := New(nil) n := New(nil, tt.MustCreateTestLogger())
// checks before node is started // checks before node is started
require.Nil(t, n.GethNode()) require.Nil(t, n.GethNode())
@ -33,7 +36,7 @@ func TestStatusNodeStart(t *testing.T) {
defer func() { defer func() {
err := stop() err := stop()
if err != nil { if err != nil {
n.log.Error("stopping db", err) n.logger.Error("stopping db", zap.Error(err))
} }
}() }()
require.NoError(t, err) require.NoError(t, err)
@ -83,13 +86,13 @@ func TestStatusNodeWithDataDir(t *testing.T) {
defer func() { defer func() {
err := stop1() err := stop1()
if err != nil { if err != nil {
n.log.Error("stopping db", err) n.logger.Error("stopping db", zap.Error(err))
} }
}() }()
defer func() { defer func() {
err := stop2() err := stop2()
if err != nil { if err != nil {
n.log.Error("stopping multiaccount db", err) n.logger.Error("stopping multiaccount db", zap.Error(err))
} }
}() }()
require.NoError(t, err) require.NoError(t, err)
@ -118,13 +121,13 @@ func TestStatusNodeAddPeer(t *testing.T) {
defer func() { defer func() {
err := stop1() err := stop1()
if err != nil { if err != nil {
n.log.Error("stopping db", err) n.logger.Error("stopping db", zap.Error(err))
} }
}() }()
defer func() { defer func() {
err := stop2() err := stop2()
if err != nil { if err != nil {
n.log.Error("stopping multiaccount db", err) n.logger.Error("stopping multiaccount db", zap.Error(err))
} }
}() }()
require.NoError(t, err) require.NoError(t, err)
@ -157,13 +160,13 @@ func TestStatusNodeDiscoverNode(t *testing.T) {
defer func() { defer func() {
err := stop1() err := stop1()
if err != nil { if err != nil {
n.log.Error("stopping db", err) n.logger.Error("stopping db", zap.Error(err))
} }
}() }()
defer func() { defer func() {
err := stop2() err := stop2()
if err != nil { if err != nil {
n.log.Error("stopping multiaccount db", err) n.logger.Error("stopping multiaccount db", zap.Error(err))
} }
}() }()
require.NoError(t, err) require.NoError(t, err)
@ -183,13 +186,13 @@ func TestStatusNodeDiscoverNode(t *testing.T) {
defer func() { defer func() {
err := stop11() err := stop11()
if err != nil { if err != nil {
n1.log.Error("stopping db", err) n1.logger.Error("stopping db", zap.Error(err))
} }
}() }()
defer func() { defer func() {
err := stop12() err := stop12()
if err != nil { if err != nil {
n1.log.Error("stopping multiaccount db", err) n1.logger.Error("stopping multiaccount db", zap.Error(err))
} }
}() }()
require.NoError(t, err) require.NoError(t, err)


@ -10,10 +10,12 @@ import (
"testing" "testing"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"go.uber.org/zap"
"github.com/status-im/status-go/appdatabase" "github.com/status-im/status-go/appdatabase"
"github.com/status-im/status-go/multiaccounts" "github.com/status-im/status-go/multiaccounts"
"github.com/status-im/status-go/params" "github.com/status-im/status-go/params"
"github.com/status-im/status-go/protocol/tt"
"github.com/status-im/status-go/t/helpers" "github.com/status-im/status-go/t/helpers"
"github.com/status-im/status-go/walletdatabase" "github.com/status-im/status-go/walletdatabase"
) )
@ -66,13 +68,13 @@ func setupTestMultiDB() (*multiaccounts.Database, func() error, error) {
} }
func createAndStartStatusNode(config *params.NodeConfig) (*StatusNode, error) { func createAndStartStatusNode(config *params.NodeConfig) (*StatusNode, error) {
statusNode := New(nil) statusNode := New(nil, tt.MustCreateTestLogger())
appDB, walletDB, stop, err := setupTestDBs() appDB, walletDB, stop, err := setupTestDBs()
defer func() { defer func() {
err := stop() err := stop()
if err != nil { if err != nil {
statusNode.log.Error("stopping db", err) statusNode.logger.Error("stopping db", zap.Error(err))
} }
}() }()
if err != nil { if err != nil {
@ -85,7 +87,7 @@ func createAndStartStatusNode(config *params.NodeConfig) (*StatusNode, error) {
defer func() { defer func() {
err := stop2() err := stop2()
if err != nil { if err != nil {
statusNode.log.Error("stopping multiaccount db", err) statusNode.logger.Error("stopping multiaccount db", zap.Error(err))
} }
}() }()
if err != nil { if err != nil {
@ -106,7 +108,7 @@ func createStatusNode() (*StatusNode, func() error, func() error, error) {
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, nil, err
} }
statusNode := New(nil) statusNode := New(nil, tt.MustCreateTestLogger())
statusNode.SetAppDB(appDB) statusNode.SetAppDB(appDB)
statusNode.SetWalletDB(walletDB) statusNode.SetWalletDB(walletDB)


@ -10,6 +10,8 @@ import (
"reflect" "reflect"
"time" "time"
"go.uber.org/zap"
"github.com/status-im/status-go/protocol/common/shard" "github.com/status-im/status-go/protocol/common/shard"
"github.com/status-im/status-go/server" "github.com/status-im/status-go/server"
"github.com/status-im/status-go/signal" "github.com/status-im/status-go/signal"
@ -657,7 +659,7 @@ func (b *StatusNode) StopLocalNotifications() error {
if b.localNotificationsSrvc.IsStarted() { if b.localNotificationsSrvc.IsStarted() {
err := b.localNotificationsSrvc.Stop() err := b.localNotificationsSrvc.Stop()
if err != nil { if err != nil {
b.log.Error("LocalNotifications service stop failed on StopLocalNotifications", "error", err) b.logger.Error("LocalNotifications service stop failed on StopLocalNotifications", zap.Error(err))
return nil return nil
} }
} }
@ -678,7 +680,7 @@ func (b *StatusNode) StartLocalNotifications() error {
err := b.localNotificationsSrvc.Start() err := b.localNotificationsSrvc.Start()
if err != nil { if err != nil {
b.log.Error("LocalNotifications service start failed on StartLocalNotifications", "error", err) b.logger.Error("LocalNotifications service start failed on StartLocalNotifications", zap.Error(err))
return nil return nil
} }
} }
@ -686,7 +688,7 @@ func (b *StatusNode) StartLocalNotifications() error {
err := b.localNotificationsSrvc.SubscribeWallet(&b.walletFeed) err := b.localNotificationsSrvc.SubscribeWallet(&b.walletFeed)
if err != nil { if err != nil {
b.log.Error("LocalNotifications service could not subscribe to wallet on StartLocalNotifications", "error", err) b.logger.Error("LocalNotifications service could not subscribe to wallet on StartLocalNotifications", zap.Error(err))
return nil return nil
} }


@ -11,15 +11,16 @@ import (
"strings" "strings"
"time" "time"
"go.uber.org/zap"
validator "gopkg.in/go-playground/validator.v9" validator "gopkg.in/go-playground/validator.v9"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/discv5" "github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
"github.com/status-im/status-go/eth-node/crypto" "github.com/status-im/status-go/eth-node/crypto"
"github.com/status-im/status-go/eth-node/types" "github.com/status-im/status-go/eth-node/types"
"github.com/status-im/status-go/logutils"
"github.com/status-im/status-go/static" "github.com/status-im/status-go/static"
wakucommon "github.com/status-im/status-go/waku/common" wakucommon "github.com/status-im/status-go/waku/common"
wakuv2common "github.com/status-im/status-go/wakuv2/common" wakuv2common "github.com/status-im/status-go/wakuv2/common"
@ -409,8 +410,6 @@ type NodeConfig struct {
// handshake phase, counted separately for inbound and outbound connections. // handshake phase, counted separately for inbound and outbound connections.
MaxPendingPeers int MaxPendingPeers int
log log.Logger
// LogEnabled enables the logger // LogEnabled enables the logger
LogEnabled bool `json:"LogEnabled"` LogEnabled bool `json:"LogEnabled"`
@ -807,7 +806,7 @@ func (c *NodeConfig) setDefaultPushNotificationsServers() error {
// If empty load defaults from the fleet // If empty load defaults from the fleet
if len(c.ClusterConfig.PushNotificationsServers) == 0 { if len(c.ClusterConfig.PushNotificationsServers) == 0 {
log.Debug("empty push notification servers, setting", "fleet", c.ClusterConfig.Fleet) logutils.ZapLogger().Debug("empty push notification servers, setting", zap.String("fleet", c.ClusterConfig.Fleet))
defaultConfig := &NodeConfig{} defaultConfig := &NodeConfig{}
err := loadConfigFromAsset(fmt.Sprintf("../config/cli/fleet-%s.json", c.ClusterConfig.Fleet), defaultConfig) err := loadConfigFromAsset(fmt.Sprintf("../config/cli/fleet-%s.json", c.ClusterConfig.Fleet), defaultConfig)
if err != nil { if err != nil {
@ -818,7 +817,7 @@ func (c *NodeConfig) setDefaultPushNotificationsServers() error {
// If empty set the default servers // If empty set the default servers
if len(c.ShhextConfig.DefaultPushNotificationsServers) == 0 { if len(c.ShhextConfig.DefaultPushNotificationsServers) == 0 {
log.Debug("setting default push notification servers", "cluster servers", c.ClusterConfig.PushNotificationsServers) logutils.ZapLogger().Debug("setting default push notification servers", zap.Strings("cluster servers", c.ClusterConfig.PushNotificationsServers))
for _, pk := range c.ClusterConfig.PushNotificationsServers { for _, pk := range c.ClusterConfig.PushNotificationsServers {
keyBytes, err := hex.DecodeString("04" + pk) keyBytes, err := hex.DecodeString("04" + pk)
if err != nil { if err != nil {
@ -929,7 +928,6 @@ func NewNodeConfig(dataDir string, networkID uint64) (*NodeConfig, error) {
MaxPeers: 25, MaxPeers: 25,
MaxPendingPeers: 0, MaxPendingPeers: 0,
IPCFile: "geth.ipc", IPCFile: "geth.ipc",
log: log.New("package", "status-go/params.NodeConfig"),
LogFile: "", LogFile: "",
LogLevel: "ERROR", LogLevel: "ERROR",
NoDiscovery: true, NoDiscovery: true,
@ -1159,7 +1157,6 @@ func (c *NodeConfig) Save() error {
return err return err
} }
c.log.Info("config file saved", "path", configFilePath)
return nil return nil
} }


@ -3,12 +3,13 @@ package peers
import ( import (
"github.com/syndtr/goleveldb/leveldb" "github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/util" "github.com/syndtr/goleveldb/leveldb/util"
"go.uber.org/zap"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/discv5" "github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
"github.com/status-im/status-go/db" "github.com/status-im/status-go/db"
"github.com/status-im/status-go/logutils"
) )
// NewCache returns instance of PeersDatabase // NewCache returns instance of PeersDatabase
@ -55,7 +56,7 @@ func (d *Cache) GetPeersRange(topic discv5.Topic, limit int) (nodes []*discv5.No
node := discv5.Node{} node := discv5.Node{}
value := iterator.Value() value := iterator.Value()
if err := node.UnmarshalText(value); err != nil { if err := node.UnmarshalText(value); err != nil {
log.Error("can't unmarshal node", "value", value, "error", err) logutils.ZapLogger().Error("can't unmarshal node", zap.Binary("value", value), zap.Error(err))
continue continue
} }
nodes = append(nodes, &node) nodes = append(nodes, &node)


@ -6,14 +6,16 @@ import (
"sync" "sync"
"time" "time"
"go.uber.org/zap"
"github.com/ethereum/go-ethereum/event" "github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p" "github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/discv5" "github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
"github.com/status-im/status-go/common" "github.com/status-im/status-go/common"
"github.com/status-im/status-go/discovery" "github.com/status-im/status-go/discovery"
"github.com/status-im/status-go/logutils"
"github.com/status-im/status-go/params" "github.com/status-im/status-go/params"
"github.com/status-im/status-go/peers/verifier" "github.com/status-im/status-go/peers/verifier"
"github.com/status-im/status-go/signal" "github.com/status-im/status-go/signal"
@ -205,7 +207,7 @@ func (p *PeerPool) stopDiscovery(server *p2p.Server) {
} }
if err := p.discovery.Stop(); err != nil { if err := p.discovery.Stop(); err != nil {
log.Error("discovery errored when stopping", "err", err) logutils.ZapLogger().Error("discovery errored when stopping", zap.Error(err))
} }
for _, t := range p.topics { for _, t := range p.topics {
t.StopSearch(server) t.StopSearch(server)
@ -224,7 +226,7 @@ func (p *PeerPool) restartDiscovery(server *p2p.Server) error {
if err := p.startDiscovery(); err != nil { if err := p.startDiscovery(); err != nil {
return err return err
} }
log.Debug("restarted discovery from peer pool") logutils.ZapLogger().Debug("restarted discovery from peer pool")
} }
for _, t := range p.topics { for _, t := range p.topics {
if !t.BelowMin() || t.SearchRunning() { if !t.BelowMin() || t.SearchRunning() {
@ -232,7 +234,7 @@ func (p *PeerPool) restartDiscovery(server *p2p.Server) error {
} }
err := t.StartSearch(server) err := t.StartSearch(server)
if err != nil { if err != nil {
log.Error("search failed to start", "error", err) logutils.ZapLogger().Error("search failed to start", zap.Error(err))
} }
} }
return nil return nil
@ -283,15 +285,15 @@ func (p *PeerPool) handleServerPeers(server *p2p.Server, events <-chan *p2p.Peer
select { select {
case <-p.quit: case <-p.quit:
log.Debug("stopping DiscV5 because of quit") logutils.ZapLogger().Debug("stopping DiscV5 because of quit")
p.stopDiscovery(server) p.stopDiscovery(server)
return return
case <-timeout: case <-timeout:
log.Info("DiscV5 timed out") logutils.ZapLogger().Info("DiscV5 timed out")
p.stopDiscovery(server) p.stopDiscovery(server)
case <-retryDiscv5: case <-retryDiscv5:
if err := p.restartDiscovery(server); err != nil { if err := p.restartDiscovery(server); err != nil {
log.Error("starting discv5 failed", "error", err, "retry", discoveryRestartTimeout) logutils.ZapLogger().Error("starting discv5 failed", zap.Duration("retry", discoveryRestartTimeout), zap.Error(err))
queueRetry(discoveryRestartTimeout) queueRetry(discoveryRestartTimeout)
} }
case <-stopDiscv5: case <-stopDiscv5:
@ -320,12 +322,12 @@ func (p *PeerPool) handlePeerEventType(server *p2p.Server, event *p2p.PeerEvent,
var shouldStop bool var shouldStop bool
switch event.Type { switch event.Type {
case p2p.PeerEventTypeDrop: case p2p.PeerEventTypeDrop:
log.Debug("confirm peer dropped", "ID", event.Peer) logutils.ZapLogger().Debug("confirm peer dropped", zap.Stringer("ID", event.Peer))
if p.handleDroppedPeer(server, event.Peer) { if p.handleDroppedPeer(server, event.Peer) {
shouldRetry = true shouldRetry = true
} }
case p2p.PeerEventTypeAdd: // skip other events case p2p.PeerEventTypeAdd: // skip other events
log.Debug("confirm peer added", "ID", event.Peer) logutils.ZapLogger().Debug("confirm peer added", zap.Stringer("ID", event.Peer))
p.handleAddedPeer(server, event.Peer) p.handleAddedPeer(server, event.Peer)
shouldStop = true shouldStop = true
default: default:
@ -366,7 +368,7 @@ func (p *PeerPool) handleStopTopics(server *p2p.Server) {
} }
} }
if p.allTopicsStopped() { if p.allTopicsStopped() {
log.Debug("closing discv5 connection because all topics reached max limit") logutils.ZapLogger().Debug("closing discv5 connection because all topics reached max limit")
p.stopDiscovery(server) p.stopDiscovery(server)
} }
} }
@ -393,10 +395,10 @@ func (p *PeerPool) handleDroppedPeer(server *p2p.Server, nodeID enode.ID) (any b
if confirmed { if confirmed {
newPeer := t.AddPeerFromTable(server) newPeer := t.AddPeerFromTable(server)
if newPeer != nil { if newPeer != nil {
log.Debug("added peer from local table", "ID", newPeer.ID) logutils.ZapLogger().Debug("added peer from local table", zap.Stringer("ID", newPeer.ID))
} }
} }
log.Debug("search", "topic", t.Topic(), "below min", t.BelowMin()) logutils.ZapLogger().Debug("search", zap.String("topic", string(t.Topic())), zap.Bool("below min", t.BelowMin()))
if t.BelowMin() && !t.SearchRunning() { if t.BelowMin() && !t.SearchRunning() {
any = true any = true
} }
@ -415,7 +417,7 @@ func (p *PeerPool) Stop() {
case <-p.quit: case <-p.quit:
return return
default: default:
log.Debug("started closing peer pool") logutils.ZapLogger().Debug("started closing peer pool")
close(p.quit) close(p.quit)
} }
p.serverSubscription.Unsubscribe() p.serverSubscription.Unsubscribe()


@ -3,11 +3,13 @@ package peers
import ( import (
"sync" "sync"
"github.com/ethereum/go-ethereum/log" "go.uber.org/zap"
"github.com/ethereum/go-ethereum/p2p/discv5" "github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/status-im/status-go/common" "github.com/status-im/status-go/common"
"github.com/status-im/status-go/discovery" "github.com/status-im/status-go/discovery"
"github.com/status-im/status-go/logutils"
) )
// Register manages register topic queries // Register manages register topic queries
@ -34,9 +36,9 @@ func (r *Register) Start() error {
r.wg.Add(1) r.wg.Add(1)
go func(t discv5.Topic) { go func(t discv5.Topic) {
defer common.LogOnPanic() defer common.LogOnPanic()
log.Debug("v5 register topic", "topic", t) logutils.ZapLogger().Debug("v5 register topic", zap.String("topic", string(t)))
if err := r.discovery.Register(string(t), r.quit); err != nil { if err := r.discovery.Register(string(t), r.quit); err != nil {
log.Error("error registering topic", "topic", t, "error", err) logutils.ZapLogger().Error("error registering topic", zap.String("topic", string(t)), zap.Error(err))
} }
r.wg.Done() r.wg.Done()
}(topic) }(topic)
@ -55,6 +57,6 @@ func (r *Register) Stop() {
default: default:
close(r.quit) close(r.quit)
} }
log.Debug("waiting for register queries to exit") logutils.ZapLogger().Debug("waiting for register queries to exit")
r.wg.Wait() r.wg.Wait()
} }
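
The `defer common.LogOnPanic()` at the top of the goroutine above is the convention used throughout these changes: every goroutine body guards itself so a panic is recovered and logged instead of silently taking the process down. A minimal generic sketch of the same pattern (the worker function is illustrative):

```go
package example

import (
	"github.com/status-im/status-go/common"
)

// startWorker shows the goroutine convention used above: the first statement
// of every goroutine body defers common.LogOnPanic so panics are recovered
// and logged.
func startWorker(work func()) {
	go func() {
		defer common.LogOnPanic()
		work()
	}()
}
```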


@ -6,13 +6,15 @@ import (
"sync/atomic" "sync/atomic"
"time" "time"
"github.com/ethereum/go-ethereum/log" "go.uber.org/zap"
"github.com/ethereum/go-ethereum/p2p" "github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/discv5" "github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
"github.com/status-im/status-go/common" "github.com/status-im/status-go/common"
"github.com/status-im/status-go/discovery" "github.com/status-im/status-go/discovery"
"github.com/status-im/status-go/logutils"
"github.com/status-im/status-go/params" "github.com/status-im/status-go/params"
) )
@ -315,7 +317,7 @@ func (t *TopicPool) ConfirmAdded(server *p2p.Server, nodeID enode.ID) {
peerInfoItem, ok := t.pendingPeers[nodeID] peerInfoItem, ok := t.pendingPeers[nodeID]
inbound := !ok || !peerInfoItem.added inbound := !ok || !peerInfoItem.added
log.Debug("peer added event", "peer", nodeID.String(), "inbound", inbound) logutils.ZapLogger().Debug("peer added event", zap.Stringer("peer", nodeID), zap.Bool("inbound", inbound))
if inbound { if inbound {
return return
@ -326,13 +328,13 @@ func (t *TopicPool) ConfirmAdded(server *p2p.Server, nodeID enode.ID) {
// established connection means that the node // established connection means that the node
// is a viable candidate for a connection and can be cached // is a viable candidate for a connection and can be cached
if err := t.cache.AddPeer(peer.node, t.topic); err != nil { if err := t.cache.AddPeer(peer.node, t.topic); err != nil {
log.Error("failed to persist a peer", "error", err) logutils.ZapLogger().Error("failed to persist a peer", zap.Error(err))
} }
t.movePeerFromPoolToConnected(nodeID) t.movePeerFromPoolToConnected(nodeID)
// if the upper limit is already reached, drop this peer // if the upper limit is already reached, drop this peer
if len(t.connectedPeers) > t.limits.Max { if len(t.connectedPeers) > t.limits.Max {
log.Debug("max limit is reached drop the peer", "ID", nodeID, "topic", t.topic) logutils.ZapLogger().Debug("max limit is reached drop the peer", zap.Stringer("ID", nodeID), zap.String("topic", string(t.topic)))
peer.dismissed = true peer.dismissed = true
t.removeServerPeer(server, peer) t.removeServerPeer(server, peer)
return return
@ -364,7 +366,7 @@ func (t *TopicPool) ConfirmDropped(server *p2p.Server, nodeID enode.ID) bool {
return false return false
} }
log.Debug("disconnect", "ID", nodeID, "dismissed", peer.dismissed) logutils.ZapLogger().Debug("disconnect", zap.Stringer("ID", nodeID), zap.Bool("dismissed", peer.dismissed))
delete(t.connectedPeers, nodeID) delete(t.connectedPeers, nodeID)
// Peer was removed by us because exceeded the limit. // Peer was removed by us because exceeded the limit.
@ -382,7 +384,7 @@ func (t *TopicPool) ConfirmDropped(server *p2p.Server, nodeID enode.ID) bool {
t.removeServerPeer(server, peer) t.removeServerPeer(server, peer)
if err := t.cache.RemovePeer(nodeID, t.topic); err != nil { if err := t.cache.RemovePeer(nodeID, t.topic); err != nil {
log.Error("failed to remove peer from cache", "error", err) logutils.ZapLogger().Error("failed to remove peer from cache", zap.Error(err))
} }
// As we removed a peer, update a sync strategy if needed. // As we removed a peer, update a sync strategy if needed.
@ -437,7 +439,7 @@ func (t *TopicPool) StartSearch(server *p2p.Server) error {
lookup := make(chan bool, 10) // sufficiently buffered channel, just prevents blocking because of lookup lookup := make(chan bool, 10) // sufficiently buffered channel, just prevents blocking because of lookup
for _, peer := range t.cache.GetPeersRange(t.topic, 5) { for _, peer := range t.cache.GetPeersRange(t.topic, 5) {
log.Debug("adding a peer from cache", "peer", peer) logutils.ZapLogger().Debug("adding a peer from cache", zap.Stringer("peer", peer))
found <- peer found <- peer
} }
@ -445,7 +447,7 @@ func (t *TopicPool) StartSearch(server *p2p.Server) error {
go func() { go func() {
defer common.LogOnPanic() defer common.LogOnPanic()
if err := t.discovery.Discover(string(t.topic), t.period, found, lookup); err != nil { if err := t.discovery.Discover(string(t.topic), t.period, found, lookup); err != nil {
log.Error("error searching foro", "topic", t.topic, "err", err) logutils.ZapLogger().Error("error searching foro", zap.String("topic", string(t.topic)), zap.Error(err))
} }
t.discWG.Done() t.discWG.Done()
}() }()
@ -471,7 +473,7 @@ func (t *TopicPool) handleFoundPeers(server *p2p.Server, found <-chan *discv5.No
continue continue
} }
if err := t.processFoundNode(server, node); err != nil { if err := t.processFoundNode(server, node); err != nil {
log.Error("failed to process found node", "node", node, "error", err) logutils.ZapLogger().Error("failed to process found node", zap.Stringer("node", node), zap.Error(err))
} }
} }
} }
@ -493,7 +495,7 @@ func (t *TopicPool) processFoundNode(server *p2p.Server, node *discv5.Node) erro
nodeID := enode.PubkeyToIDV4(pk) nodeID := enode.PubkeyToIDV4(pk)
log.Debug("peer found", "ID", nodeID, "topic", t.topic) logutils.ZapLogger().Debug("peer found", zap.Stringer("ID", nodeID), zap.String("topic", string(t.topic)))
// peer is already connected so update only discoveredTime // peer is already connected so update only discoveredTime
if peer, ok := t.connectedPeers[nodeID]; ok { if peer, ok := t.connectedPeers[nodeID]; ok {
@ -510,9 +512,9 @@ func (t *TopicPool) processFoundNode(server *p2p.Server, node *discv5.Node) erro
publicKey: pk, publicKey: pk,
}) })
} }
log.Debug( logutils.ZapLogger().Debug(
"adding peer to a server", "peer", node.ID.String(), "adding peer to a server", zap.Stringer("peer", node.ID),
"connected", len(t.connectedPeers), "max", t.maxCachedPeers) zap.Int("connected", len(t.connectedPeers)), zap.Int("max", t.maxCachedPeers))
// This can happen when the monotonic clock is not precise enough and // This can happen when the monotonic clock is not precise enough and
// multiple peers gets added at the same clock time, resulting in all // multiple peers gets added at the same clock time, resulting in all
@ -525,7 +527,7 @@ func (t *TopicPool) processFoundNode(server *p2p.Server, node *discv5.Node) erro
// This has been reported on windows builds // This has been reported on windows builds
// only https://github.com/status-im/nim-status-client/issues/522 // only https://github.com/status-im/nim-status-client/issues/522
if t.pendingPeers[nodeID] == nil { if t.pendingPeers[nodeID] == nil {
log.Debug("peer added has just been removed", "peer", nodeID) logutils.ZapLogger().Debug("peer added has just been removed", zap.Stringer("peer", nodeID))
return nil return nil
} }
@ -570,7 +572,7 @@ func (t *TopicPool) StopSearch(server *p2p.Server) {
return return
default: default:
} }
log.Debug("stoping search", "topic", t.topic) logutils.ZapLogger().Debug("stoping search", zap.String("topic", string(t.topic)))
close(t.quit) close(t.quit)
t.mu.Lock() t.mu.Lock()
if t.fastModeTimeoutCancel != nil { if t.fastModeTimeoutCancel != nil {
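The hunks above are a mechanical migration from the go-ethereum key/value logger (`log.Error("msg", "key", val)`) to the shared zap logger returned by `logutils.ZapLogger()`, with every key/value pair converted into a typed zap field. A minimal sketch of the pattern, using a locally constructed development logger instead of status-go's `logutils` package (an assumption made purely to keep the example self-contained):

```
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	// Stand-in for logutils.ZapLogger(): a development logger, used only
	// for illustration.
	logger, _ := zap.NewDevelopment()
	defer logger.Sync()

	err := errors.New("connection refused")

	// Before (go-ethereum style): log.Error("error searching", "topic", topic, "err", err)
	// After: every key/value pair becomes a typed zap field.
	logger.Error("error searching",
		zap.String("topic", "whisper"),
		zap.Error(err))

	// Counters and sizes get zap.Int; values implementing fmt.Stringer get zap.Stringer.
	logger.Debug("adding peer to a server",
		zap.Int("connected", 3),
		zap.Int("max", 5))
}
```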


@@ -6,8 +6,10 @@ import (
hpprof "net/http/pprof"
"time"
-"github.com/ethereum/go-ethereum/log"
+"go.uber.org/zap"
"github.com/status-im/status-go/common"
+"github.com/status-im/status-go/logutils"
)
// Profiler runs and controls a HTTP pprof interface.
@@ -38,7 +40,7 @@ func NewProfiler(port int) *Profiler {
func (p *Profiler) Go() {
go func() {
defer common.LogOnPanic()
-log.Info("debug server stopped", "err", p.server.ListenAndServe())
+logutils.ZapLogger().Info("debug server stopped", zap.Error(p.server.ListenAndServe()))
}()
-log.Info("debug server started")
+logutils.ZapLogger().Info("debug server started")
}
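The profiler keeps its existing shape: a `net/http/pprof` server started in a goroutine, with the return value of `ListenAndServe` logged when it exits. A rough, self-contained sketch of that shape; the port and logger wiring here are illustrative and not status-go's actual `Profiler` type:

```
package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on http.DefaultServeMux
	"time"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewDevelopment()
	defer logger.Sync()

	server := &http.Server{
		Addr:              ":52525", // illustrative port
		ReadHeaderTimeout: 5 * time.Second,
	}

	go func() {
		// ListenAndServe blocks until the server stops; its return value is
		// logged on exit, mirroring the "debug server stopped" line above.
		logger.Info("debug server stopped", zap.Error(server.ListenAndServe()))
	}()
	logger.Info("debug server started")

	time.Sleep(200 * time.Millisecond) // keep the toy example alive briefly
}
```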


@@ -5,8 +5,10 @@ import (
"net/http"
"time"
-"github.com/ethereum/go-ethereum/log"
+"go.uber.org/zap"
"github.com/status-im/status-go/images"
+"github.com/status-im/status-go/logutils"
)
func DownloadAvatarAsset(url string) ([]byte, error) {
@@ -26,7 +28,7 @@ func DownloadAsset(url string) ([]byte, string, error) {
}
defer func() {
if err := res.Body.Close(); err != nil {
-log.Error("failed to close message asset http request body", "err", err)
+logutils.ZapLogger().Error("failed to close message asset http request body", zap.Error(err))
}
}()
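Here only the logger call changes inside the usual defer-close-and-log idiom for HTTP response bodies. A small sketch of that idiom, with a hypothetical URL and logger wiring:

```
package main

import (
	"io"
	"net/http"

	"go.uber.org/zap"
)

// downloadAsset fetches a URL and logs, rather than returns, any error from
// closing the response body, mirroring the deferred close above.
func downloadAsset(logger *zap.Logger, url string) ([]byte, error) {
	res, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer func() {
		if cerr := res.Body.Close(); cerr != nil {
			logger.Error("failed to close message asset http request body", zap.Error(cerr))
		}
	}()
	return io.ReadAll(res.Body)
}

func main() {
	logger, _ := zap.NewDevelopment()
	defer logger.Sync()

	if _, err := downloadAsset(logger, "https://example.org/avatar.png"); err != nil {
		logger.Error("download failed", zap.Error(err))
	}
}
```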


@@ -73,6 +73,7 @@ func (p *Publisher) Stop() {
}
func (p *Publisher) tickerLoop() {
+defer gocommon.LogOnPanic()
ticker := time.NewTicker(tickerInterval * time.Second)
go func() {
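`tickerLoop`, like the other goroutine entry points touched in this diff, now begins with `defer gocommon.LogOnPanic()`, the helper enforced by the new `LogOnPanic` linter. Roughly, such a helper recovers the panic and logs it with a stack trace; the sketch below is an illustrative approximation, not status-go's actual implementation:

```
package main

import (
	"runtime/debug"

	"go.uber.org/zap"
)

// logOnPanic recovers a panic in the surrounding goroutine and logs it with a
// stack trace instead of crashing the process. Illustrative approximation only.
func logOnPanic(logger *zap.Logger) {
	if r := recover(); r != nil {
		logger.Error("panic in goroutine",
			zap.Any("error", r),
			zap.String("stacktrace", string(debug.Stack())))
	}
}

func main() {
	logger, _ := zap.NewDevelopment()
	defer logger.Sync()

	done := make(chan struct{})
	go func() {
		defer logOnPanic(logger) // deferred at the top of the goroutine, as the linter requires
		defer close(done)
		panic("ticker loop blew up") // simulated failure
	}()
	<-done
	logger.Info("process survived the panic")
}
```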


@@ -581,7 +581,7 @@ func NewMessenger(
if c.wakuService != nil {
c.wakuService.SetStatusTelemetryClient(telemetryClient)
}
-go telemetryClient.Start(ctx)
+telemetryClient.Start(ctx)
}
messenger = &Messenger{
@@ -916,7 +916,7 @@ func (m *Messenger) Start() (*MessengerResponse, error) {
for _, c := range controlledCommunities {
if c.Joined() && c.HasTokenPermissions() {
-go m.communitiesManager.StartMembersReevaluationLoop(c.ID(), false)
+m.communitiesManager.StartMembersReevaluationLoop(c.ID(), false)
}
}
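Dropping the `go` keyword at these call sites turns them into plain calls; the usual motivation for this kind of refactor is to move the goroutine (and its panic handler) inside the called method so no caller can forget it. That reading is an assumption here; a self-contained sketch of the shape, with a hypothetical `client` type standing in for the telemetry client:

```
package main

import (
	"context"
	"fmt"
	"time"
)

// client is a hypothetical stand-in for the telemetry client; field names and
// behaviour are assumptions made for illustration.
type client struct {
	pending chan string
}

// Start returns immediately and owns its worker goroutine, so call sites can
// drop the `go` keyword and panic handling lives in exactly one place.
func (c *client) Start(ctx context.Context) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				fmt.Println("recovered panic in telemetry loop:", r)
			}
		}()
		for {
			select {
			case <-ctx.Done():
				return
			case item := <-c.pending:
				fmt.Println("sending telemetry:", item) // hypothetical send step
			}
		}
	}()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	c := &client{pending: make(chan string, 1)}
	c.Start(ctx) // plain call, as in the diff after this change
	c.pending <- "record"
	time.Sleep(50 * time.Millisecond)
}
```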


@@ -24,6 +24,10 @@ import (
const minContactVerificationMessageLen = 1
const maxContactVerificationMessageLen = 280
+var (
+ErrContactNotMutual = errors.New("must be a mutual contact")
+)
func (m *Messenger) SendContactVerificationRequest(ctx context.Context, contactID string, challenge string) (*MessengerResponse, error) {
if len(challenge) < minContactVerificationMessageLen || len(challenge) > maxContactVerificationMessageLen {
return nil, errors.New("invalid verification request challenge length")
@@ -31,7 +35,7 @@ func (m *Messenger) SendContactVerificationRequest(ctx context.Context, contactI
contact, ok := m.allContacts.Load(contactID)
if !ok || !contact.mutual() {
-return nil, errors.New("must be a mutual contact")
+return nil, ErrContactNotMutual
}
verifRequest := &verification.Request{
@@ -138,7 +142,7 @@ func (m *Messenger) SendContactVerificationRequest(ctx context.Context, contactI
func (m *Messenger) GetVerificationRequestSentTo(ctx context.Context, contactID string) (*verification.Request, error) {
_, ok := m.allContacts.Load(contactID)
if !ok {
-return nil, errors.New("contact not found")
+return nil, ErrContactNotFound
}
return m.verificationDatabase.GetLatestVerificationRequestSentTo(contactID)
@@ -279,7 +283,7 @@ func (m *Messenger) AcceptContactVerificationRequest(ctx context.Context, id str
contact, ok := m.allContacts.Load(contactID)
if !ok || !contact.mutual() {
-return nil, errors.New("must be a mutual contact")
+return nil, ErrContactNotMutual
}
chat, ok := m.allChats.Load(contactID)
@@ -394,7 +398,7 @@ func (m *Messenger) VerifiedTrusted(ctx context.Context, request *requests.Verif
contact, ok := m.allContacts.Load(contactID)
if !ok || !contact.mutual() {
-return nil, errors.New("must be a mutual contact")
+return nil, ErrContactNotMutual
}
err = m.setTrustStatusForContact(context.Background(), contactID, verification.TrustStatusTRUSTED)
@@ -589,7 +593,7 @@ func (m *Messenger) DeclineContactVerificationRequest(ctx context.Context, id st
contact, ok := m.allContacts.Load(verifRequest.From)
if !ok || !contact.mutual() {
-return nil, errors.New("must be a mutual contact")
+return nil, ErrContactNotMutual
}
contactID := verifRequest.From
contact, err = m.setContactVerificationStatus(contactID, VerificationStatusVERIFIED)
@@ -686,7 +690,7 @@ func (m *Messenger) DeclineContactVerificationRequest(ctx context.Context, id st
func (m *Messenger) setContactVerificationStatus(contactID string, verificationStatus VerificationStatus) (*Contact, error) {
contact, ok := m.allContacts.Load(contactID)
if !ok || !contact.mutual() {
-return nil, errors.New("must be a mutual contact")
+return nil, ErrContactNotMutual
}
contact.VerificationStatus = verificationStatus
@@ -714,6 +718,11 @@ func (m *Messenger) setContactVerificationStatus(contactID string, verificationS
}
func (m *Messenger) setTrustStatusForContact(ctx context.Context, contactID string, trustStatus verification.TrustStatus) error {
+contact, ok := m.allContacts.Load(contactID)
+if !ok {
+return ErrContactNotFound
+}
currentTime := m.getTimesource().GetCurrentTime()
err := m.verificationDatabase.SetTrustStatus(contactID, trustStatus, currentTime)
@@ -721,6 +730,9 @@ func (m *Messenger) setTrustStatusForContact(ctx context.Context, contactID stri
return err
}
+contact.TrustStatus = trustStatus
+m.allContacts.Store(contactID, contact)
return m.SyncTrustedUser(ctx, contactID, trustStatus, m.dispatchMessage)
}
@@ -784,7 +796,7 @@ func (m *Messenger) HandleRequestContactVerification(state *ReceivedMessageState
contact := state.CurrentMessageState.Contact
if !contact.mutual() {
m.logger.Debug("Received a verification request for a non added mutual contact", zap.String("contactID", contactID))
-return errors.New("must be a mutual contact")
+return ErrContactNotMutual
}
persistedVR, err := m.verificationDatabase.GetVerificationRequest(id)
@@ -875,7 +887,7 @@ func (m *Messenger) HandleAcceptContactVerification(state *ReceivedMessageState,
contact := state.CurrentMessageState.Contact
if !contact.mutual() {
m.logger.Debug("Received a verification response for a non mutual contact", zap.String("contactID", contactID))
-return errors.New("must be a mutual contact")
+return ErrContactNotMutual
}
persistedVR, err := m.verificationDatabase.GetVerificationRequest(request.Id)
@@ -964,7 +976,7 @@ func (m *Messenger) HandleDeclineContactVerification(state *ReceivedMessageState
contact := state.CurrentMessageState.Contact
if !contact.mutual() {
m.logger.Debug("Received a verification decline for a non mutual contact", zap.String("contactID", contactID))
-return errors.New("must be a mutual contact")
+return ErrContactNotMutual
}
persistedVR, err := m.verificationDatabase.GetVerificationRequest(request.Id)
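Replacing the repeated `errors.New("must be a mutual contact")` with the package-level `ErrContactNotMutual` (alongside `ErrContactNotFound`, which is assumed to be declared elsewhere in the package) lets callers check failures with `errors.Is` instead of comparing error strings. A small sketch of the difference, with a hypothetical `markAsTrusted` helper:

```
package main

import (
	"errors"
	"fmt"
)

// Package-level sentinels; ErrContactNotMutual mirrors the one added in the
// diff, ErrContactNotFound is assumed to be declared elsewhere in the package.
var (
	ErrContactNotMutual = errors.New("must be a mutual contact")
	ErrContactNotFound  = errors.New("contact not found")
)

// markAsTrusted is a hypothetical stand-in for a Messenger method that can
// fail with one of the sentinels, possibly wrapped with extra context.
func markAsTrusted(contactID string) error {
	if contactID == "" {
		return ErrContactNotFound
	}
	return fmt.Errorf("marking %s as trusted: %w", contactID, ErrContactNotMutual)
}

func main() {
	err := markAsTrusted("0xabc")

	// Before: string comparison, which breaks as soon as the message changes
	// or the error is wrapped (as it is here).
	if err != nil && err.Error() == "must be a mutual contact" {
		fmt.Println("string match:", err)
	}

	// After: errors.Is survives wrapping and message edits.
	if errors.Is(err, ErrContactNotMutual) {
		fmt.Println("sentinel match:", err)
	}
}
```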


@@ -769,3 +769,50 @@ func (s *MessengerVerificationRequests) newMessenger(shh types.Waku) *Messenger
s.Require().NoError(err)
return messenger
}
+func (s *MessengerVerificationRequests) TestTrustStatus() {
+theirMessenger := s.newMessenger(s.shh)
+defer TearDownMessenger(&s.Suite, theirMessenger)
+s.mutualContact(theirMessenger)
+theirPk := types.EncodeHex(crypto.FromECDSAPub(&theirMessenger.identity.PublicKey))
+// Test Mark as Trusted
+err := s.m.MarkAsTrusted(context.Background(), theirPk)
+s.Require().NoError(err)
+contactFromCache, ok := s.m.allContacts.Load(theirPk)
+s.Require().True(ok)
+s.Require().Equal(verification.TrustStatusTRUSTED, contactFromCache.TrustStatus)
+trustStatusFromDb, err := s.m.GetTrustStatus(theirPk)
+s.Require().NoError(err)
+s.Require().Equal(verification.TrustStatusTRUSTED, trustStatusFromDb)
+// Test Remove Trust Mark
+err = s.m.RemoveTrustStatus(context.Background(), theirPk)
+s.Require().NoError(err)
+contactFromCache, ok = s.m.allContacts.Load(theirPk)
+s.Require().True(ok)
+s.Require().Equal(verification.TrustStatusUNKNOWN, contactFromCache.TrustStatus)
+trustStatusFromDb, err = s.m.GetTrustStatus(theirPk)
+s.Require().NoError(err)
+s.Require().Equal(verification.TrustStatusUNKNOWN, trustStatusFromDb)
+// Test Mark as Untrustoworthy
+err = s.m.MarkAsUntrustworthy(context.Background(), theirPk)
+s.Require().NoError(err)
+contactFromCache, ok = s.m.allContacts.Load(theirPk)
+s.Require().True(ok)
+s.Require().Equal(verification.TrustStatusUNTRUSTWORTHY, contactFromCache.TrustStatus)
+trustStatusFromDb, err = s.m.GetTrustStatus(theirPk)
+s.Require().NoError(err)
+s.Require().Equal(verification.TrustStatusUNTRUSTWORTHY, trustStatusFromDb)
+// Test calling with an unknown contact
+err = s.m.MarkAsTrusted(context.Background(), "0x00000123")
+s.Require().Error(err)
+s.Require().Equal("contact not found", err.Error())
+}
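The new test checks both the in-memory contact cache and the database after each trust-status change, and asserts the unknown-contact failure by comparing `err.Error()` with `"contact not found"`. With the sentinel errors introduced above, the same check can be expressed with testify's `ErrorIs`, which keeps passing if the error is later wrapped; a sketch under that assumption, with a hypothetical stand-in for `MarkAsTrusted`:

```
package example

import (
	"errors"
	"fmt"
	"testing"

	"github.com/stretchr/testify/require"
)

var errContactNotFound = errors.New("contact not found")

// markAsTrusted is a hypothetical stand-in for Messenger.MarkAsTrusted failing
// for an unknown contact, wrapped with extra context.
func markAsTrusted(contactID string) error {
	return fmt.Errorf("mark %s as trusted: %w", contactID, errContactNotFound)
}

func TestUnknownContact(t *testing.T) {
	err := markAsTrusted("0x00000123")

	// Equivalent in spirit to s.Require().Equal("contact not found", err.Error()),
	// but still passes if the error is wrapped or its message is reworded.
	require.Error(t, err)
	require.ErrorIs(t, err, errContactNotFound)
}
```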


@@ -9,11 +9,10 @@ import (
"github.com/golang/protobuf/proto"
"go.uber.org/zap"
-"github.com/ethereum/go-ethereum/log"
"github.com/status-im/status-go/deprecation"
"github.com/status-im/status-go/eth-node/crypto"
"github.com/status-im/status-go/eth-node/types"
+"github.com/status-im/status-go/logutils"
multiaccountscommon "github.com/status-im/status-go/multiaccounts/common"
"github.com/status-im/status-go/protocol/common"
"github.com/status-im/status-go/protocol/protobuf"
@@ -1337,7 +1336,7 @@ func (m *Messenger) publishSelfContactSubscriptions(event *SelfContactChangeEven
select {
case s <- event:
default:
-log.Warn("self contact subscription channel full, dropping message")
+logutils.ZapLogger().Warn("self contact subscription channel full, dropping message")
}
}
}
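The subscription publisher keeps its non-blocking fan-out: a `select` with a `default` branch drops the event and logs a warning when a subscriber's channel is full, so one slow consumer cannot stall the messenger. A minimal sketch of that pattern:

```
package main

import "fmt"

type event struct{ name string }

// publish fans an event out to every subscriber without blocking: a full (or
// unread) channel just drops the event and logs, as in publishSelfContactSubscriptions.
func publish(subscribers []chan event, ev event) {
	for _, s := range subscribers {
		select {
		case s <- ev:
		default:
			fmt.Println("self contact subscription channel full, dropping message")
		}
	}
}

func main() {
	fast := make(chan event, 1)
	stuck := make(chan event) // unbuffered and never read: a send would block forever

	publish([]chan event{fast, stuck}, event{name: "display-name-changed"})
	fmt.Println("delivered to fast subscriber:", (<-fast).name)
}
```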


@@ -500,6 +500,8 @@ func (r *storeNodeRequest) shouldFetchNextPage(envelopesCount int) (bool, uint32
}
func (r *storeNodeRequest) routine() {
+defer gocommon.LogOnPanic()
r.manager.logger.Info("starting store node request",
zap.Any("requestID", r.requestID),
zap.String("pubsubTopic", r.pubsubTopic),


@@ -10,14 +10,14 @@ import (
"time"
"github.com/pkg/errors"
+"go.uber.org/zap"
-"github.com/ethereum/go-ethereum/log"
"github.com/mat/besticon/besticon"
"github.com/status-im/status-go/eth-node/crypto"
"github.com/status-im/status-go/images"
userimage "github.com/status-im/status-go/images"
+"github.com/status-im/status-go/logutils"
multiaccountscommon "github.com/status-im/status-go/multiaccounts/common"
"github.com/status-im/status-go/protocol/common"
@@ -1323,7 +1323,7 @@ func (db *sqlitePersistence) AddBookmark(bookmark browsers.Bookmark) (browsers.B
bookmark.ImageURL = icons[0].URL
}
} else {
-log.Error("error getting the bookmark icon", "iconError", iconError)
+logutils.ZapLogger().Error("error getting the bookmark icon", zap.Error(iconError))
}
_, err = insert.Exec(bookmark.URL, bookmark.Name, bookmark.ImageURL, bookmark.Removed, bookmark.Clock)


@@ -19,7 +19,7 @@ import (
)
const encryptedPayloadKeyLength = 16
-const defaultGorushURL = "https://gorush.status.im"
+const defaultGorushURL = "https://gorush.infra.status.im/"
var errUnhandledPushNotificationType = errors.New("unhandled push notification type")


@@ -66,7 +66,7 @@ type CreateAccount struct {
// If you want to use non-default network, use NetworkID.
CurrentNetwork string `json:"currentNetwork"`
NetworkID *uint64 `json:"networkId"`
-TestOverrideNetworks []params.Network `json:"-"` // This is used for testing purposes only
+TestOverrideNetworks []params.Network `json:"networksOverride"` // This is used for testing purposes only
TestNetworksEnabled bool `json:"testNetworksEnabled"`
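Switching the struct tag from `json:"-"` to `json:"networksOverride"` is what allows functional tests to inject networks through the JSON payload: with `json:"-"`, `encoding/json` skips the field on both marshal and unmarshal. A small sketch of the difference, using a trimmed stand-in for the real struct:

```
package main

import (
	"encoding/json"
	"fmt"
)

// network is a trimmed stand-in for params.Network.
type network struct {
	ChainID uint64 `json:"chainId"`
}

type createAccountOld struct {
	TestOverrideNetworks []network `json:"-"` // encoding/json ignores the field entirely
}

type createAccountNew struct {
	TestOverrideNetworks []network `json:"networksOverride"` // now settable from the request payload
}

func main() {
	payload := []byte(`{"networksOverride":[{"chainId":31337}]}`)

	var oldReq createAccountOld
	var newReq createAccountNew
	_ = json.Unmarshal(payload, &oldReq)
	_ = json.Unmarshal(payload, &newReq)

	fmt.Println("old tag:", len(oldReq.TestOverrideNetworks), "networks") // 0 networks
	fmt.Println("new tag:", len(newReq.TestOverrideNetworks), "networks") // 1 network
}
```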

Some files were not shown because too many files have changed in this diff.