Refactor the activity filter session API to account for the new structure
Also:
- Refactor test helpers to mock different chains with mixed answers
- Test the implementation
Updates status-desktop #12120
Switch from the prototype approach of duplicating the SQL filter as a runtime
copy and keeping the two in sync on each event that might invalidate the current
filtered entries, to a simpler approach of requesting the filter again
and diffing the results to detect the new changes.
Also add a new reset API to model the new entries design requirements.
The new approach has fewer corner cases to handle and follows the
single-source-of-truth concept, making debugging and future maintenance easier.
Other changes:
- Fix pending mocking to work with multiple calls
- Refactor tests to account for the new changes
Updates status-desktop #12120
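Illustrative sketch of the diff step described above (not the actual service code; the identity type and helper names are simplified assumptions):

```go
// Hypothetical identity type and helper, for illustration only.
package activity

type EntryIdentity struct {
	PayloadType int    // transfer, pending or multi-transaction
	ID          string // chain-scoped tx hash or multi-transaction ID
}

// newEntries diffs the previously delivered filter page against a freshly
// requested one, following the single-source-of-truth idea: the SQL filter
// stays the only source, the session merely detects what changed.
func newEntries(previous, current []EntryIdentity) []EntryIdentity {
	seen := make(map[EntryIdentity]struct{}, len(previous))
	for _, id := range previous {
		seen[id] = struct{}{}
	}
	var res []EntryIdentity
	for _, id := range current {
		if _, ok := seen[id]; !ok {
			res = append(res, id)
		}
	}
	return res
}
```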
This commit introduces the first steps towards implementing a session-based activity API to support dynamic updates of the current visualized filter in the wallet activity service. This change is necessary to move away from static paginated filtering, which was previously done in SQL, to a more dynamic approach that can handle updates in real-time.
The main changes include:
- Add basic `EventActivitySessionUpdated` support for pending transactions (see the sketch below).
- Add a `TODO.md` file outlining the plan and requirements for dynamic activity updates.
- Add a new session-related API to the `activity.Service`.
- Add `session.go` with the logic for session management and event processing related to activity updates.
- Add a test case for incremental filter updates.
The commit also includes:
- various other minor changes and refactoring to support the new session-based approach.
- Deprecation notices added to the `api.go` file for methods that are no longer used by the status-desktop application.
- Clarification comments added to the `scheduler.go` file regarding replacement policies.
Updates: #12120
ghstack-source-id: a61ef74184
Pull Request resolved: https://github.com/status-im/status-go/pull/4480
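A rough sketch of what the `EventActivitySessionUpdated` payload could carry, following the description above; the field names and shape are assumptions, not the real wire format:

```go
// Hypothetical payload shape for EventActivitySessionUpdated; names assumed.
package activity

type SessionUpdate struct {
	// True when entries newer than the first one in the session were detected.
	HasNewOnTop bool `json:"hasNewOnTop,omitempty"`
	// Identities (e.g. tx hashes) of new pending entries matching the filter.
	New []string `json:"new,omitempty"`
	// Identities of entries that no longer match the filter.
	Removed []string `json:"removed,omitempty"`
}
```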
The amount for pending transactions was read in the same
way as for the transfers table. However, the transfers table holds a string
hex representation of the amount, while the pending transactions table
holds a binary representation of the amount (*big.Int). This was
triggering the "not int" warning and the value was missing.
Updates status-desktop #12120
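A sketch of handling both storage formats, assuming the transfers table holds a hex string and the pending transactions table holds raw big-endian bytes as described; the helper name is illustrative:

```go
package activity

import (
	"fmt"
	"math/big"
)

// scanAmount accepts either storage format described above.
func scanAmount(raw interface{}) (*big.Int, error) {
	switch v := raw.(type) {
	case string: // transfers: "0x..." hex string representation
		amount, ok := new(big.Int).SetString(v, 0)
		if !ok {
			return nil, fmt.Errorf("invalid amount string %q", v)
		}
		return amount, nil
	case []byte: // pending_transactions: raw big.Int bytes
		return new(big.Int).SetBytes(v), nil
	default:
		return nil, fmt.Errorf("unexpected amount type %T", raw)
	}
}
```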
In case both the to and from addresses are present in the list, we were using
the same logic as for transfers. However, this doesn't make sense given
that we can have only one entry in pending activity.
The following cases are still covered (see the sketch below):
- When the receiver is in addresses we get received
- When both receiver and sender are in the list we get sent
- When the sender is in the list we get sent
Updates status-desktop #12120
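The rules above boil down to roughly this (simplified types, illustrative only; the sender wins when both sides are in the list):

```go
package activity

type Address string

func containsAddress(list []Address, a Address) bool {
	for _, x := range list {
		if x == a {
			return true
		}
	}
	return false
}

// pendingActivityType mirrors the cases listed above for the single
// pending-activity entry: sender in list => send (also when both are in),
// otherwise receiver in list => receive.
func pendingActivityType(filter []Address, from, to Address) string {
	if containsAddress(filter, from) {
		return "send"
	}
	if containsAddress(filter, to) {
		return "receive"
	}
	return ""
}
```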
Commands no longer keep restarting on error non-stop; removed some tests accordingly.
Fixed a flaky test for the loadBlocksAndTransfers command.
Added tests for async.AtomicGroup.
Made transfersCommand a FiniteCommandWithErrorCounter to prevent infinite
restart.
- favourite column removed from the saved_addresses table
- favourite property removed from the SavedAddress struct
- ens name removed from the primary key; the primary key is now composed of the address and is_test columns (see the sketch below)
- ens parameter removed from wakuext_deleteSavedAddress
- wallet_getSavedAddresses moved to wakuext_getSavedAddresses (to keep them all in a single place)
- saved-addresses-related endpoints removed from the wallet service, even though they logically belong there; the reason
is that calls made through the wallet service would not emit the sync message, while calls through the ext service do. Once
we refactor this and introduce a device-syncing mechanism in the wallet service, we should move not only these but other
wallet-related endpoints there (removed: wallet_getSavedAddresses, wallet_addSavedAddress and wallet_deleteSavedAddress).
Affected area:
Saved addresses
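Sketch of the resulting table shape under these notes; only the dropped favourite column, the is_test column and the (address, is_test) primary key come from the list above, the remaining columns and the migration pattern are assumptions:

```go
package migrations

// Hypothetical recreation of the table; SQLite cannot alter a primary key in
// place, so the usual pattern is create-copy-drop-rename.
const recreateSavedAddresses = `
CREATE TABLE saved_addresses_new (
    address VARCHAR NOT NULL,
    name    TEXT    NOT NULL DEFAULT '',
    is_test BOOLEAN NOT NULL DEFAULT FALSE,
    PRIMARY KEY (address, is_test)
);
INSERT OR REPLACE INTO saved_addresses_new (address, name, is_test)
    SELECT address, name, is_test FROM saved_addresses;
DROP TABLE saved_addresses;
ALTER TABLE saved_addresses_new RENAME TO saved_addresses;
`
```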
Reading the Nonce from the local cache may be incorrect if the tx is made outside of the Status app or
if the Status app sends a tx prepared by a dapp (via WalletConnect). A submitted tx with a wrong Nonce
results in a failing tx, which is why we need to read the Nonce from the network and
mock it in tests.
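For illustration, reading the next nonce from the network with go-ethereum's client looks roughly like this (the RPC URL and helper name are placeholders):

```go
package transactions

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// networkNonce asks the node instead of trusting the local cache;
// PendingNonceAt also counts txs already waiting in the mempool.
func networkNonce(ctx context.Context, rpcURL string, account common.Address) (uint64, error) {
	client, err := ethclient.DialContext(ctx, rpcURL)
	if err != nil {
		return 0, fmt.Errorf("dial %s: %w", rpcURL, err)
	}
	defer client.Close()
	return client.PendingNonceAt(ctx, account)
}
```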
Made the timeout interval configurable for the Commander interface.
Added tests to verify that the loadBlocksAndTransfers command is stopped
correctly when the max-errors limit is reached.
Proper handling of errors and command restarts.
Now:
- Infinite commands are started only once and never restarted; they are stopped on
context.Done.
- Finite commands are joined into an AtomicGroup to stop the rest of the
group in case one command fails; previously, the other commands in the group
would continue running and the failed command would not be retried. This also
fixes a goroutine leak in case some commands fail (see the sketch below).
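The atomic-group behaviour described above (one failing finite command cancels its siblings, so nothing is left running) is the same pattern the standard errgroup package provides; this is only an illustration of the semantics, not the status-go async.AtomicGroup implementation:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

func main() {
	g, ctx := errgroup.WithContext(context.Background())

	g.Go(func() error { // finite command that fails quickly
		return errors.New("block fetch failed")
	})
	g.Go(func() error { // sibling command: stops when the group context is canceled
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(10 * time.Second):
			return nil
		}
	})

	// Wait returns the first error; the sibling exited on ctx.Done, so no
	// goroutine is leaked.
	fmt.Println("group finished with:", g.Wait())
}
```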
`getLogs` for multiple accounts simultaneously. For now this is only used for
new transfers detection. Detection of `new` transfers has been changed:
they are now searched from the head forward. Previously they were
searched from the last scanned block forward.
- `WalletConnectTransfer` identified as a new transfer type
- Wallet-related endpoints that logically belong to the wallet moved from the wallet connect service
- Wallet connect service now receives `transfer.TransactionManager` instead of `transactions.Transactor`
- Deadlock issue when trying to send the tx with the wrong nonce fixed
Add new APIs to track whether valid pairings are available, so the
application can avoid running the WalletConnect SDK when it is not needed.
Closes status-desktop: #12794
Refactored transfers loading to reduce blockchain RPC requests (getBaseFee, getTransaction,
getTransactionReceipt) by reusing preloaded transaction and block fee.
Split extraction of subtransaction from logs and from ETH transfer into
different methods.
Refactored log_parser to extract sender and receiver addresses
uniformly for different transfer types.
Replaced info logs with debug where needed.
Closes #4221
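Sketch of the reuse idea (struct and function names are illustrative): the already-fetched transaction, receipt and block base fee are passed down, so the per-subtransaction code can derive the fee locally instead of issuing getTransaction/getBaseFee again.

```go
package transfer

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// preloadedTransaction carries data already fetched once per transaction so
// downstream code does not repeat getTransaction/getTransactionReceipt/getBaseFee.
type preloadedTransaction struct {
	Tx           *types.Transaction
	Receipt      *types.Receipt
	BlockBaseFee *big.Int // from the already-downloaded block header; nil pre-London
}

// effectiveFee derives the paid fee locally from the preloaded data.
func effectiveFee(p preloadedTransaction) *big.Int {
	price := p.Tx.GasPrice() // legacy gas price
	if p.BlockBaseFee != nil {
		tip := p.Tx.EffectiveGasTipValue(p.BlockBaseFee)
		price = new(big.Int).Add(tip, p.BlockBaseFee)
	}
	return new(big.Int).Mul(price, new(big.Int).SetUint64(p.Receipt.GasUsed))
}
```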
This functionality is needed in case the user wants to send a transaction and
signs it using the signature provided by the keycard (or any other compatible way).
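Illustrative flow for attaching an externally produced signature (e.g. from the keycard) to a prepared transaction before broadcasting; the function and its parameters are placeholders, only the go-ethereum calls are real:

```go
package transactions

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// sendWithExternalSignature attaches a 65-byte [R || S || V] signature
// produced outside the app to the prepared tx and broadcasts it.
func sendWithExternalSignature(ctx context.Context, client *ethclient.Client, chainID *big.Int,
	nonce uint64, to common.Address, value *big.Int, gasLimit uint64, gasPrice *big.Int,
	sig []byte) error {

	tx := types.NewTransaction(nonce, to, value, gasLimit, gasPrice, nil)
	signer := types.LatestSignerForChainID(chainID)

	signedTx, err := tx.WithSignature(signer, sig) // sig produced by the keycard
	if err != nil {
		return err
	}
	return client.SendTransaction(ctx, signedTx)
}
```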
- use `appdatabase.DbInitializer{}` in tests to ensure consistent migrations
- remove protocol's open database functions due to improper
initialization caused by missing node config migration
- introduce `PushNotificationServerConfig` to resolve cyclic dependency
issues
The activity type filtering was not stable in relation to the addresses
filter, which was generating an unexpected Send/Receive type in the
corner case when both the sender and the receiver were in the address list.
Updates status-desktop #11960
Brings consistency to the case when the sender and the receiver are both in the
filter address list. This fixes the case of both sender and receiver being in
addresses and filters out duplicate entries.
Also:
- refactor tests to provide support for owners
- adapt TestGetActivityEntriesWithSameTransactionForSenderAndReceiverInDB
to the use of owner instead of from
Trigger async fetching of extra information on each activity filtering
request. Only emit the update event for incomplete entries.
Other changes:
- Make DataEntry lightweight as an event payload by making all the fields
optional
- Add new required fields to the activity DataEntry
- Add collectibles.ManagerInterface to aid testing
Note: this PR keeps compatibility with current master by always
providing non-optional multi-transaction ID. The TODO will be executed
before merging the status-desktop PR.
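Rough sketch of the flow (type and event names below are assumptions, not the actual service code): incomplete entries are resolved in the background and only those trigger the update event.

```go
// Hypothetical types and event name, for illustration only.
package activity

import "context"

type Entry struct {
	ID                 string
	HasCollectibleInfo bool // false means extra details still need fetching
}

type eventEmitter interface {
	Emit(event string, payload interface{})
}

// fetchExtraAsync resolves missing details in the background and emits an
// update event only for the entries that were incomplete in the response.
func fetchExtraAsync(ctx context.Context, emitter eventEmitter, entries []Entry,
	resolve func(context.Context, Entry) Entry) {

	var incomplete []Entry
	for _, e := range entries {
		if !e.HasCollectibleInfo {
			incomplete = append(incomplete, e)
		}
	}
	if len(incomplete) == 0 {
		return // every entry was complete, no event needed
	}
	go func() {
		updated := make([]Entry, 0, len(incomplete))
		for _, e := range incomplete {
			updated = append(updated, resolve(ctx, e))
		}
		emitter.Emit("wallet-activity-entries-updated", updated)
	}()
}
```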
Experienced a hang on FetchAssetsByCollectibleUniqueID call with:
[{{5 0x21263a042aFE4bAE34F08Bb318056C181bD96D3b} 1209},
{{5 0x9A95631794a42d30C47f214fBe02A72585df35e1} 237},
{{5 0x9A95631794a42d30C47f214fBe02A72585df35e1} 236},
{{5 0x9A95631794a42d30C47f214fBe02A72585df35e1} 832},
{{5 0x9A95631794a42d30C47f214fBe02A72585df35e1} 830},
{{5 0x9A95631794a42d30C47f214fBe02A72585df35e1} 853}]
Updates status-desktop #11597
Balance history was not checked for all chains if there was no history on
some chain.
Removed `SetInitialRange` from the wallet API, as it is an internal implementation detail.
This method was called when adding a brand new Status account to initialize the
blocks_range table and avoid transfer history checks.
Mainly refactor the API to have control over pending_transactions operations.
Use the new API to migrate the multi-transaction ID to transfers
in one SQL transaction.
The refactoring was done to better mirror the purpose of pending_transactions.
Also:
- Externalize TransactionManager from WalletService to be used by
other services
- Extract walletEvent as a dependency for all services that need to
propagate events
- Batch chain requests
- Remove unused APIs
- Add auto delete option for clients that fire and forget transactions
Updates status-desktop #11754
Added a unit test for changing app and wallet DBs passwords.
Refactored geth_backend to simplify and allow wallet db password changing.
Fixed opening database with wrong password.
Introduced an interface for initializing the DB, implemented for appdatabase and
walletdatabase; TBD for the multiaccounts DB.
Unified DB initialization for all tests using helpers and the new interface.
Reduced sqlcipher kdf iterations for all tests to 1.
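The shared initializer boils down to something along these lines (the method signature is an assumption; only the idea of a common interface implemented by appdatabase and walletdatabase comes from the notes above):

```go
package dbsetup

import "database/sql"

type DatabaseInitializer interface {
	// Initialize opens (or creates) the encrypted database at path and runs
	// all of its migrations so every caller gets the same schema.
	Initialize(path, password string, kdfIterationsNumber int) (*sql.DB, error)
}
```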
The only place where appDB is used in wallet is activity,
which refers to `keycards_accounts` table. So a temporary
table `keycards_accounts` is created in wallet db and updated
before each activity query.
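A minimal sketch of the refresh step, assuming a single-column shadow table; the real table and column names may differ:

```go
package activity

import "database/sql"

// refreshKeycardsAccounts copies the keycard accounts from the app DB into the
// shadow table in the wallet DB right before an activity query runs.
func refreshKeycardsAccounts(appDB, walletDB *sql.DB) error {
	rows, err := appDB.Query(`SELECT account_address FROM keycards_accounts`)
	if err != nil {
		return err
	}
	defer rows.Close()

	var addresses []string
	for rows.Next() {
		var addr string
		if err := rows.Scan(&addr); err != nil {
			return err
		}
		addresses = append(addresses, addr)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	tx, err := walletDB.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	if _, err := tx.Exec(`DELETE FROM keycards_accounts`); err != nil {
		return err
	}
	for _, addr := range addresses {
		if _, err := tx.Exec(`INSERT INTO keycards_accounts (account_address) VALUES (?)`, addr); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```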
Adding new smart contracts and generated Go files.
Deploy token owner function and master token address getter.
Adding deployer and privilegesLevel columns to community_tokens table.
Passing addressFrom to API calls.
Issue #11250
1) Only fetch history for networks that match the enabled test mode
* Trade-off: history is refetched only every 12 hours, so changing test mode won't trigger a refetch until the app is restarted or 12 hours pass.
I think this is OK as changing test mode is not a common use case
2) Do not consider whether networks are enabled or not, as this can change more often than every 12 hours
Add wallet events feed to TransactionManager and send pending changed
events on add and delete
Also:
- Remove hardcoded values in the filter query
- Small improvement to query
Updates status-desktop #11233
This fixes returning the address identity after it was changed by
the real to/from fix. Previously the address was wrongly used as `from`.
Updates status-desktop #11233
Added test to validate the fix
Also change the JOIN to a LEFT JOIN between the `multi_transactions` and `transactions` tables so that we return all entries from the `multi_transactions` table even if there is no matching entry in the `transactions` table, which might be the case for multi-transactions that could not be detected. This will postpone the error case until we get into the details of the multi-transaction.
Updates status-desktop #11233
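Simplified illustration of the join change (column names are assumptions, not the production query): with a LEFT JOIN the multi_transactions rows without a detected transaction still come back, with the transaction columns as NULL.

```go
package activity

// Column names below are assumptions; only the LEFT JOIN direction matters.
const multiTransactionRows = `
SELECT mt.id, mt.from_address, mt.to_address, tx.hash
FROM multi_transactions mt
LEFT JOIN transactions tx ON tx.multi_transaction_id = mt.id
`
```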
Implement activity.Scheduler to serialize and limit the number of
calls to the activity service. This way we protect against inefficient
parallel queries and easily support async execution and rate limiting
based on the API requirements.
Refactor the activity APIs to be async and use the Scheduler for managing
the activity service calls, configured with one of the two policies: cancel
or ignore.
Updates status-desktop #11170
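Stripped-down sketch of the two policies (policy names and API are a simplification, not the real activity.Scheduler, which keys tasks by type): cancel replaces the in-flight task, ignore keeps it and drops the new request.

```go
package scheduler

import (
	"context"
	"sync"
)

type ReplacementPolicy int

const (
	ReplacementPolicyCancelOld ReplacementPolicy = iota // new request cancels the one in flight
	ReplacementPolicyIgnoreNew                          // new request is dropped while one is in flight
)

type Scheduler struct {
	mu      sync.Mutex
	cancel  context.CancelFunc
	taskSeq int
}

// Enqueue runs fn asynchronously, applying the policy when a task is already
// running. It reports whether fn was accepted.
func (s *Scheduler) Enqueue(parent context.Context, policy ReplacementPolicy, fn func(context.Context)) bool {
	s.mu.Lock()
	if s.cancel != nil {
		if policy == ReplacementPolicyIgnoreNew {
			s.mu.Unlock()
			return false
		}
		s.cancel() // ReplacementPolicyCancelOld: stop the task in flight
	}
	ctx, cancel := context.WithCancel(parent)
	s.cancel = cancel
	s.taskSeq++
	seq := s.taskSeq
	s.mu.Unlock()

	go func() {
		defer func() {
			cancel()
			s.mu.Lock()
			if s.taskSeq == seq { // only clear state if no newer task replaced us
				s.cancel = nil
			}
			s.mu.Unlock()
		}()
		fn(ctx)
	}()
	return true
}
```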
Main changes:
- Refactor activity API to propagate token identities.
- Extend the service to convert token identities to symbols for filtering
multi-transactions
- Filter transfers, pending_transactions and multi-transactions based
on the provided token identities
- Return involved token identities in activity API
- Test token filtering
Also:
- Fixed calling cancel on a completed filter activity task to release
resources
Notes:
- Found limitations with the token identity approach, which complicates things
by not allowing filtering by token groups (as token-code does)
Updates status-desktop #11025
The new API returns all known recipients of a wallet, by
sourcing transfers, pending_transactions and multi_transactions tables
The API is synchronous. Future work will be to make it async.
In some corner cases, when watching a famous wallet, it can
be that there are too many recipients to be returned in one go. Offset
and limit can be used to paginate through the results.
Updates status-desktop #10025
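Hedged sketch of how the three tables can be sourced with offset/limit pagination; the column names are assumptions, only the tables and the pagination come from the description above.

```go
package activity

const recipientsQuery = `
SELECT to_address FROM (
    SELECT tx_to_address AS to_address FROM transfers
    UNION
    SELECT to_address FROM pending_transactions
    UNION
    SELECT to_address FROM multi_transactions
)
WHERE to_address IS NOT NULL
ORDER BY to_address
LIMIT ? OFFSET ?
`
```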
Change smart contract with new API.
Update gas amount for deployment.
Add Burn() and EstimateBurn() functions.
Add RemainingSupply() function.
Issue #10816
fetch strategy:
Before:
- block fetching commands for different accounts were in the same wait
group, making them dependent on each other on every iteration.
- the transfers loading command was checking the database for new unloaded
blocks on a timeout and was in the same wait group as block fetching, so
it was often blocked until all block fetching commands finished for the
iteration.
Now:
- block fetching commands run independently for each account
- transfers fetching command is run once on startup for unloaded blocks
from DB
- fetch history blocks commands are launched once on startup for
accounts with no full history loaded
- transfers are loaded on each iteration of block range check
without waiting for all ranges to be checked
Refactor the filter interface to be an async call which returns
the result using a wallet event.
A new call to the filter API cancels the ongoing filter, which then
completes with an error result event.
Closes status-desktop #10994
Main changes:
- extend the DB query to include status for transactions and pending_transactions
- extend the DB query to aggregate data from complex activity entries
- extend tests to include the new status values
Other changes:
- Improve tests with mocked addresses in DB to have a true to/from state
Update status-desktop #10746
Extended the migration process with a generic way of applying custom
migration code on top of the SQL files. The implementation provides
a safer way to run GO code along with the SQL migrations and possibility
of rolling back the changes in case of failure to keep the database
consistent.
This custom GO migration is needed to extract the status from
the JSON blob receipt and store it in transfers table.
Other changes:
- Add NULL DB value tracking to JSONBlob helper
- Index status column on transfers table
- Remove unnecessary panic calls
- Move log_parser to wallet's common package and use to extract token
identity from the logs
Notes:
- there is already an index on the transfers table (SQLite creates one for
each unique constraint), therefore only the status column was added to a new index
- the planned refactoring and improvements to the database have been
postponed due to time constraints. Got the time to migrate the data
though; extracting it can be done later for a more efficient
implementation
Update status-desktop #10746
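A minimal sketch of such a custom Go step, assuming the receipt blob is the standard go-ethereum receipt JSON and illustrative column names; it runs in one transaction so a failure rolls everything back, as described above.

```go
package migrations

import (
	"database/sql"
	"encoding/json"
)

type receiptEnvelope struct {
	Status string `json:"status"` // e.g. "0x1" for success (assumed blob layout)
}

func migrateReceiptStatus(db *sql.DB) (err error) {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			tx.Rollback() // keep the database consistent on failure
		}
	}()

	rows, err := tx.Query(`SELECT hash, receipt FROM transfers WHERE receipt IS NOT NULL`)
	if err != nil {
		return err
	}
	defer rows.Close()

	type update struct{ hash, status string }
	var updates []update
	for rows.Next() {
		var u update
		var blob []byte
		if err = rows.Scan(&u.hash, &blob); err != nil {
			return err
		}
		var r receiptEnvelope
		if err = json.Unmarshal(blob, &r); err != nil {
			return err
		}
		u.status = r.Status
		updates = append(updates, u)
	}
	if err = rows.Err(); err != nil {
		return err
	}
	for _, u := range updates {
		if _, err = tx.Exec(`UPDATE transfers SET status = ? WHERE hash = ?`, u.status, u.hash); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```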