This commit adds new database tables, plus APIs in `Messenger` and the
communities `Manager`, to store `CheckChannelPermissionsResponse`s.
The responses are stored whenever channel permissions have been checked.
The reason we're doing this is so that clients can retrieve the last
known channel permission state before the on-chain checks finish.
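A minimal sketch of the storage side, assuming a hypothetical `channel_permission_responses` table and simplified types (the actual status-go schema and API differ):
```
package communities

import (
	"database/sql"
	"encoding/json"
)

// CheckChannelPermissionsResponse is a simplified stand-in for the real type.
type CheckChannelPermissionsResponse struct {
	ViewOnlyPermissions    bool `json:"viewOnlyPermissions"`
	ViewAndPostPermissions bool `json:"viewAndPostPermissions"`
}

// saveCheckResult overwrites the stored response whenever channel
// permissions have been checked.
func saveCheckResult(db *sql.DB, chatID string, resp *CheckChannelPermissionsResponse) error {
	payload, err := json.Marshal(resp)
	if err != nil {
		return err
	}
	_, err = db.Exec(
		`INSERT OR REPLACE INTO channel_permission_responses (chat_id, response) VALUES (?, ?)`,
		chatID, payload,
	)
	return err
}

// lastKnownCheckResult returns the most recently stored response so clients
// can render it immediately while a fresh on-chain check runs.
func lastKnownCheckResult(db *sql.DB, chatID string) (*CheckChannelPermissionsResponse, error) {
	var payload []byte
	err := db.QueryRow(
		`SELECT response FROM channel_permission_responses WHERE chat_id = ?`,
		chatID,
	).Scan(&payload)
	if err != nil {
		return nil, err
	}
	var resp CheckChannelPermissionsResponse
	err = json.Unmarshal(payload, &resp)
	return &resp, err
}
```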
Main changes:
- Refactor activity API to propagate token identities.
- Extend service to convert token identities to symbols for filtering
multi-transactions
- Filter transfers, pending_transactions and multi-transactions based
on the provided token identities
- Return involved token identities in activity API
- Test token filtering
Also:
- Fixed calling cancel on a completed filter activity task to release
resources
Notes:
- Found limitations with the token identity, which complicates things
by not allowing filtering by token groups (as token-code does)
Updates status-desktop #11025
Improve error management to avoid DB corruption in case the process is killed while encrypting the DB.
Changes:
Use sqlcipher_export instead of rekey to change the DB password. The advantage is that sqlcipher_export will operate on a new DB file and we don't need to modify the current account unless the export is successful.
Keeping rekey would require creating a DB copy before re-encrypting the DB, but copying is risky in case the DB file changes while the copy is in progress; it could also lead to DB corruption.
* perf(sqlCipher): Increase cipher page size to 8192
Increasing the cipher page size to 8192 requires DB re-encryption. The process is as follows:
```
-- Login to the v3 DB
PRAGMA key = 'key';
PRAGMA cipher_page_size = 1024; -- old page size
PRAGMA cipher_hmac_algorithm = HMAC_SHA1;
PRAGMA cipher_kdf_algorithm = PBKDF2_HMAC_SHA1;
PRAGMA kdf_iter = kdfIterationsNumber;

-- Create the v4 DB with the increased page size
ATTACH DATABASE 'newdb.db' AS newdb KEY 'key';
PRAGMA newdb.cipher_page_size = 8192; -- new page size
PRAGMA newdb.cipher_hmac_algorithm = HMAC_SHA1; -- same as in v3
PRAGMA newdb.cipher_kdf_algorithm = PBKDF2_HMAC_SHA1; -- same as in v3
PRAGMA newdb.kdf_iter = kdfIterationsNumber; -- same as in v3
SELECT sqlcipher_export('newdb');
DETACH DATABASE newdb;

-- Login to the v4 DB
...
```
Worth noting:
The DB migration will happen on the first successful login.
The new DB version will have a different name so we can distinguish between DB versions. Version naming mirrors the sqlcipher major version (the naming convention used by sqlcipher), meaning we're migrating from a v3 to a v4 DB (even if we're not fully aligned with v4 standards). The DB is not migrated to the v4-standard `SHA512` for performance reasons; our custom `SHA1` implementation is fully optimised for performance.
* perf(sqlCipher): Fixing failing tests
Update the new DB file format in Delete account, Change password and Decrypt database flows
* perf(SQLCipher): Increase page size - send events to notify when the DB re-encryption starts/ends
Extended the migration process with a generic way of applying custom
migration code on top of the SQL files. The implementation provides
a safer way to run Go code along with the SQL migrations, with the
possibility of rolling back the changes in case of failure to keep the
database consistent.
This custom Go migration is needed to extract the status from
the JSON blob receipt and store it in the transfers table.
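Roughly, the mechanism can be pictured like this (a sketch under assumed names, not the actual migration framework code): run the SQL migration and the custom Go step inside one transaction, so a failure in either rolls back both.
```
package migrations

import (
	"database/sql"
	"fmt"
)

// migrateWithCustomStep applies an SQL "up" migration together with a custom
// Go step (e.g. extracting the status from the JSON blob receipt into the
// transfers table). If either part fails, the whole transaction is rolled
// back and the database stays consistent.
func migrateWithCustomStep(db *sql.DB, sqlUp string, customStep func(*sql.Tx) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if _, err := tx.Exec(sqlUp); err != nil {
		tx.Rollback()
		return fmt.Errorf("sql migration failed: %w", err)
	}
	if err := customStep(tx); err != nil {
		tx.Rollback()
		return fmt.Errorf("custom go migration failed: %w", err)
	}
	return tx.Commit()
}
```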
Other changes:
- Add NULL DB value tracking to JSONBlob helper
- Index status column on transfers table
- Remove unnecessary panic calls
- Move log_parser to wallet's common package and use it to extract the
token identity from the logs
Notes:
- there is already an index on the transfers table (sqlite creates one
for each unique constraint), therefore only the status column is added
to a new index
- the planned refactoring and improvements to the database have been
postponed due to time constraints. There was time to migrate the data,
though; extracting it can be done later for a more efficient
implementation
Update status-desktop #10746
Improve performance of queries for
- transfer.GetTransfersInRange
- transfer.GetTransfersByAddress
- transfer.GetTransfersByAddressAndBlock
- transfer.GetTransfers
- transfer.GetPreloadedTransactions
For a worst-case scenario of 16952 entries, tested with `sqlcipher`:
- Before: Run Time: real 0.897 user 0.728139 sys 0.166714
- After: Run Time: real 0.001 user 0.000437 sys 0.000189
A single composite index (alongside the default one) might work better, though
- old `accounts` table is moved/mapped to `keypairs` and `keypairs_accounts`
- `keycards` table has foreign key which refers to `keypairs.key_uid`
- `Keypair` introduced as a new type
- API endpoints updated according to this change
This commit does a few things:
1) Extend create/import account endpoint to get wallet config, some of
which has been moved to the backend
2) Set up a loop for retrieving balances every 10 minutes, caching the
balances (sketched after this list)
3) Return information about which checks are not passing when trying to
join a token gated community
4) Add tests to the token gated communities
5) Fixes an issue with addresses not matching when checking for
permissions
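For item 2, a hedged sketch of the shape of that loop (types and names are illustrative, not the actual wallet service code):
```
package wallet

import (
	"context"
	"sync"
	"time"
)

type balanceCache struct {
	mu       sync.RWMutex
	balances map[string]string // address -> balance, illustrative
}

func (c *balanceCache) set(addr, bal string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.balances[addr] = bal
}

// startBalanceLoop refreshes cached balances every 10 minutes until the
// context is cancelled.
func startBalanceLoop(ctx context.Context, cache *balanceCache, fetch func() (map[string]string, error)) {
	ticker := time.NewTicker(10 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			latest, err := fetch()
			if err != nil {
				continue // keep serving the cached balances on failure
			}
			for addr, bal := range latest {
				cache.set(addr, bal)
			}
		}
	}
}
```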
The move to a wallet background task is not yet complete: I still need
to publish a signal, and most likely I will disable it before merging
for now, as it's currently not used by desktop/mobile, but the PR was
getting too big.
This is the initial implementation for the new URL unfurling requirements. The
most important one is that only the message sender will pay the privacy cost for
unfurling and extracting metadata from websites. Once the message is sent, the
unfurled data will be stored at the protocol level and receivers will just
profit and happily decode the metadata to render it.
Further development of this URL unfurling capability will be mostly guided by
issues created on clients. For the moment in status-mobile:
https://github.com/status-im/status-mobile/labels/url-preview
- https://github.com/status-im/status-mobile/issues/15918
- https://github.com/status-im/status-mobile/issues/15917
- https://github.com/status-im/status-mobile/issues/15910
- https://github.com/status-im/status-mobile/issues/15909
- https://github.com/status-im/status-mobile/issues/15908
- https://github.com/status-im/status-mobile/issues/15906
- https://github.com/status-im/status-mobile/issues/15905
### Terminology
In the code, I've tried to stick to the word "unfurl URL" to really mean the
process of extracting metadata from a website, sort of lower level. I use "link
preview" to mean a higher level structure which is enriched by unfurled data.
"link preview" is also how designers refer to it.
### User flows
1. Carol needs to see link previews while typing in the chat input field. Notice
from the diagram that nothing is persisted and that status-go endpoints are
essentially stateless.
```
#+begin_src plantuml :results verbatim
Client->>Server: Call wakuext_getTextURLs
Server-->>Client: Normalized URLs
Client->>Client: Render cached unfurled URLs
Client->>Server: Unfurl non-cached URLs.\nCall wakuext_unfurlURLs
Server->>Website: Fetch metadata
Website-->>Server: Metadata (thumbnail URL, title, etc)
Server->>Website: Fetch thumbnail
Server->>Website: Fetch favicon
Website-->>Server: Favicon bytes
Website-->>Server: Thumbnail bytes
Server->>Server: Decode & process images
Server-->>Client: Unfurled data (thumbnail data URI, etc)
#+end_src
```
```
,------. ,------. ,-------.
|Client| |Server| |Website|
`--+---' `--+---' `---+---'
| Call wakuext_getTextURLs | |
| ---------------------------------------> |
| | |
| Normalized URLs | |
| <- - - - - - - - - - - - - - - - - - - - |
| | |
|----. | |
| | Render cached unfurled URLs | |
|<---' | |
| | |
| Unfurl non-cached URLs. | |
| Call wakuext_unfurlURLs | |
| ---------------------------------------> |
| | |
| | Fetch metadata |
| | ------------------------------------>
| | |
| | Metadata (thumbnail URL, title, etc)|
| | <- - - - - - - - - - - - - - - - - -
| | |
| | Fetch thumbnail |
| | ------------------------------------>
| | |
| | Fetch favicon |
| | ------------------------------------>
| | |
| | Favicon bytes |
| | <- - - - - - - - - - - - - - - - - -
| | |
| | Thumbnail bytes |
| | <- - - - - - - - - - - - - - - - - -
| | |
| |----. |
| | | Decode & process images |
| |<---' |
| | |
| Unfurled data (thumbnail data URI, etc)| |
| <- - - - - - - - - - - - - - - - - - - - |
,--+---. ,--+---. ,---+---.
|Client| |Server| |Website|
`------' `------' `-------'
```
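To make the flow concrete, here is a hedged sketch of how a client could drive it over JSON-RPC. Only the method names come from this PR; the endpoint address and parameter shapes are assumptions:
```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// call posts a JSON-RPC 2.0 request to a locally running node; the address
// and port are assumptions for the sketch.
func call(method string, params interface{}) (json.RawMessage, error) {
	body, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  method,
		"params":  params,
	})
	resp, err := http.Post("http://localhost:8545", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Result json.RawMessage `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Result, nil
}

func main() {
	// 1. Normalize the URLs found in the draft text.
	urls, err := call("wakuext_getTextURLs", []interface{}{"check https://status.im"})
	if err != nil {
		panic(err)
	}
	fmt.Println("normalized:", string(urls))

	// 2. Unfurl only the URLs missing from the client-side cache.
	previews, err := call("wakuext_unfurlURLs", []interface{}{[]string{"https://status.im"}})
	if err != nil {
		panic(err)
	}
	fmt.Println("previews:", string(previews))
}
```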
2. Carol sends the text message with link previews in the RPC request
wakuext_sendChatMessages. status-go assumes the link previews are good
because it can't and shouldn't attempt to re-unfurl them.
```
#+begin_src plantuml :results verbatim
Client->>Server: Call wakuext_sendChatMessages
Server->>Server: Transform link previews to\nbe proto-marshalled
Server->DB: Write link previews serialized as JSON
Server-->>Client: Updated message response
#+end_src
```
```
,------. ,------. ,--.
|Client| |Server| |DB|
`--+---' `--+---' `+-'
| Call wakuext_sendChatMessages| |
| -----------------------------> |
| | |
| |----. |
| | | Transform link previews to |
| |<---' be proto-marshalled |
| | |
| | |
| | Write link previews serialized as JSON|
| | -------------------------------------->
| | |
| Updated message response | |
| <- - - - - - - - - - - - - - - |
,--+---. ,--+---. ,+-.
|Client| |Server| |DB|
`------' `------' `--'
```
3. The message was sent over waku and persisted locally in Carol's device. She
should now see the link previews in the chat history. There can be many link
previews shared by other chat members, therefore it is important to serve the
assets via the media server to avoid overloading the ReactNative bridge with
lots of big JSON payloads containing base64 encoded data URIs (maybe this
concern is meaningless for desktop). When a client is rendering messages with
link previews, they will have the field linkPreviews, and the thumbnail URL
will point to the local media server.
```
#+begin_src plantuml :results verbatim
Client->>Server: GET /link-preview/thumbnail (media server)
Server->>DB: Read from user_messages.unfurled_links
Server->Server: Unmarshal JSON
Server-->>Client: HTTP Content-Type: image/jpeg/etc
#+end_src
```
```
,------. ,------. ,--.
|Client| |Server| |DB|
`--+---' `--+---' `+-'
| GET /link-preview/thumbnail (media server)| |
| ------------------------------------------> |
| | |
| | Read from user_messages.unfurled_links|
| | -------------------------------------->
| | |
| |----. |
| | | Unmarshal JSON |
| |<---' |
| | |
| HTTP Content-Type: image/jpeg/etc | |
| <- - - - - - - - - - - - - - - - - - - - - |
,--+---. ,--+---. ,+-.
|Client| |Server| |DB|
`------' `------' `--'
```
### Some limitations of the current implementation
The following points will become separate issues in status-go that I'll work on
over the next couple weeks. In no order of importance:
- Improve how multiple links are fetched; retries on failure and testing how
unfurling behaves around the timeout limits (deterministically, not by making
real HTTP calls as I did). https://github.com/status-im/status-go/issues/3498
- Unfurl favicons and store them in the protobuf too.
- For this PR, I added unfurling support only for websites with OpenGraph
https://ogp.me/ meta tags. Other unfurlers will be implemented on demand. The
next one will probably be for oEmbed https://oembed.com/, the protocol
supported by YouTube, for example.
- Resize and/or compress thumbnails (and favicons). Oftentimes, thumbnails are
huge for the purposes of link previews. There is already support for
compressing JPEGs in status-go, but I prefer to work with compression in a
separate PR because I'd like to also solve the problem for PNGs (probably
convert them to JPEGs, plus compress them). This would be a safe choice for
thumbnails, favicons not so much because transparency is desirable.
- Editing messages is not yet supported.
- I haven't coded any artificial limit on the number of previews or on the size
of the thumbnail payload. This will be done in a separate issue. I have heard
the ideal solution may be to split messages into smaller chunks of ~125 KiB
because of libp2p, but that might be too complicated at this stage of the
product (?).
- Link preview deletion.
- For the moment, OpenGraph metadata is extracted by requesting data for the
English language (falling back to whatever is available). In the future, we'll
want to unfurl by respecting the user's local device language. Some websites,
like GoDaddy, are already localized based on the device's IP, but many aren't.
- The website's description text should be limited by a certain number of
characters, especially because it's outside our control. Exactly how much has
not been decided yet, so it'll be done separately.
- URL normalization can be tricky, so I implemented only the basics to help with
caching. For example, the URLs https://status.im and HTTPS://status.im are
considered identical. Also, a URL is considered valid for unfurling if its TLD
exists according to publicsuffix.EffectiveTLDPlusOne. This was essential,
otherwise the default Go url.Parse approach would consider many invalid URLs
valid, and thus the server would waste resources trying to unfurl the
unfurleable.
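A small sketch of that normalization and validity gate, using golang.org/x/net/publicsuffix (the helper name and exact rules are illustrative):
```
package main

import (
	"fmt"
	"net/url"
	"strings"

	"golang.org/x/net/publicsuffix"
)

// normalizeURL lowercases the scheme and host so https://status.im and
// HTTPS://status.im share one cache key, and rejects URLs whose host has no
// known effective TLD (which url.Parse alone would happily accept).
func normalizeURL(raw string) (string, error) {
	u, err := url.Parse(strings.TrimSpace(raw))
	if err != nil {
		return "", err
	}
	u.Scheme = strings.ToLower(u.Scheme)
	u.Host = strings.ToLower(u.Host)
	if _, err := publicsuffix.EffectiveTLDPlusOne(u.Hostname()); err != nil {
		return "", fmt.Errorf("not unfurleable: %w", err)
	}
	return u.String(), nil
}

func main() {
	n, err := normalizeURL("HTTPS://status.im")
	fmt.Println(n, err) // https://status.im <nil>
}
```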
### Other requirements
- If the message is edited, the link previews should reflect the edited text,
not the original one. This has been aligned with the design team as well.
- If the website's thumbnail or the favicon can't be fetched, just ignore them.
The only mandatory piece of metadata is the website's title and URL.
- Link previews in clients should be generated in near real-time, that is, as
the user types, previews are updated. In mobile this performs very well, and
it's what other clients like WhatsApp, Telegram, and Facebook do.
### Decisions
- While the user is typing in the input field, the client is constantly (debounced)
asking status-go to parse the text and extract normalized URLs, and then the
client checks if they're already in its in-memory cache. If they are, no RPC
call is made. I chose this approach to achieve the best possible performance
in mobile and avoid the whole RPC overhead, since the chat experience is
already not smooth enough. The mobile client uses URLs as cache keys in a
hashmap, i.e. if the key is present, it means the preview is readily available
(naive, but good enough for now). This decision also gave me more flexibility
to find the best UX at this stage of the feature.
- Due to the requirement that users should be able to see independent loading
indicators for each link preview, when status-go can't unfurl a URL, it
doesn't return it in the response.
- As an initial implementation, I added the BLOB column unfurled_links to the
user_messages table. The preview data is then serialized as JSON before being
stored in this column. I felt that creating a separate table and the related
code for this initial PR would be inconvenient. Is that reasonable to you?
Once things stabilize I can create a proper table if we want to avoid this
kind of solution with serialized columns.
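A sketch of that serialized-column write (field names are illustrative rather than the exact status-go structs):
```
package protocol

import (
	"database/sql"
	"encoding/json"
)

// LinkPreview is a simplified stand-in for the real preview struct.
type LinkPreview struct {
	URL          string `json:"url"`
	Title        string `json:"title"`
	ThumbnailURL string `json:"thumbnailUrl,omitempty"`
}

// saveLinkPreviews stores all previews of one message as a single JSON blob
// in the user_messages.unfurled_links column instead of a dedicated table.
func saveLinkPreviews(db *sql.DB, messageID string, previews []LinkPreview) error {
	blob, err := json.Marshal(previews)
	if err != nil {
		return err
	}
	_, err = db.Exec(
		`UPDATE user_messages SET unfurled_links = ? WHERE id = ?`,
		blob, messageID,
	)
	return err
}
```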
There were a couple of issues with how we handle pinned messages:
1) The clock of the message was only checked when saving, meaning that
the client could receive updates that were not meant to be processed
(see the sketch after this list).
2) We relied on the client to generate a notification for a pinned
message by sending a normal message through the wire. This PR changes
the behavior so that the notification is generated locally, either on
response to a network event or client event.
3) When deleting a message, we pull all the replies/pinned notifications
and send them over to the client so it knows that those messages
need updating.
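For point 1, the clock comparison can be pictured like this (a sketch with simplified types, not the actual handler):
```
package protocol

// PinMessage is a simplified stand-in for the real type.
type PinMessage struct {
	MessageID string
	Clock     uint64
	Pinned    bool
}

// shouldApplyPinUpdate drops stale or replayed updates before any
// processing happens: only a strictly newer clock wins.
func shouldApplyPinUpdate(existing, incoming *PinMessage) bool {
	if existing == nil {
		return true
	}
	return incoming.Clock > existing.Clock
}
```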
- `keypair_name` added to `accounts` table, all accounts derived from the
same master key have the same keypair name and also no two keypairs share
the same keypair name (keypair name is unique per keypair)
- `last_used_derivation_index` added to `accounts` table, because we need
to track the highest index used for the derivations made within
the same keypair
The `type` column is set to the appropriate value for all rows. Before this
change, accounts generated from a keypair created by importing a seed phrase
had the value `generated` for `type`.
Accordingly, the function for generating an account now sets the `type`
based on the passed derive-from address.
prefixes. Changed primary keys and API methods.
Fixed tests and added new ones.
Fixed saved addresses and transaction tests to use a ':memory:' sqlite
DB instead of a tmp file to speed up testing by hundreds of times.
Fixes #8599
`last_update_clock` will be used later for synchronization.
All keypair functions take clock value in consideration when
making a decision whether to perform an action or not.
Adds a new column named `deleted` to the table `activity_center_notifications`.
Related PR in Mobile https://github.com/status-im/status-mobile/pull/15106 for a lot more details of the feature.
Why? Relying on the `dismissed` column for soft deletion is no longer viable because the mobile & desktop clients should display dismissed notifications (sometimes), hence the need for a new column to truly represent soft deletion.
A migration was added out of order, which meant that in clients that
had already run the later migrations, it would be skipped.
This commit re-adds the migration so it's run, tested against an empty
account and one that had already migrated.
For easier maintenance in the future, the following is done:
- keypairs table removed
- keycard table added, storing only keycards/keypairs
- keycard_accounts table added, storing only accounts migrated to a keycard
Migration is done keeping the current keycard state accurate (no keycard records will be lost).
* feat: Add seen/unseen activity center state
* feat: ActivityCenterState for grouping ActivityCenter unread messages cnt and seen state
* feat: always use messenger's addActivityCenterNotification & add state to the response
* Remove unused activity center endpoints from the API and fix a test
Changes applied here introduce backing up profile data (display name and identity
images) to waku and fetching it from waku. Information about that data is sent
as a separate signal to the client via the `sync.from.waku.profile` signal.
A new signal, `sync.from.waku.progress`, is introduced and will be used to notify
the client about the progress of fetching data from waku.
This commit makes a few changes to the community history archive
download routine to make it more robust:
1. Prior to this commit, even when there were no archives to be
downloaded, we were still trying to extract messages from archive
data.
2. Logs have been improved as they were sometimes showing confusing
information
3. We now handle interruption of ongoing download + data import much
better in case multiple magnetlinks are being processed at roughly the
same time.
4. We now keep track of which archives have been successfully imported
into the database. Without this, Status would consider any downloaded
archive as "done" even though it hasn't actually been imported
into the database yet. This way Status should be able to pick up its
work where it left off the last time, in case a user closes the app or
another magnetlink interrupts the ongoing process.
* feat(ActivityCenter): Add community request AC notification
* feat(ActivityCenter): Add CommunityID to AC notification
* feat(ActivityCenter): Add membership status for community membership AC notifications
* feat(ActivityCenter): Add tests for community notifications and fix naming
* Add notification for kicked from community action
* feat(ActivityCenter): Fix for missing notification objects for tests
Main changes:
- Extend saved addresses DB with sync info: sync timestamp, update timestamp
and soft removed flag
- Create custom protobuf message payload to sync saved addresses
- Cleanup saved addresses on each start of messenger, by deleting
soft removed older entries
- Sync all saved addresses on Messenger.SyncDevices calls
- Sync particular changes to saved addresses
- Add SavedAddressManager instance to messenger
- Note, can't find a clean way to pass the SavedAddressManager to the
messenger, so we create another one
- Add tests for sync and new DB API
Closes: #7229
This adds a new `DiscordMessageAttachment` type which is part of
`DiscordMessage`. Along with that type, there's also a new database
table for `discord_message_attachments` and corresponding persistence
APIs.
This commit also changes how chat messages are retrieved.
Here's why:
`DiscordMessage` can have multiple `DiscordMessageAttachment`.
A chat message can have a `DiscordMessage`.
Because we're `LEFT JOIN`'ing the discord message attachments into the
chat messages, there's a possibility of multiple rows per message.
Hence, this commit ensures we collect queried discord message
attachments on chat messages.
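The collection pattern looks roughly like this (types are simplified stand-ins for the real message structs):
```
package protocol

// DiscordMessageAttachment and Message are simplified stand-ins.
type DiscordMessageAttachment struct {
	ID  string
	URL string
}

type Message struct {
	ID          string
	Text        string
	Attachments []DiscordMessageAttachment
}

// joinedRow mirrors one row of the LEFT JOIN result: attachment columns are
// empty when a message has no attachments.
type joinedRow struct {
	MessageID, Text, AttachmentID, AttachmentURL string
}

// collectRows folds one row per (message, attachment) pair into one message
// carrying a slice of attachments, preserving row order.
func collectRows(rows []joinedRow) []*Message {
	byID := map[string]*Message{}
	var ordered []*Message
	for _, r := range rows {
		msg, ok := byID[r.MessageID]
		if !ok {
			msg = &Message{ID: r.MessageID, Text: r.Text}
			byID[r.MessageID] = msg
			ordered = append(ordered, msg)
		}
		if r.AttachmentID != "" {
			msg.Attachments = append(msg.Attachments, DiscordMessageAttachment{
				ID:  r.AttachmentID,
				URL: r.AttachmentURL,
			})
		}
	}
	return ordered
}
```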
Usually, message IDs are generated from a message's payload and signature,
and receiving nodes calculate them based on the same data.
There's no ID attached to messages in flight.
This turns out to be a bit of a problem for messages that are being
imported from third party systems like discord, as the conversion
and saving of such messages and handling of their possible assets and
attachments are done in separate steps, which changes the message
payloads after their IDs have been generated.
Hence, we're introducing a `ThirdPartyID` property to `common.Message`
and `protobuf.WakuMessage` so receiving nodes of such messages (via the
archive protocol primarily) can easily detect third party/imported
messages and give them special treatment.
Restore the old, previously renamed, 1662447680_add_keypairs_table.up.sql
file while keeping the current one for those who already migrated to the
new one. The extra migration is a no-op and keeps the history of user
data states consistent.
Remove Favourites APIs and update the saved address APIs
Added up-migration scripts that move the favourites from the old table
to the saved_addresses table with the favourite flag set to true, and then drop the favourites table.
Required by #6546
`FirstMessageTimestamp` enables members of the community to determine if
there are any messages they can fetch on the community channel (chat).
`FirstMessageTimestamp` is advertised by the admin for each community chat
through `CommunityDescription`. It assumes the admin is online frequently
enough to capture the first channel message.
For existing communities, the admin determines the first message timestamp
by finding the oldest chat message in its local database.
task: status-im/status-desktop#6731
This introduces a new table to store discord message authors.
The main reason this table is being introduced is so that we don't have
to duplicate discord message author information in the `user_messages`
table when importing discord communities (ongoing work).
In addition to the table there are also two new APIs on the messenger
persistence layer (which are later used in the import logic):
- `HasDiscordMessageAuthor`
- `SaveDiscordMessageAuthor`
Closes #2759
As part of the new Discord <-> Status Community Import functionality,
we're adding an API that extracts all discord categories and channels
from a previously exported discord export file.
These APIs can be used in clients to show the user what categories and
channels will be imported later on.
There are two APIs:
1. `Messenger.ExtractDiscordCategoriesAndChannels(filesToImport
[]string) (*MessengerResponse, map[string]*discord.ImportError)`
This takes a list of discord export (JSON) files (typically one per
channel), reads them, and extracts the categories and channels into
dedicated data structures (`[]DiscordChannel` and `[]DiscordCategory`)
It also returns the oldest message timestamp found in all extracted
channels.
The API is synchronous and returns the extracted data as
a `*MessengerResponse`. This allows making the API available via
status-go's RPC interface.
The error case is a `map[string]*discord.ImportError` where each key
is a file path of a JSON file that we tried to extract data from, and
the value a `discord.ImportError` which holds an error message and an
error code, allowing for distinguishing between "critical" errors and
"non-critical" errors.
2. `Messenger.RequestExtractDiscordCategoriesAndChannels(filesToImport
[]string)`
This is the asynchronous counterpart to
`ExtractDiscordCategoriesAndChannels`. The reason this API has been
added is because discord servers can have a lot of message and
channel data, which causes `ExtractDiscordCategoriesAndChannels` to
block the thread for too long, making apps potentially feel like they
are stuck.
This API runs inside a goroutine, eventually calls
`ExtractDiscordCategoriesAndChannels`, and then emits a newly
introduced `DiscordCategoriesAndChannelsExtractedSignal` that clients
can react to.
Failure of extraction has to be determined by the
`discord.ImportErrors` emitted by the signal.
**A note about exported discord history files**
We expect users to export their discord histories via the
[DiscordChatExporter](https://github.com/Tyrrrz/DiscordChatExporter/wiki/GUI%2C-CLI-and-Formats-explained#exportguild)
tool. The tool allows to export the data in different formats, such as
JSON, HTML and CSV.
We expect users to have their data exported as JSON.
Closes: https://github.com/status-im/status-desktop/issues/6690
This commit introduces a few changes regarding users accessing
communities:
While the APIs still exist, community invites should no longer be
used; instead, communities should merely be "shared".
Sharing a community with users allows them to "join" the community,
which in reality makes them request access to that community.
This means users have to request access to any community, even if
the community has its permissions set to NO_MEMBERSHIP.
The only difference between ON_REQUEST and NO_MEMBERSHIP is that
ON_REQUEST communities require manual approval by the owner/admin
for access, while NO_MEMBERSHIP communities accept requests
automatically (as soon as the owner/admin receives the request).
This also implies that users are no longer optimistically added to the
member list of communities, but only after they have been accepted.
This introduces a bit of a message ping-pong for users to know that
someone is now part of a community.
fix: add verification request to response
fix: code review
add missing functions and simplify timestamp usage
fix: sync verification requests
feat: add endpoint to fetch all received verification requests
feat: add signal when trusting verification request
Co-authored-by: Jonathan Rainville <rainville.jonathan@gmail.com>
This introduces a simple garbage collection which checks for all
soft-deleted bookmarks (`removed = 1`) that were marked for garbage
collection more than 30 days before the time of bootstrapping the
messenger.
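A rough sketch of that collection step (the exact schema and call site are assumptions; the column names follow the bookmarks commits here):
```
package browsers

import (
	"database/sql"
	"time"
)

// garbageCollectRemovedBookmarks hard-deletes bookmarks that were soft
// deleted (removed = 1) and marked for collection more than 30 days before
// `now`, the messenger's bootstrap time.
func garbageCollectRemovedBookmarks(db *sql.DB, now time.Time) error {
	cutoff := now.AddDate(0, 0, -30).Unix()
	_, err := db.Exec(
		`DELETE FROM bookmarks WHERE removed = 1 AND deleted_at > 0 AND deleted_at <= ?`,
		cutoff,
	)
	return err
}
```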
Closes #2705
These APIs are being introduced to address #2706 and #2704, provided
that clients will move to using these APIs instead of the currently
provided equivalent APIs in the browser service.
The `bookmarks` table is being extended with a `deleted_at` field which
can later be used for garbage collection, as "removing" a bookmark is
merely soft deletion and doesn't actually remove the data from the
database.
In addition to those APIs adding and soft deleting bookmark entries,
they also automatically perform a sync operation to ensure that
bookmarks are synced in real-time (#2704).
Closes #2706, #2704
This commit introduces a new `clock` field in the
`communities_settings` table so that it can be leveraged for syncing
community settings across devices.
It also extends the existing `syncCommunity` APIs to generate
`SyncCommunitySettings` as well, avoiding sending additional sync messages
for community settings.
When editing communities, however, we still sync community settings
explicitly, as we aren't syncing the community itself in that case.
This allows storing community admin settings that are meant to be propagated
to community members (as opposed to the already existing
`CommunitySettings` which are considered local to every account).
The first setting introduced as part of this commit is one that enables
community admins to configure whether or not members of the community
are allowed to pin messages in community channels.
Prior to this commit, this was not restricted at all on the protocol
level and only enforced by clients via UI (e.g. members don't see an
option to pin messages, although they could).
This config setting now ensures that:
1. If turned off, members cannot send a pin message
2. If turned off, pin messages from members are not handled/processed
This is needed by https://github.com/status-im/status-desktop/issues/5662
This introduces the ability for status nodes to handle community
history archive magnetlinks. To make this work, a few things are needed:
1. A new database table has been introduced to store message archive
hashes. This is necessary so status nodes can determine whether or
not they need to download a certain archive (see the sketch after
this list)
2. The messenger's `handleRetrievedMessages()` has been extended to take
magnetlink messages into account
3. New APIs were added to download torrent data given a magnetlink and
also to extract messages from downloaded archives, which are then
later fed to `handleRetrievedMessages`
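For point 1, the dedup check can be sketched like this (table and function names are illustrative):
```
package communities

import "database/sql"

// needsDownload reports whether an archive hash is still unknown locally,
// i.e. whether the archive behind a magnetlink must be downloaded at all.
func needsDownload(db *sql.DB, communityID, archiveHash string) (bool, error) {
	var count int
	err := db.QueryRow(
		`SELECT COUNT(1) FROM message_archive_hashes WHERE community_id = ? AND hash = ?`,
		communityID, archiveHash,
	).Scan(&count)
	if err != nil {
		return false, err
	}
	return count == 0, nil
}
```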
Closes #2568
This introduces the logic needed to:
- Create WakuMessageArchives and indices from stored waku messages
- Write history archive torrent data to disk and create a .torrent file
from it
- Seed and unseed history archive torrents as necessary
- Start/stop the torrent client
- Enable/disable community history support for individual components
and start/stop the routine intervals accordingly
This does not yet handle magnet links (#2568)
Closes #2567
This is needed so that when they are bundled into archives, receiving
nodes can still verify the message payloads using their signatures.
This commit introduces a new `waku_messages` table and APIs to store
such messages. The waku message payload is stored for any message whose
topic matches any of the admin communities' chats.
Closes #2566
* Sync Settings
* Added valueHandlers and Database singleton
Some issues remain: we need a way of comparing an incoming sql.DB to check whether the connection is to a different file or not. Maybe make the singleton an instance per filename
* Added functionality to check the sqlite filename
* Refactor of Database.SaveSyncSettings to be used as a handler
* Implemented interface for setting sync protobuf factories
* Refactored and completed adhoc send setting sync
* Tidying up
* Immutability refactor
* Refactor settings into dedicated package
* Breakout structs
* Tidy up
* Refactor of bulk settings sync
* Bug fixes
* Addressing feedback
* Fix code dropped during rebase
* Fix for db closed
* Fix for node config related crashes
* Provisional fix for type assertion - issue 2
* Adding robust type assertion checks
* Partial fix for null literal db storage and json encoding
* Fix for passively handling nil sql.DB, and checking if elem has len and if len is 0
* Added test for preferred name behaviour
* Adding saved sync settings to MessengerResponse
* Completed granular initial sync and clock from network on save
* add Settings to isEmpty
* Refactor of protobufs, partially done
* Added syncSetting receiver handling, some bug fixes
* Fix for sticker packs
* Implement inactive flag on sync protobuf factory
* Refactor of types and structs
* Added SettingField.CanSync functionality
* Addressing rebase artifact
* Refactor of Setting SELECT queries
* Refactor of string return queries
* VERSION bump and migration index bump
* Deactivate Sync Settings
* Deactivated preferred_name and send_status_updates
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
These are used to store settings for individual communities a
user is part of, either as a member or as an owner.
This includes whether or not the community history archive protocol
is enabled.
This adds a new `CommunitySettings` type and adds
a migration script that introduces a new `communities_settings`
table.
It also extends the `MessengerResponse` type to include
`CommunitySettings` which are honored when communities are being
added, edited, joined or left.
Lastly, this adds a new RPC API to retrieve the settings.
Closes #2564
This commit introduces a new `TorrentConfig` as per #2563 with dedicated
default values and new options/flags for the status-go CLI.
Since it's part of `NodeConfig`, which is stored per user in the
database, this commit also adds a migration script that adds a new
`torrent_config` table and database APIs to insert and retrieve the
torrent config.
Closes #2563
This commit enables mailserver cycle logic by default and make a few
changes:
1) Nodes are graylisted instead of being blacklisted for a set amount of
time. The reason is that if we blacklist, any cut in connectivity
might result in long delays before reconnecting, especially on spotty
connections.
2) Fixes an issue on the devp2p server, whereby the node would not
connect to one of the static nodes since all the connection slots
were filled. The fix is a bit inelegant: it always connects to
static nodes, ignoring maxpeers, but it's tricky to get it to work
since the code is clearly not written to select a specific node.
3) Adds support for pinned mailservers
4) Adds retries to mailserver requests (sketched below). It uses a
closure for now; I think we should eventually have a channel etc.,
but I'd leave that for later.
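The closure-based retry from point 4 boils down to something like this (a sketch, not the actual mailserver cycle code):
```
package mailservers

import "time"

// withRetries re-runs a request closure a fixed number of times with a
// delay between attempts, returning the last error if all attempts fail
// (at which point the cycle can move on to another mailserver).
func withRetries(attempts int, delay time.Duration, request func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = request(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}
```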
We periodically check whether we should back up contacts through waku.
Currently it's not tied to any event; it will only schedule a backup on
a tick, or through the provided API endpoint.