* A wrapper crate for prometheus client
* Initial integration of metrics for mempool
* Merge mempool metrics imports
* Add cli flag to enable metrics
* Add nomos metrics service for serving metrics
* Use nomos prometheus metrics in the node
* Rename metrics to registry where applicable
* Expose metrics via http
* Featuregate the metrics service
* Style and fail on encode error
* Add metrics cargo feature for mempool
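For illustration, a minimal sketch of how a feature-gated Prometheus registry and its text-format encoding could look, assuming the `prometheus` crate; the `MetricsRegistry` type and the counter name are hypothetical, not the actual nomos-metrics API:

```rust
use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

/// Hypothetical wrapper around the prometheus client registry.
pub struct MetricsRegistry {
    registry: Registry,
    pub mempool_items_added: IntCounter,
}

impl MetricsRegistry {
    pub fn new() -> Result<Self, prometheus::Error> {
        let registry = Registry::new();
        let mempool_items_added =
            IntCounter::new("mempool_items_added", "Items added to the mempool")?;
        registry.register(Box::new(mempool_items_added.clone()))?;
        Ok(Self { registry, mempool_items_added })
    }

    /// Render the registry in the Prometheus text format for the HTTP
    /// endpoint, failing (rather than silently ignoring) encode errors.
    #[cfg(feature = "metrics")]
    pub fn encode(&self) -> Result<String, prometheus::Error> {
        let mut buf = Vec::new();
        TextEncoder::new().encode(&self.registry.gather(), &mut buf)?;
        Ok(String::from_utf8(buf).expect("prometheus text format is valid utf-8"))
    }
}
```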
* Add chat demo for testnet
This commit adds a simple demo to showcase the capabilities of the
Nomos architecture. In particular, we want to leverage the DA
features and explore participant roles.
At the same time, we're not ready to commit to any specific format
or decision regarding common ground yet.
For this reason, we chose to implement the demo at the Execution
Zone (EZ) level.
In contrast to the coordination layer, each execution zone can
decide on its own format, which allows us to experiment without
having to set a standard.
The application of choice for the demo is an (almost) instant
messaging app where messages are broadcast to the public by
leveraging the full-replication data availability protocol.
In this context, the cli app acts as a small EZ disseminating
blobs, promoting blob inclusion and updating its state (i.e. the
list of exchanged messages) upon blob inclusion in the chain, as
sketched below.
---------
Co-authored-by: danielsanchezq <sanchez.quiros.daniel@gmail.com>
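A rough sketch of the flow described above; `NodeApi`, `Certificate` and every method name here are hypothetical stand-ins for the actual CLI and node HTTP calls:

```rust
/// Hypothetical view of the chat demo acting as a tiny execution zone.
#[derive(Clone, PartialEq)]
struct Certificate(Vec<u8>);

trait NodeApi {
    fn disseminate_blob(&self, data: &[u8]) -> Certificate;
    fn add_certificate(&self, cert: &Certificate);
    fn certificates_in_chain(&self) -> Vec<Certificate>;
}

fn send_message(api: &impl NodeApi, chat_log: &mut Vec<String>, msg: String) {
    // 1. Disseminate the message bytes with full-replication DA.
    let cert = api.disseminate_blob(msg.as_bytes());
    // 2. Promote inclusion by pushing the certificate to the node.
    api.add_certificate(&cert);
    // 3. Update local state only once the certificate is in the chain.
    if api.certificates_in_chain().contains(&cert) {
        chat_log.push(msg);
    }
}
```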
Put a hard limit of 512 blocks in the response returned by
GetBlocks to avoid slowing things down. This number was chosen
rather arbitrarily. We might want to do some more fine tuning.
* Split long consensus response in separate APIs
Consensus info was returning the full list of blocks, which can
get quite large over time. Instead, this commit changes that API to
return a constant-size message and adds a new one that returns a
chain of blocks between user-specified endpoints (see the sketch
below).
* Update nomos-services/consensus/src/lib.rs
Co-authored-by: Youngjoon Lee <taxihighway@gmail.com>
* Fix test
---------
Co-authored-by: Youngjoon Lee <taxihighway@gmail.com>
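For illustration, the resulting shape might look like the sketch below; the type and field names are assumptions, only the 512 hard limit comes from the change itself:

```rust
/// Hard cap on how many blocks a single GetBlocks response may carry;
/// 512 was chosen rather arbitrarily and may need tuning.
const MAX_BLOCKS_PER_RESPONSE: usize = 512;

/// Constant-size message returned by the consensus info endpoint,
/// instead of the full (and ever-growing) list of blocks.
pub struct ConsensusInfo {
    pub current_view: u64,
    pub last_committed_block: [u8; 32],
}

/// Separate GetBlocks endpoint: walk the chain between user-specified
/// endpoints and clamp the response size (block type simplified).
pub fn get_blocks(blocks_from_to: impl Iterator<Item = Vec<u8>>) -> Vec<Vec<u8>> {
    blocks_from_to.take(MAX_BLOCKS_PER_RESPONSE).collect()
}
```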
Block.id is a necessary piece of information in the context of the
consensus engine, where it's not possible to recover the block id
because the contents are not available.
Instead, we should only skip that field when serializing/deserializing
a full block.
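A hedged sketch of what that split could look like with serde; the types, fields and hash helper are illustrative, not the actual nomos definitions:

```rust
use serde::{Deserialize, Serialize};

/// Consensus-engine view: contents are not available, so the id must
/// be carried explicitly and is (de)serialized as-is.
#[derive(Serialize, Deserialize)]
pub struct EngineBlock {
    pub id: [u8; 32],
    pub parent: [u8; 32],
    pub view: u64,
}

/// Full block: the id is derivable from the contents, so it can be
/// skipped on the wire and recomputed after deserialization.
#[derive(Serialize, Deserialize)]
pub struct FullBlock {
    #[serde(skip)]
    pub id: [u8; 32],
    pub contents: Vec<u8>,
}

impl FullBlock {
    /// Recompute the id from the contents after deserialization.
    pub fn recompute_id(&mut self) {
        self.id = hash(&self.contents);
    }
}

/// Stand-in for the actual hash used by the node.
fn hash(_bytes: &[u8]) -> [u8; 32] {
    [0u8; 32]
}
```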
* humanize array ser/deser
* split fns
* use `const-hex`
* fix fmt
* create `nomos-utils` crate
* Human serde committeeid (#478)
* Human readable serde for CommitteeId
* Deserialize bytes to string if human readable
* Don't allocate if possible in human serde bytes
---------
Co-authored-by: gusto <bacv@users.noreply.github.com>
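A minimal sketch of the human-readable branch, assuming the `const-hex` API mirrors the `hex` crate; the helper names are illustrative (the real code can also borrow a `&str` instead of allocating a `String`, per the last bullet):

```rust
use serde::{Deserialize, Deserializer, Serializer};

/// Serialize raw bytes as a hex string for human-readable formats
/// (JSON, YAML) and as plain bytes otherwise.
pub fn serialize_bytes<S: Serializer>(bytes: &[u8], serializer: S) -> Result<S::Ok, S::Error> {
    if serializer.is_human_readable() {
        serializer.serialize_str(&const_hex::encode(bytes))
    } else {
        serializer.serialize_bytes(bytes)
    }
}

/// Deserialize from a hex string in human-readable formats, raw bytes otherwise.
pub fn deserialize_bytes<'de, D: Deserializer<'de>>(deserializer: D) -> Result<Vec<u8>, D::Error> {
    if deserializer.is_human_readable() {
        let s = String::deserialize(deserializer)?;
        const_hex::decode(&s).map_err(serde::de::Error::custom)
    } else {
        Vec::<u8>::deserialize(deserializer)
    }
}
```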
* Change impl of StorageReceiver to Option<Bytes>
Load and remove return Option<Bytes>, not Bytes, so let's change
the implementation to account for that.
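For illustration (with `Bytes` simplified to `Vec<u8>`):

```rust
/// Load and remove may miss, so the channel carries Option<Bytes> and
/// the caller decides what a miss means.
fn on_loaded_block(maybe_bytes: Option<Vec<u8>>) {
    match maybe_bytes {
        Some(bytes) => println!("block found ({} bytes)", bytes.len()),
        None => println!("block not in storage"), // not an error
    }
}
```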
* Add storage/block http api to retrieve blocks from storage
* add tests for storage/block api
* debug tests
* tweak test node online condition
* Update deps
* Implement base lifecycle handling in network service
* Implement base lifecycle handling in storage service
* Use methods instead of functions
* Pipe lifecycle in metrics service
* Pipe lifecycle in mempool service
* Pipe lifecycle in log service
* Pipe lifecycle in da service
* Pipe lifecycle in consensus service
* Refactor handling of lifecycle messages into `should_stop_service` (sketch after this block)
* Update overwatch version to fixed run_all one
* Use tree overlay in nomos node
* Use tree overlay in tests module
* Handle unexpected consensus vote stream end in tally
* Spawn the next leader first (#425)
* Report unhappy blocks in happy path test (#430)
* report unhappy blocks in the happy path test
* Make threshold configurable for TreeOverlay (#426)
* Modified test so that nodes don't all connect only to the first node (#442)
* merge fix
---------
Co-authored-by: Youngjoon Lee <taxihighway@gmail.com>
Co-authored-by: Al Liu <scygliu1@gmail.com>
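A sketch of the shared lifecycle handling referenced in the bullets above; the `LifecycleMessage` variants and payloads are assumptions, not the exact overwatch-rs types:

```rust
use tokio::sync::oneshot;

/// Assumed shape of the lifecycle messages a service can receive.
pub enum LifecycleMessage {
    /// Graceful shutdown; the sender is notified once we are done.
    Shutdown(oneshot::Sender<()>),
    /// Immediate stop.
    Kill,
}

/// Shared helper every service event loop pipes its lifecycle channel
/// through: acknowledge a graceful shutdown and report whether the
/// service should stop.
pub fn should_stop_service(msg: LifecycleMessage) -> bool {
    match msg {
        LifecycleMessage::Shutdown(reply) => {
            // Best effort: the requester may have gone away already.
            let _ = reply.send(());
            true
        }
        LifecycleMessage::Kill => true,
    }
}
```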
* Send DA certificate to node after dissemination
* rename mempool endpoints
* Check certificate inclusion in tests
* rename endpoint
* Rename addcert and addtx to add
* tweak test condition
* add option to save certificate to file
* move thread join
* remove fancy prints
* Use more descriptive names for generic parameters
We're starting to have tens of generic parameters; if we don't use
descriptive names it'll be pretty hard to understand what they do.
This commit renames the mempool parameters in preparation for the
introduction of the DA certificates mempool, to distinguish it from
CL transactions.
* Add mempool for da certificates
* Add separate certificates mempool to binary
* ignore clippy lints
* Add `send` method to mempool network adapter
Centralize responsibilities for the mempool-network interface in
the adapter trait (sketched below).
* Update nomos-services/mempool/src/lib.rs
Co-authored-by: Youngjoon Lee <taxihighway@gmail.com>
---------
Co-authored-by: Youngjoon Lee <taxihighway@gmail.com>
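A hedged sketch of the adapter shape after adding `send`; the trait and associated types are illustrative, not the actual mempool code:

```rust
/// Both receiving and sending items go through the adapter, so the
/// mempool service itself never touches the network backend directly.
pub trait MempoolNetworkAdapter {
    type Item;
    type Error;

    /// Broadcast an item (transaction or certificate) on behalf of the mempool.
    fn send(&self, item: Self::Item) -> Result<(), Self::Error>;
}
```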
Overwatch requires all services to have a different service id.
Unfortunately, such a service id can't depend on generic parameters,
which means that we can't have two instances of the mempool
service even if they are instantiated with different types.
This commit circumvents this limitation by adding another
type parameter.
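One way such a discriminant can work, sketched with hypothetical names:

```rust
use std::marker::PhantomData;

/// Hypothetical discriminant trait: each marker type carries its own
/// static service id, so two mempool instantiations no longer collide.
pub trait MempoolKind {
    const SERVICE_ID: &'static str;
}

pub struct ClTransactions;
pub struct DaCertificates;

impl MempoolKind for ClTransactions {
    const SERVICE_ID: &'static str = "cl-mempool";
}
impl MempoolKind for DaCertificates {
    const SERVICE_ID: &'static str = "da-mempool";
}

/// The extra type parameter only exists to pick the service id.
pub struct MempoolService<Item, Kind: MempoolKind> {
    items: Vec<Item>,
    _kind: PhantomData<Kind>,
}

impl<Item, Kind: MempoolKind> MempoolService<Item, Kind> {
    pub fn service_id() -> &'static str {
        Kind::SERVICE_ID
    }
}
```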
* Make mempool item generic
Make the mempool generic with respect to the item and remove
mentions of specific transaction formats/traits. This will allow
us to reuse the same code for both coordination layer transactions
and certificates, or in general, whatever items need to be included
in a block.
* Add mempool network adapter settings
Allow for greater customization of the mempool network adapter by
adding a settings field.
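Putting the two bullets above together, a rough sketch with assumed names:

```rust
/// The pool is generic over the item: CL transactions, DA certificates,
/// or anything else that ends up in a block (key/error types simplified).
pub trait Pool {
    type Item;
    type Key;

    /// Add an item to the pool.
    fn add_item(&mut self, key: Self::Key, item: Self::Item) -> Result<(), ()>;
    /// Mark items as included in a block so they are not re-proposed.
    fn mark_in_block(&mut self, keys: &[Self::Key]);
}

/// The network adapter now takes its own settings, e.g. which topic a
/// given mempool instance listens on (field is illustrative).
pub struct MempoolAdapterSettings {
    pub topic: String,
}
```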
* update node after mempool changes
* fix waku mempool adapter
* fmt
* fix tests
* fmt
* Use selection for blob certificates
* Fix bin imports
* Fix rebase
* Missing blobs -> certificates refactor
* Fix attestation and certificate as_bytes
* More naming refactors
* Make the data availability service work with multiple protocols
* Add a generic way to instantiate DaProtocol
Add settings type and a new `new(Self::Settings)` method to
build a new DaProtocol instance
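Sketched trait shape; only `new(Self::Settings)` and the settings type come from the change, the example implementation is illustrative:

```rust
/// Generic construction for DA protocols: each implementation declares
/// its own settings type and can be built uniformly from it.
pub trait DaProtocol {
    type Settings;

    /// Build a new protocol instance from its settings.
    fn new(settings: Self::Settings) -> Self;
}

/// Example: a full-replication protocol parameterized by the number of
/// replicas (illustrative settings, not the actual ones).
pub struct FullReplication {
    replicas: usize,
}

pub struct FullReplicationSettings {
    pub replicas: usize,
}

impl DaProtocol for FullReplication {
    type Settings = FullReplicationSettings;

    fn new(settings: Self::Settings) -> Self {
        Self { replicas: settings.replicas }
    }
}
```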
* Add data availability service to node
* fix tests
* fix imports
* Add `mixnode` and `mixnet-client` crate (#302)
* Add `mixnode` binary (#317)
* Integrate mixnet with libp2p network backend (#318)
* Fix #312: proper delays (#321)
* proper delays
* add missing duration param
* tiny fix: compilation error caused by `rand` 0.8 -> 0.7
* use `get_available_port()` for mixnet integration tests (#333)
* add missing comments
* Overwatch mixnet node (#339)
* Add mixnet service and overwatch app
* remove #[tokio::main]
---------
Co-authored-by: Youngjoon Lee <taxihighway@gmail.com>
* fix tests for the overwatch mixnode (#342)
* fix panic when corner case happen in RandomDelayIter (#335)
* Use `log` service for `mixnode` bin (#341)
* Use `wire` for MixnetMessage in libp2p (#347)
* Prevent mixnet tests from running forever (#363)
* Use random delay when sending msgs to mixnet (#362)
* fix a minor compilation error caused by the latest master
* Fix run output fd (#343)
* add a connection pool
* Exp backoff (#332)
* move mixnet listening into separate task
* add exponential retry for insufficient peers in libp2p
* fix logging
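A sketch of the exponential retry described above; the constants and function names are made up:

```rust
use std::time::Duration;

/// Retry publishing with exponential backoff while libp2p reports
/// insufficient peers in the gossipsub mesh (constants are examples).
async fn publish_with_backoff<E>(
    mut publish: impl FnMut() -> Result<(), E>,
    max_attempts: u32,
) -> Result<(), E> {
    let mut delay = Duration::from_millis(100);
    let mut attempt = 0;
    loop {
        match publish() {
            Ok(()) => return Ok(()),
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                tokio::time::sleep(delay).await;
                delay *= 2; // exponential backoff
                attempt += 1;
            }
        }
    }
}
```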
* Fix MutexGuard across await (#373)
* Fix MutexGuard across await
Holding a MutexGuard across an await point is not a good idea.
Removing that solves the issues we had with the mixnet test
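The pattern in question, sketched with simplified types:

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

/// Problematic: the guard is held across the await point, blocking all
/// other tasks that need the lock (and, with std::sync::Mutex, it is
/// not even Send).
async fn bad(shared: Arc<Mutex<Vec<u8>>>) {
    let guard = shared.lock().await;
    process(&guard).await; // guard stays alive across this await
}

/// Better: copy out what is needed and drop the guard before awaiting.
async fn good(shared: Arc<Mutex<Vec<u8>>>) {
    let snapshot = {
        let guard = shared.lock().await;
        guard.clone()
    }; // guard dropped here
    process(&snapshot).await;
}

async fn process(_data: &[u8]) {}
```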
* Make mixnode handle bodies coming from the same source concurrently (#372)
---------
Co-authored-by: Youngjoon Lee <taxihighway@gmail.com>
* Move wait at network startup (#338)
We now wait after the call to 'subscribe' to give the network
time to register peers in the mesh before we start publishing
messages.
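For illustration, with a stubbed network handle and an arbitrary delay:

```rust
use std::time::Duration;

struct NetworkHandle;

impl NetworkHandle {
    async fn subscribe(&self, _topic: &str) {}
    async fn publish(&self, _topic: &str, _data: Vec<u8>) {}
}

/// Hypothetical startup sequence: wait *after* subscribing so the
/// gossipsub mesh can register peers before the first publish.
async fn startup(network: &NetworkHandle) {
    network.subscribe("demo-topic").await;
    // Previously the wait happened earlier; moving it here gives the
    // mesh time to form (the duration is illustrative).
    tokio::time::sleep(Duration::from_secs(2)).await;
    network.publish("demo-topic", b"hello".to_vec()).await;
}
```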
* Remove unused functions from mixnet connpool (#374)
* Mixnet benchmark (#375)
* merge fixes
* add `connection_pool_size` field to `config.yaml`
* Simplify mixnet topology (#393)
* Simplify bytes and duration range ser/de (#394)
* optimize bytes serde and duration serde
---------
Co-authored-by: Al Liu <scygliu1@gmail.com>
Co-authored-by: Daniel Sanchez <sanchez.quiros.daniel@gmail.com>
Co-authored-by: Giacomo Pasini <Zeegomo@users.noreply.github.com>
Firstly, a failure to deserialize a network message is not an
error, especially since we're using a public channel.
Secondly, that same channel is shared by different kinds of
messages, so trying to interpret one kind as another is bound to
fail.
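Handling consistent with that reasoning, sketched with placeholder helpers:

```rust
use serde::de::DeserializeOwned;

/// Try to interpret a raw network payload as a message of type `M`.
/// On a shared, public channel a failed decode is expected traffic
/// (someone else's message type, or garbage), so it is logged at debug
/// level and skipped rather than surfaced as an error.
fn try_decode<M: DeserializeOwned>(raw: &[u8]) -> Option<M> {
    match wire_deserialize::<M>(raw) {
        Ok(msg) => Some(msg),
        Err(e) => {
            tracing::debug!("ignoring undecodable message: {}", e);
            None
        }
    }
}

/// Stand-in for the node's actual wire format helper.
fn wire_deserialize<M: DeserializeOwned>(_raw: &[u8]) -> Result<M, std::io::Error> {
    unimplemented!("placeholder for the actual wire deserializer")
}
```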