Merge branch 'main' into Pravdyvy/deterministic-key-derivation

This commit is contained in:
Pravdyvy 2025-11-11 15:40:33 +02:00
commit a8462ab5a3
13 changed files with 452 additions and 503 deletions

README.md
View File

@@ -1,469 +1,33 @@
# nescience-testnet
This repo serves the Nescience Node testnet.
This repo serves the Nescience testnet.
For more details, you can read the [blogpost](https://vac.dev/rlog/Nescience-state-separation-architecture/).
For more details, see [here](https://notes.status.im/Ya2wDpIyQquoiRiuEIM8hQ?view).
For more details on node functionality, see [here](https://www.notion.so/5-Testnet-initial-results-analysis-18e8f96fb65c808a835cc43b7a84cddf).
# Install dependencies
# How to run
The node and sequencer require a Rust installation to build, preferably the latest stable version.
Rust can be installed as follows.
Install build dependencies:
- On Linux
```sh
apt install build-essential clang libssl-dev pkg-config
```
- On Mac
```sh
xcode-select --install
brew install pkg-config openssl
```
Install Rust
```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
The node needs the RISC0 toolchain to run.
It can be installed as follows.
Install RISC0:
```sh
curl -L https://risczero.com/install | bash
```
After that, before the next step, you may need to restart your console, as the script updates the PATH variable. Next:
Then restart your shell and run:
```sh
rzup install
```
After cloning this repository, the following actions need to be done:
The entrypoints to the node and sequencer are `node_runner` and `sequencer_runner`. Both are configured in a similar manner. The path to the config directory must be passed to each runner binary as its first argument; no other arguments are needed. The given directory is searched for "node_config.json" for the node and "sequencer_config.json" for the sequencer.
Debug configs are provided with the repository at `node_runner/configs/debug` and `sequencer_runner/configs/debug`; you can use them as-is or modify them as you wish.
For the sequencer:
```yaml
{
"home": ".",
"override_rust_log": null,
"genesis_id": 1,
"is_genesis_random": true,
"max_num_tx_in_block": 20,
"block_create_timeout_millis": 10000,
"port": 3040
}
```
* "home" shows relative path to directory with datebase.
* "override_rust_log" sets env var "RUST_LOG" to achieve different log levels(if null, using present "RUST_LOG" value).
* "genesis_id" is id of genesis block.
* "is_genesis_random" - flag to randomise forst block.
* "max_num_tx_in_block" - transaction mempool limit.
* "block_create_timeout_millis" - block timeout.
* "port" - port, which sequencer will listen.
For the node:
```yaml
{
"home": ".",
"override_rust_log": null,
"sequencer_addr": "http://127.0.0.1:3040",
"seq_poll_timeout_secs": 10,
"port": 3041
}
```
* "home" shows relative path to directory with datebase.
* "override_rust_log" sets env var "RUST_LOG" to achieve different log levels(if null, using present "RUST_LOG" value).
* "sequencer_addr" - address of sequencer.
* "seq_poll_timeout_secs" - polling interval on sequencer, in seconds.
* "port" - port, which sequencer will listen.
To run:
_FIRSTLY_, in the sequencer_runner directory:
```sh
RUST_LOG=info cargo run <path-to-configs>
```
_SECONDLY_, in the node_runner directory:
```sh
RUST_LOG=info cargo run <path-to-configs>
```
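For example, using the debug configs shipped with the repository (a sketch; it assumes each command is run from inside the respective runner directory, so the relative config path resolves):
```sh
# terminal 1, from sequencer_runner/
RUST_LOG=info cargo run configs/debug

# terminal 2, from node_runner/
RUST_LOG=info cargo run configs/debug
```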
# Node Public API
The node exposes a public API with mutable and immutable methods to create and send transactions.
## Standards
The node supports the JSON-RPC 2.0 standard; details can be found [here](https://www.jsonrpc.org/specification).
## API Structure
Right now the API has a single endpoint for every request ('/'), and the JSON-RPC 2.0 request structure is fairly simple:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": $string,
"params": $object
}
```
The response structure will look as follows:
Success:
```yaml
{
"jsonrpc": "2.0",
"result": $object,
"id": "dontcare"
}
```
Here $number_or_dontcare is an integer or the string "dontcare", $number is an integer, $string is a string, and $object is some JSON object.
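For example, a request can be sent to the single endpoint with plain curl (a sketch; it assumes a node running locally on port 3041, as in the debug config above):
```sh
curl -s -X POST http://127.0.0.1:3041/ \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"dontcare","method":"get_last_block","params":{}}'
```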
## Methods
* get_block
Get block data for a specific block number.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "get_block",
"params": {
"block_id": $number
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"block": $block
},
"id": $number_or_dontcare
}
```
There "block" field returns block for requested block id
* get_last_block
Get last block number.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "get_last_block",
"params": {}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"last_block": $number
},
"id": $number_or_dontcare
}
```
There "last_block" field returns number of last block
* write_register_account
Create new acccount with 0 public balance and no private UTXO.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "write_register_account",
"params": {}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"status": $string
},
"id": $number_or_dontcare
}
```
There "status" field shows address of generated account
* show_account_public_balance
Show account public balance, field "account_addr" can be taken from response in "write_register_account" request.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "show_account_public_balance",
"params": {
"account_addr": $string
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"addr": $string,
"balance": $number
},
"id": $number_or_dontcare
}
```
The fields in the response are self-explanatory.
* write_deposit_public_balance
Deposit public balance into an account. Any amount under u64::MAX can be deposited; the balance can overflow.
Due to the hashing process (transactions currently do not have a randomization factor), two deposits with the same amount cannot be sent to one account.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "write_deposit_public_balance",
"params": {
"account_addr": $string,
"amount": $number
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"status": "success"
},
"id": $number_or_dontcare
}
```
The fields in the response are self-explanatory.
* write_mint_utxo
Mint a private UTXO for an account.
Due to the hashing process (transactions currently do not have a randomization factor), two mints with the same amount cannot be sent to one account.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "write_mint_utxo",
"params": {
"account_addr": $string,
"amount": $number
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"status": "success",
"utxo": {
"asset": [$number],
"commitment_hash": $string,
"hash": $string
}
},
"id": $number_or_dontcare
}
```
There in "utxo" field "hash" is used for viewing purposes, field "commitment_hash" is used for sending purposes.
* show_account_utxo
Show UTXO data for account. "utxo_hash" there can be taken from "hash" field in response for "write_mint_utxo" request
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "show_account_utxo",
"params": {
"account_addr": $string,
"utxo_hash": $string
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"amount": $number,
"asset": [$number],
"hash": $string
},
"id": $number_or_dontcare
}
```
The fields in the response are self-explanatory.
* write_send_utxo_private
Send a UTXO from one account's private balance to a different account's private balance.
Both parties are hidden.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "write_send_utxo_private",
"params": {
"account_addr_sender": $string,
"account_addr_receiver": $string,
"utxo_hash": $string,
"utxo_commitment": $string
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"status": "success",
"utxo_result": {
"asset": [$number],
"commitment_hash": $string,
"hash": $string
}
},
"id": $number_or_dontcare
}
```
Be aware that during this action the old UTXO is nullified, and hence cannot be used anymore, even if still present in the owner's private state.
* write_send_utxo_deshielded
Send a UTXO from one account's private balance to another account's (not necessarily a different account) public balance.
The sender is hidden.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "write_send_utxo_deshielded",
"params": {
"account_addr_sender": $string,
"account_addr_receiver": $string,
"utxo_hash": $string,
"utxo_commitment": $string
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"status": "success"
},
"id": $number_or_dontcare
}
```
The fields in the response are self-explanatory.
* write_send_utxo_shielded
Send an amount from one account's public balance to another account's (not necessarily a different account) private balance.
The receiver is hidden.
Request:
```yaml
{
"jsonrpc": "2.0",
"id": $number_or_dontcare,
"method": "write_send_utxo_shielded",
"params": {
"account_addr_sender": $string,
"account_addr_receiver": $string,
"amount": $number
}
}
```
Response:
```yaml
{
"jsonrpc": "2.0",
"result": {
"status": "success",
"utxo_result": {
"asset": [$number],
"commitment_hash": $string,
"hash": $string
}
},
"id": $number_or_dontcare
}
```
The fields in the response are self-explanatory.
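Putting the methods together, a minimal end-to-end flow could look like the sketch below (assuming a node on the debug config port 3041; the <ADDR_1>, <ADDR_2>, <HASH>, and <COMMITMENT> placeholders stand for values returned by the earlier calls):
```sh
NODE=http://127.0.0.1:3041/

# 1. Register two accounts; the "status" field of each response holds the new address.
curl -s -X POST $NODE -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"dontcare","method":"write_register_account","params":{}}'

# 2. Mint a private UTXO for the first account; note the returned "hash" and "commitment_hash".
curl -s -X POST $NODE -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"dontcare","method":"write_mint_utxo","params":{"account_addr":"<ADDR_1>","amount":42}}'

# 3. Send the UTXO privately to the second account.
curl -s -X POST $NODE -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"dontcare","method":"write_send_utxo_private","params":{"account_addr_sender":"<ADDR_1>","account_addr_receiver":"<ADDR_2>","utxo_hash":"<HASH>","utxo_commitment":"<COMMITMENT>"}}'
```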

View File

@@ -42,3 +42,6 @@ path = "../key_protocol"
[dependencies.nssa]
path = "../nssa"
features = ["no_docker"]
[dependencies.key_protocol]
path = "../key_protocol"

View File

@@ -4,6 +4,7 @@
"genesis_id": 1,
"is_genesis_random": true,
"max_num_tx_in_block": 20,
"mempool_max_size": 10000,
"block_create_timeout_millis": 10000,
"port": 3040,
"initial_accounts": [

View File

@@ -16,13 +16,15 @@ use sequencer_runner::startup_sequencer;
use tempfile::TempDir;
use tokio::task::JoinHandle;
use crate::test_suite_map::prepare_function_map;
use crate::test_suite_map::{prepare_function_map, tps_test};
#[macro_use]
extern crate proc_macro_test_attribute;
pub mod test_suite_map;
mod tps_test_utils;
#[derive(Parser, Debug)]
#[clap(version)]
struct Args {
@@ -118,9 +120,12 @@ pub async fn main_tests_runner() -> Result<()> {
match test_name.as_str() {
"all" => {
// Tests that use default config
for (_, fn_pointer) in function_map {
fn_pointer(home_dir.clone()).await;
}
// Run TPS test with its own specific config
tps_test().await;
}
_ => {
let fn_pointer = function_map.get(&test_name).expect("Unknown test name");

View File

@@ -1,10 +1,20 @@
use std::{collections::HashMap, path::PathBuf, pin::Pin, time::Duration};
use anyhow::Result;
use std::{
collections::HashMap,
path::PathBuf,
pin::Pin,
time::{Duration, Instant},
};
use actix_web::dev::ServerHandle;
use common::{PINATA_BASE58, sequencer_client::SequencerClient};
use key_protocol::key_management::key_tree::chain_index::ChainIndex;
use log::info;
use nssa::{Address, ProgramDeploymentTransaction, program::Program};
use nssa_core::{NullifierPublicKey, encryption::shared_key_derivation::Secp256k1Point};
use sequencer_runner::startup_sequencer;
use tempfile::TempDir;
use tokio::task::JoinHandle;
use wallet::{
Command, SubcommandReturnValue, WalletCore,
cli::{
@@ -22,7 +32,8 @@ use crate::{
ACC_RECEIVER, ACC_RECEIVER_PRIVATE, ACC_SENDER, ACC_SENDER_PRIVATE,
NSSA_PROGRAM_FOR_TEST_DATA_CHANGER, TIME_TO_WAIT_FOR_BLOCK_SECONDS,
fetch_privacy_preserving_tx, make_private_account_input_from_str,
make_public_account_input_from_str,
make_public_account_input_from_str, replace_home_dir_with_temp_dir_in_configs,
tps_test_utils::TpsTestManager,
};
use crate::{post_test, pre_test, verify_commitment_is_in_state};
@@ -1640,3 +1651,83 @@ pub fn prepare_function_map() -> HashMap<String, TestFunction> {
function_map
}
#[allow(clippy::type_complexity)]
async fn pre_tps_test(
test: &TpsTestManager,
) -> Result<(ServerHandle, JoinHandle<Result<()>>, TempDir)> {
info!("Generating tps test config");
let mut sequencer_config = test.generate_tps_test_config();
info!("Done");
let temp_dir_sequencer = replace_home_dir_with_temp_dir_in_configs(&mut sequencer_config);
let (seq_http_server_handle, sequencer_loop_handle) =
startup_sequencer(sequencer_config).await?;
Ok((
seq_http_server_handle,
sequencer_loop_handle,
temp_dir_sequencer,
))
}
pub async fn tps_test() {
let num_transactions = 300 * 5;
let target_tps = 12;
let tps_test = TpsTestManager::new(target_tps, num_transactions);
let target_time = tps_test.target_time();
info!("Target time: {:?} seconds", target_time.as_secs());
let res = pre_tps_test(&tps_test).await.unwrap();
let wallet_config = fetch_config().await.unwrap();
let seq_client = SequencerClient::new(wallet_config.sequencer_addr.clone()).unwrap();
info!("TPS test begin");
let txs = tps_test.build_public_txs();
let now = Instant::now();
let mut tx_hashes = vec![];
for (i, tx) in txs.into_iter().enumerate() {
let tx_hash = seq_client.send_tx_public(tx).await.unwrap().tx_hash;
info!("Sent tx {i}");
tx_hashes.push(tx_hash);
}
for (i, tx_hash) in tx_hashes.iter().enumerate() {
loop {
if now.elapsed().as_millis() > target_time.as_millis() {
panic!("TPS test failed by timeout");
}
let tx_obj = seq_client
.get_transaction_by_hash(tx_hash.clone())
.await
.inspect_err(|err| {
log::warn!(
"Failed to get transaction by hash {tx_hash:#?} with error: {err:#?}"
)
});
if let Ok(tx_obj) = tx_obj
&& tx_obj.transaction.is_some()
{
info!("Found tx {i} with hash {tx_hash}");
break;
}
}
}
let time_elapsed = now.elapsed().as_secs();
info!("TPS test finished successfully");
info!("Target TPS: {}", target_tps);
info!(
"Processed {} transactions in {}s",
tx_hashes.len(),
time_elapsed
);
info!("Target time: {:?}s", target_time.as_secs());
post_test(res).await;
}

View File

@@ -0,0 +1,187 @@
use std::time::Duration;
use key_protocol::key_management::ephemeral_key_holder::EphemeralKeyHolder;
use nssa::{
Account, AccountId, Address, PrivacyPreservingTransaction, PrivateKey, PublicKey,
PublicTransaction,
privacy_preserving_transaction::{self as pptx, circuit},
program::Program,
public_transaction as putx,
};
use nssa_core::{
MembershipProof, NullifierPublicKey, account::AccountWithMetadata,
encryption::IncomingViewingPublicKey,
};
use sequencer_core::config::{AccountInitialData, CommitmentsInitialData, SequencerConfig};
pub(crate) struct TpsTestManager {
public_keypairs: Vec<(PrivateKey, Address)>,
target_tps: u64,
}
impl TpsTestManager {
/// Generates public account keypairs. These are used to populate the config and to generate valid
/// public transactions for the tps test.
pub(crate) fn new(target_tps: u64, number_transactions: usize) -> Self {
let public_keypairs = (1..(number_transactions + 2))
.map(|i| {
let mut private_key_bytes = [0u8; 32];
private_key_bytes[..8].copy_from_slice(&i.to_le_bytes());
let private_key = PrivateKey::try_new(private_key_bytes).unwrap();
let public_key = PublicKey::new_from_private_key(&private_key);
let address = Address::from(&public_key);
(private_key, address)
})
.collect::<Vec<_>>();
Self {
public_keypairs,
target_tps,
}
}
pub(crate) fn target_time(&self) -> Duration {
let number_transactions = (self.public_keypairs.len() - 1) as u64;
Duration::from_secs_f64(number_transactions as f64 / self.target_tps as f64)
}
/// Build a batch of public transactions to submit to the node.
pub fn build_public_txs(&self) -> Vec<PublicTransaction> {
// Create valid public transactions
let program = Program::authenticated_transfer_program();
let public_txs: Vec<PublicTransaction> = self
.public_keypairs
.windows(2)
.map(|pair| {
let amount: u128 = 1;
let message = putx::Message::try_new(
program.id(),
[pair[0].1, pair[1].1].to_vec(),
[0u128].to_vec(),
amount,
)
.unwrap();
let witness_set =
nssa::public_transaction::WitnessSet::for_message(&message, &[&pair[0].0]);
PublicTransaction::new(message, witness_set)
})
.collect();
public_txs
}
/// Generates a sequencer configuration with initial balance in a number of public accounts.
/// The transactions generated with the function `build_public_txs` will be valid in a node started
/// with the config from this method.
pub(crate) fn generate_tps_test_config(&self) -> SequencerConfig {
// Create initial data for the public accounts
let initial_public_accounts = self
.public_keypairs
.iter()
.map(|(_, addr)| AccountInitialData {
addr: addr.to_string(),
balance: 10,
})
.collect();
// Generate an initial commitment to be used with the privacy preserving transaction
// created with the `build_privacy_transaction` function.
let sender_nsk = [1; 32];
let sender_npk = NullifierPublicKey::from(&sender_nsk);
let account = Account {
balance: 100,
nonce: 0xdeadbeef,
program_owner: Program::authenticated_transfer_program().id(),
data: vec![],
};
let initial_commitment = CommitmentsInitialData {
npk: sender_npk,
account,
};
SequencerConfig {
home: ".".into(),
override_rust_log: None,
genesis_id: 1,
is_genesis_random: true,
max_num_tx_in_block: 300,
mempool_max_size: 10000,
block_create_timeout_millis: 12000,
port: 3040,
initial_accounts: initial_public_accounts,
initial_commitments: vec![initial_commitment],
signing_key: [37; 32],
}
}
}
/// Builds a single privacy transaction to use in stress tests. This involves generating a proof, so
/// it may take a while to run. In normal execution of the node this transaction will be accepted
/// only once. Disabling the node's nullifier uniqueness check allows this transaction to be submitted
/// multiple times for the purpose of testing the node's processing performance.
#[allow(unused)]
fn build_privacy_transaction() -> PrivacyPreservingTransaction {
let program = Program::authenticated_transfer_program();
let sender_nsk = [1; 32];
let sender_isk = [99; 32];
let sender_ipk = IncomingViewingPublicKey::from_scalar(sender_isk);
let sender_npk = NullifierPublicKey::from(&sender_nsk);
let sender_pre = AccountWithMetadata::new(
Account {
balance: 100,
nonce: 0xdeadbeef,
program_owner: program.id(),
data: vec![],
},
true,
AccountId::from(&sender_npk),
);
let recipient_nsk = [2; 32];
let recipient_isk = [99; 32];
let recipient_ipk = IncomingViewingPublicKey::from_scalar(recipient_isk);
let recipient_npk = NullifierPublicKey::from(&recipient_nsk);
let recipient_pre =
AccountWithMetadata::new(Account::default(), false, AccountId::from(&recipient_npk));
let eph_holder_from = EphemeralKeyHolder::new(&sender_npk);
let sender_ss = eph_holder_from.calculate_shared_secret_sender(&sender_ipk);
let sender_epk = eph_holder_from.generate_ephemeral_public_key();
let eph_holder_to = EphemeralKeyHolder::new(&recipient_npk);
let recipient_ss = eph_holder_to.calculate_shared_secret_sender(&recipient_ipk);
let recipient_epk = eph_holder_to.generate_ephemeral_public_key();
let balance_to_move: u128 = 1;
let proof: MembershipProof = (
1,
vec![[
170, 10, 217, 228, 20, 35, 189, 177, 238, 235, 97, 129, 132, 89, 96, 247, 86, 91, 222,
214, 38, 194, 216, 67, 56, 251, 208, 226, 0, 117, 149, 39,
]],
);
let (output, proof) = circuit::execute_and_prove(
&[sender_pre, recipient_pre],
&Program::serialize_instruction(balance_to_move).unwrap(),
&[1, 2],
&[0xdeadbeef1, 0xdeadbeef2],
&[
(sender_npk.clone(), sender_ss),
(recipient_npk.clone(), recipient_ss),
],
&[(sender_nsk, proof)],
&program,
)
.unwrap();
let message = pptx::message::Message::try_from_circuit_output(
vec![],
vec![],
vec![
(sender_npk, sender_ipk, sender_epk),
(recipient_npk, recipient_ipk, recipient_epk),
],
output,
)
.unwrap();
let witness_set = pptx::witness_set::WitnessSet::for_message(&message, proof, &[]);
pptx::PrivacyPreservingTransaction::new(message, witness_set)
}

View File

@@ -7,7 +7,7 @@ use storage::RocksDBIO;
pub struct SequecerBlockStore {
dbio: RocksDBIO,
// TODO: Consider adding the hashmap to the database for faster recovery.
tx_hash_to_block_map: HashMap<HashType, u64>,
pub tx_hash_to_block_map: HashMap<HashType, u64>,
pub genesis_id: u64,
pub signing_key: nssa::PrivateKey,
}
@@ -28,7 +28,7 @@ impl SequecerBlockStore {
HashMap::new()
};
let dbio = RocksDBIO::new(location, genesis_block)?;
let dbio = RocksDBIO::open_or_create(location, genesis_block)?;
let genesis_id = dbio.get_meta_first_block_in_db()?;
@@ -71,7 +71,7 @@ impl SequecerBlockStore {
}
}
fn block_to_transactions_map(block: &Block) -> HashMap<HashType, u64> {
pub(crate) fn block_to_transactions_map(block: &Block) -> HashMap<HashType, u64> {
block
.body
.transactions

View File

@@ -28,6 +28,8 @@ pub struct SequencerConfig {
pub is_genesis_random: bool,
///Maximum number of transactions in block
pub max_num_tx_in_block: usize,
///Mempool maximum size
pub mempool_max_size: usize,
///Interval in which blocks produced
pub block_create_timeout_millis: u64,
///Port to listen

View File

@@ -1,6 +1,8 @@
use std::fmt::Display;
use std::{fmt::Display, time::Instant};
use anyhow::Result;
#[cfg(feature = "testnet")]
use common::PINATA_BASE58;
use common::{
HashType,
block::HashableBlockData,
@@ -9,14 +11,16 @@ use common::{
use config::SequencerConfig;
use log::warn;
use mempool::MemPool;
use sequencer_store::SequecerChainStore;
use serde::{Deserialize, Serialize};
use crate::block_store::SequecerBlockStore;
pub mod block_store;
pub mod config;
pub mod sequencer_store;
pub struct SequencerCore {
pub store: SequecerChainStore,
pub state: nssa::V02State,
pub block_store: SequecerBlockStore,
pub mempool: MemPool<EncodedTransaction>,
pub sequencer_config: SequencerConfig,
pub chain_height: u64,
@@ -39,6 +43,24 @@ impl std::error::Error for TransactionMalformationError {}
impl SequencerCore {
pub fn start_from_config(config: SequencerConfig) -> Self {
let hashable_data = HashableBlockData {
block_id: config.genesis_id,
transactions: vec![],
prev_block_hash: [0; 32],
timestamp: 0,
};
let signing_key = nssa::PrivateKey::try_new(config.signing_key).unwrap();
let genesis_block = hashable_data.into_block(&signing_key);
//Sequencer should panic if unable to open the db,
//as fixing this issue may require actions outside the program's scope
let block_store = SequecerBlockStore::open_db_with_genesis(
&config.home.join("rocksdb"),
Some(genesis_block),
signing_key,
)
.unwrap();
let mut initial_commitments = vec![];
for init_comm_data in config.initial_commitments.clone() {
@@ -53,18 +75,47 @@
initial_commitments.push(comm);
}
Self {
store: SequecerChainStore::new_with_genesis(
&config.home,
config.genesis_id,
config.is_genesis_random,
&config.initial_accounts,
&initial_commitments,
nssa::PrivateKey::try_new(config.signing_key).unwrap(),
),
let init_accs: Vec<(nssa::Address, u128)> = config
.initial_accounts
.iter()
.map(|acc_data| (acc_data.addr.parse().unwrap(), acc_data.balance))
.collect();
let mut state = nssa::V02State::new_with_genesis_accounts(&init_accs, &initial_commitments);
#[cfg(feature = "testnet")]
state.add_pinata_program(PINATA_BASE58.parse().unwrap());
let mut this = Self {
state,
block_store,
mempool: MemPool::default(),
chain_height: config.genesis_id,
sequencer_config: config,
};
this.sync_state_with_stored_blocks();
this
}
/// If there are stored blocks ahead of the current height, this method will load and process all transactions
/// in them in the order they are stored. The NSSA state will be updated accordingly.
fn sync_state_with_stored_blocks(&mut self) {
let mut next_block_id = self.sequencer_config.genesis_id + 1;
while let Ok(block) = self.block_store.get_block_at_id(next_block_id) {
for encoded_transaction in block.body.transactions {
let transaction = NSSATransaction::try_from(&encoded_transaction).unwrap();
// Process transaction and update state
self.execute_check_transaction_on_state(transaction)
.unwrap();
// Update the tx hash to block id map.
self.block_store
.tx_hash_to_block_map
.insert(encoded_transaction.hash(), next_block_id);
}
self.chain_height = next_block_id;
next_block_id += 1;
}
}
@@ -103,7 +154,7 @@
})?;
let mempool_size = self.mempool.len();
if mempool_size >= self.sequencer_config.max_num_tx_in_block {
if mempool_size >= self.sequencer_config.mempool_max_size {
return Err(TransactionMalformationError::MempoolFullForRound);
}
@@ -122,20 +173,17 @@
) -> Result<NSSATransaction, nssa::error::NssaError> {
match &tx {
NSSATransaction::Public(tx) => {
self.store
.state
self.state
.transition_from_public_transaction(tx)
.inspect_err(|err| warn!("Error at transition {err:#?}"))?;
}
NSSATransaction::PrivacyPreserving(tx) => {
self.store
.state
self.state
.transition_from_privacy_preserving_transaction(tx)
.inspect_err(|err| warn!("Error at transition {err:#?}"))?;
}
NSSATransaction::ProgramDeployment(tx) => {
self.store
.state
self.state
.transition_from_program_deployment_transaction(tx)
.inspect_err(|err| warn!("Error at transition {err:#?}"))?;
}
@@ -146,6 +194,7 @@
///Produces new block from transactions in mempool
pub fn produce_new_block_with_mempool_transactions(&mut self) -> Result<u64> {
let now = Instant::now();
let new_block_height = self.chain_height + 1;
let mut num_valid_transactions_in_block = 0;
@@ -167,7 +216,6 @@
}
let prev_block_hash = self
.store
.block_store
.get_block_at_id(self.chain_height)?
.header
@@ -175,6 +223,8 @@
let curr_time = chrono::Utc::now().timestamp_millis() as u64;
let num_txs_in_block = valid_transactions.len();
let hashable_data = HashableBlockData {
block_id: new_block_height,
transactions: valid_transactions,
@@ -182,12 +232,18 @@
timestamp: curr_time,
};
let block = hashable_data.into_block(&self.store.block_store.signing_key);
let block = hashable_data.into_block(&self.block_store.signing_key);
self.store.block_store.put_block_at_id(block)?;
self.block_store.put_block_at_id(block)?;
self.chain_height = new_block_height;
log::info!(
"Created block with {} transactions in {} seconds",
num_txs_in_block,
now.elapsed().as_secs()
);
Ok(self.chain_height)
}
}
@@ -196,6 +252,7 @@
mod tests {
use base58::{FromBase58, ToBase58};
use common::test_utils::sequencer_sign_key_for_testing;
use nssa::PrivateKey;
use crate::config::AccountInitialData;
@@ -219,6 +276,7 @@
genesis_id: 1,
is_genesis_random: false,
max_num_tx_in_block: 10,
mempool_max_size: 10000,
block_create_timeout_millis: 1000,
port: 8080,
initial_accounts,
@@ -295,12 +353,10 @@
.unwrap();
let balance_acc_1 = sequencer
.store
.state
.get_account_by_address(&nssa::Address::new(acc1_addr))
.balance;
let balance_acc_2 = sequencer
.store
.state
.get_account_by_address(&nssa::Address::new(acc2_addr))
.balance;
@@ -354,7 +410,6 @@
assert_eq!(
10000,
sequencer
.store
.state
.get_account_by_address(&nssa::Address::new(acc1_addr))
.balance
@@ -362,7 +417,6 @@
assert_eq!(
20000,
sequencer
.store
.state
.get_account_by_address(&nssa::Address::new(acc2_addr))
.balance
@@ -531,12 +585,10 @@
.unwrap();
let bal_from = sequencer
.store
.state
.get_account_by_address(&nssa::Address::new(acc1))
.balance;
let bal_to = sequencer
.store
.state
.get_account_by_address(&nssa::Address::new(acc2))
.balance;
@@ -548,7 +600,7 @@
#[test]
fn test_push_tx_into_mempool_fails_mempool_full() {
let config = SequencerConfig {
max_num_tx_in_block: 1,
mempool_max_size: 1,
..setup_sequencer_config()
};
let mut sequencer = SequencerCore::start_from_config(config);
@@ -635,7 +687,6 @@
.produce_new_block_with_mempool_transactions()
.unwrap();
let block = sequencer
.store
.block_store
.get_block_at_id(current_height)
.unwrap();
@@ -678,7 +729,6 @@
.produce_new_block_with_mempool_transactions()
.unwrap();
let block = sequencer
.store
.block_store
.get_block_at_id(current_height)
.unwrap();
@@ -690,10 +740,59 @@
.produce_new_block_with_mempool_transactions()
.unwrap();
let block = sequencer
.store
.block_store
.get_block_at_id(current_height)
.unwrap();
assert!(block.body.transactions.is_empty());
}
#[test]
fn test_restart_from_storage() {
let config = setup_sequencer_config();
let acc1_addr: nssa::Address = config.initial_accounts[0].addr.parse().unwrap();
let acc2_addr: nssa::Address = config.initial_accounts[1].addr.parse().unwrap();
let balance_to_move = 13;
// In the following code block a transaction will be processed that moves `balance_to_move`
// from `acc_1` to `acc_2`. The block created with that transaction will be kept stored in
// the temporary directory for the block storage of this test.
{
let mut sequencer = SequencerCore::start_from_config(config.clone());
let signing_key = PrivateKey::try_new([1; 32]).unwrap();
let tx = common::test_utils::create_transaction_native_token_transfer(
*acc1_addr.value(),
0,
*acc2_addr.value(),
balance_to_move,
signing_key,
);
sequencer.mempool.push_item(tx.clone());
let current_height = sequencer
.produce_new_block_with_mempool_transactions()
.unwrap();
let block = sequencer
.block_store
.get_block_at_id(current_height)
.unwrap();
assert_eq!(block.body.transactions, vec![tx.clone()]);
}
// Instantiating a new sequencer from the same config. This should load the existing block
// with the above transaction and update the state to reflect that.
let sequencer = SequencerCore::start_from_config(config.clone());
let balance_acc_1 = sequencer.state.get_account_by_address(&acc1_addr).balance;
let balance_acc_2 = sequencer.state.get_account_by_address(&acc2_addr).balance;
// Balances should be consistent with the stored block
assert_eq!(
balance_acc_1,
config.initial_accounts[0].balance - balance_to_move
);
assert_eq!(
balance_acc_2,
config.initial_accounts[1].balance + balance_to_move
);
}
}

View File

@@ -104,10 +104,7 @@ impl JsonHandler {
let block = {
let state = self.sequencer_state.lock().await;
state
.store
.block_store
.get_block_at_id(get_block_req.block_id)?
state.block_store.get_block_at_id(get_block_req.block_id)?
};
let helperstruct = GetBlockDataResponse {
@@ -123,7 +120,7 @@
let genesis_id = {
let state = self.sequencer_state.lock().await;
state.store.block_store.genesis_id
state.block_store.genesis_id
};
let helperstruct = GetGenesisIdResponse { genesis_id };
@@ -176,7 +173,7 @@
let balance = {
let state = self.sequencer_state.lock().await;
let account = state.store.state.get_account_by_address(&address);
let account = state.state.get_account_by_address(&address);
account.balance
};
@@ -203,7 +200,7 @@
addresses
.into_iter()
.map(|addr| state.store.state.get_account_by_address(&addr).nonce)
.map(|addr| state.state.get_account_by_address(&addr).nonce)
.collect()
};
@@ -225,7 +222,7 @@
let account = {
let state = self.sequencer_state.lock().await;
state.store.state.get_account_by_address(&address)
state.state.get_account_by_address(&address)
};
let helperstruct = GetAccountResponse { account };
@@ -246,7 +243,6 @@
let transaction = {
let state = self.sequencer_state.lock().await;
state
.store
.block_store
.get_transaction_by_hash(hash)
.map(|tx| borsh::to_vec(&tx).unwrap())
@@ -265,7 +261,6 @@
let membership_proof = {
let state = self.sequencer_state.lock().await;
state
.store
.state
.get_proof_for_commitment(&get_proof_req.commitment)
};
@@ -361,6 +356,7 @@
genesis_id: 1,
is_genesis_random: false,
max_num_tx_in_block: 10,
mempool_max_size: 1000,
block_create_timeout_millis: 1000,
port: 8080,
initial_accounts,

View File

@@ -22,6 +22,7 @@ path = "../sequencer_rpc"
[dependencies.sequencer_core]
path = "../sequencer_core"
features = ["testnet"]
[dependencies.common]
path = "../common"

View File

@@ -4,6 +4,7 @@
"genesis_id": 1,
"is_genesis_random": true,
"max_num_tx_in_block": 20,
"mempool_max_size": 1000,
"block_create_timeout_millis": 10000,
"port": 3040,
"initial_accounts": [

View File

@@ -44,7 +44,7 @@ pub struct RocksDBIO {
}
impl RocksDBIO {
pub fn new(path: &Path, start_block: Option<Block>) -> DbResult<Self> {
pub fn open_or_create(path: &Path, start_block: Option<Block>) -> DbResult<Self> {
let mut cf_opts = Options::default();
cf_opts.set_max_write_buffer_number(16);
//ToDo: Add more column families for different data
@@ -74,7 +74,6 @@
let block_id = block.header.block_id;
dbio.put_meta_first_block_in_db(block)?;
dbio.put_meta_is_first_block_set()?;
dbio.put_meta_last_block_in_db(block_id)?;
Ok(dbio)