nimbus-eth1/search/search_index.json

1 line
44 KiB
JSON

{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"The Nimbus Fluffy Guide","text":"<p>Fluffy is the Nimbus client implementation of the Portal network specifications.</p> <p>The Portal Network aims to deliver a reliable, sync-free, and decentralized access to the Ethereum blockchain. The network can be used by a light client to get access to Ethereum data and as such become a drop-in replacement for full nodes by providing that data through the existing Ethereum JSON RPC Execution API.</p> <p>This book describes how to build, run and monitor the Fluffy client, and how to use and test its currently implemented functionality.</p> <p>To quickly get your Fluffy node up and running, follow the quickstart page:</p> <ul> <li>Quickstart for Linux / macOS users</li> <li>Quickstart for Windows users</li> </ul>"},{"location":"index.html#development-status","title":"Development status","text":"<p>The Portal Network is a project still in research phase. This client is thus still experimental.</p> <p>However, the Portal history and Portal beacon sub-networks are already operational and can be tested on the public testnet or in a local testnet.</p>"},{"location":"index.html#get-in-touch","title":"Get in touch","text":"<p>Need help with anything? Join us on Status and Discord.</p>"},{"location":"index.html#donate","title":"Donate","text":"<p>If you'd like to contribute to Nimbus development:</p> <ul> <li>Our donation address is <code>0xDeb4A0e8d9a8dB30a9f53AF2dCc9Eb27060c6557</code></li> <li>We're also listed on GitCoin</li> </ul>"},{"location":"index.html#disclaimer","title":"Disclaimer","text":"<p>This documentation assumes Nimbus Fluffy is in its ideal state. The project is still under heavy development. Please submit a Github issue if you come across a problem.</p>"},{"location":"access-content.html","title":"Access content on the Portal network","text":"<p>Once you have a Fluffy node connected to network with the JSON-RPC interface enabled, then you can access the content available on the Portal network.</p> <p>You can for example access execution layer blocks through the standardized JSON-RPC call <code>eth_getBlockByHash</code>:</p> <pre><code># Get the hash of a block from your favorite block explorer, e.g.:\nBLOCKHASH=0x34eea44911b19f9aa8c72f69bdcbda3ed933b11a940511b6f3f58a87427231fb # Replace this to the block hash of your choice\n# Run this command to get this block:\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"eth_getBlockByHash\",\"params\":[\"'${BLOCKHASH}'\", true]}' http://localhost:8545 | jq\n</code></pre> <p>Note</p> <p>The Portal testnet is slowly being filled up with historical data through bridge nodes. Because of this, more recent history data is more likely to be available.</p> <p>You can also use our <code>blockwalk</code> tool to walk down the blocks one by one: <pre><code>make blockwalk\n\nBLOCKHASH=0x34eea44911b19f9aa8c72f69bdcbda3ed933b11a940511b6f3f58a87427231fb # Replace this to the block hash of your choice\n./build/blockwalk --block-hash:${BLOCKHASH}\n</code></pre></p>"},{"location":"basics-for-developers.html","title":"The basics for developers","text":"<p>When working on Fluffy in the nimbus-eth1 repository, you can run the <code>env.sh</code> script to run a command with the right environment variables set. This means the vendored Nim and Nim modules will be used, just as when you use <code>make</code>.</p> <p>E.g.:</p> <pre><code># start a new interactive shell with the right env vars set\n./env.sh bash\n</code></pre> <p>More development tips can be found on the general nimbus-eth1 readme.</p> <p>The code follows the Status Nim Style Guide.</p>"},{"location":"basics-for-developers.html#nim-code-formatting","title":"Nim code formatting","text":"<p>The fluffy codebase is formatted with nph. Check out the this page on how to install nph.</p> <p>The fluffy CI tests check the code formatting according to the style rules of nph. Developers will need to make sure the code changes in PRs are formatted as such.</p> <p>Note</p> <p>In the future the nph formatting might be added within the build environment make targets or similar, but currently it is a manual step that developers will need to perform.</p>"},{"location":"beacon-content-bridging.html","title":"Bridging content into the Portal beacon network","text":""},{"location":"beacon-content-bridging.html#seeding-from-content-bridges","title":"Seeding from content bridges","text":"<p>Run a Fluffy node with the JSON-RPC API enabled.</p> <pre><code>./build/fluffy --rpc\n</code></pre> <p>Build &amp; run the <code>portal_bridge</code> for the beacon network: <pre><code>make portal_bridge\n\nTRUSTED_BLOCK_ROOT=0x1234567890123456789012345678901234567890123456789012345678901234 # Replace with trusted block root.\n# --rest-url = access to beacon node API, default http://127.0.0.1:5052\n./build/portal_bridge beacon --trusted-block-root:${TRUSTED_BLOCK_ROOT} --rest-url:http://127.0.0.1:5052\n</code></pre></p> <p>The <code>portal_bridge</code> will connect to Fluffy node over the JSON-RPC interface and start gossiping an <code>LightClientBootstrap</code> for given trusted block root and gossip backfill <code>LightClientUpdate</code>s.</p> <p>Next, it will gossip a new <code>LightClientOptimisticUpdate</code>, <code>LightClientFinalityUpdate</code> and <code>LightClientUpdate</code> as they become available.</p>"},{"location":"build-from-source.html","title":"Build from source","text":"<p>Building Fluffy from source ensures that all hardware-specific optimizations are turned on. The build process itself is simple and fully automated, but may take a few minutes.</p> <p>Nim</p> <p>Fluffy is written in the Nim programming language. The correct version will automatically be downloaded as part of the build process!</p>"},{"location":"build-from-source.html#prerequisites","title":"Prerequisites","text":"<p>Make sure you have all needed prerequisites.</p>"},{"location":"build-from-source.html#building-the-fluffy-client","title":"Building the Fluffy client","text":""},{"location":"build-from-source.html#1-clone-the-nimbus-eth1-repository","title":"1. Clone the <code>nimbus-eth1</code> repository","text":"<pre><code>git clone git@github.com:status-im/nimbus-eth1.git\ncd nimbus-eth1\n</code></pre>"},{"location":"build-from-source.html#2-run-the-fluffy-build-process","title":"2. Run the Fluffy build process","text":"<p>To build Fluffy and its dependencies, run:</p> <pre><code>make fluffy\n</code></pre> <p>This step can take several minutes. After it has finished, you can check if the build was successful by running:</p> <pre><code># See available command line options\n./build/fluffy --help\n</code></pre> <p>If you see the command-line options, your installation was successful! Otherwise, don't hesitate to reach out to us in the <code>#nimbus-fluffy</code> channel of our discord.</p>"},{"location":"build-from-source.html#keeping-fluffy-updated","title":"Keeping Fluffy updated","text":"<p>When you decide to upgrade Fluffy to a newer version, make sure to follow the how to upgrade page.</p>"},{"location":"connect-to-portal.html","title":"Connect to the Portal network","text":"<p>Connecting to the current Portal network is as easy as running following command:</p> <pre><code>./build/fluffy --rpc\n</code></pre> <p>This will connect to the public Portal mainnet which contains nodes of the different clients.</p> <p>Note</p> <p>By default the Fluffy node will connect to the bootstrap nodes of the public mainnet.</p> <p>When testing locally the <code>--network:none</code> option can be provided to avoid connecting to any of the default bootstrap nodes.</p> <p>The <code>--rpc</code> option will also enable the different JSON-RPC interfaces through which you can access the Portal Network.</p> <p>Fluffy fully supports the Portal Network JSON-RPC Specification.</p> <p>Fluffy also supports a small subset of the Execution JSON-RPC API.</p> <p>Note</p> <p>The end goal is to be able to fully support the Execution JSON-RPC API, however currently not all Portal networks are specified, implemented or rolled out to be able to provide this.</p>"},{"location":"db_pruning.html","title":"Database pruning","text":"<p>Default Fluffy runs with a specific storage capacity (<code>--storage-capacity=x</code>, default set to 2GB). This means that the node's radius is dynamically adjusted to not exceed the configured capacity. As soon as the storage capacity is to be exceeded the pruning of content takes place and a new smaller radius is set.</p> <p>As long as the configured storage capacity remains the same, pruning is done automatically.</p> <p>In case the storage capacity of a Fluffy node is changed, a manual step might be required. There are two scenarios possible: - Adjusting to a higher storage capacity - Adjusting to a lower storage capacity</p>"},{"location":"db_pruning.html#adjusting-to-a-higher-storage-capacity","title":"Adjusting to a higher storage capacity","text":"<p>This requires no manual steps as no pruning will be required. On the restart of the Fluffy node with a higher configured storage capacity, the initial radius will be increased to the maximum radius until the new storage capacity is reached. Then the automatic pruning will take place and the radius will be decreased.</p>"},{"location":"db_pruning.html#adjusting-to-a-lower-storage-capacity","title":"Adjusting to a lower storage capacity","text":"<p>When a Fluffy node is restarted with a lower storage capacity, pruning will take place automatically. The database will be pruned in intervals until the storage drops under the newly configured storage capacity. The radius will also be adjusted with each pruning cycle.</p> <p>However, on disk the database will not lower in size. This is because empty pages are kept in the SQL database until a vacuum command is done. To do this you can run the <code>--force-prune</code> option at start-up. Note that this will temporarily double the database storage capacity as a temporary copy of the database needs to be made. Because of this, the vacuum is not executed automatically but requires you to manually enable the <code>--force-prune</code> flag.</p> <p>You can also use the <code>fcli_db</code> tool its <code>prune</code> command on the database directly to force this vacuuming.</p> <p>Another simple but more drastic solution is to delete the <code>db</code> subdirectory in the <code>--data-dir</code> provided to your Fluffy node. This will start your Fluffy node with a fresh database.</p>"},{"location":"eth-data-exporter.html","title":"Exporting Ethereum content for Portal","text":""},{"location":"eth-data-exporter.html#eth_data_exporter","title":"eth_data_exporter","text":"<p>The <code>eth_data_exporter</code> is a tool to extract content from Ethereum EL or CL and prepare it as Portal content and content keys.</p> <p>The <code>eth_data_exporter</code> can export data for different Portal networks. Currently the <code>history</code> and the <code>beacon</code> networks are supported.</p> <p>Example commands:</p> <pre><code># Build the tool\nmake eth_data_exporter\n# See the different commands and options\n./build/eth_data_exporter --help\n</code></pre> <pre><code># Request of `BeaconLightClientUpdate`s and export into the Portal\n# network supported format\n./build/eth_data_exporter beacon exportLCUpdates --rest-url:http://testing.mainnet.beacon-api.nimbus.team --start-period:816 --count:4\n</code></pre>"},{"location":"fluffy-with-portal-hive.html","title":"Fluffy with Portal-hive","text":"<p>Fluffy is one of the Portal clients that is being tested with hive.</p> <p>To see the status of the tests for the current version you can access https://portal-hive.ethdevops.io/.</p>"},{"location":"fluffy-with-portal-hive.html#run-the-hive-tests-locally","title":"Run the hive tests locally","text":"<p>Build hive:</p> <pre><code>git clone https://github.com/ethereum/hive.git\ncd ./hive\ngo build .\n</code></pre> <p>Example commands for running test suites:</p> <pre><code># Run the history tests with the 3 different clients\n./hive --sim history --client fluffy,trin,ultralight\n\n# Run the state tests with only the fluffy client\n./hive --sim state --client fluffy\n\n# Access results through web-ui:\n```sh\ngo build ./cmd/hiveview\n./hiveview --serve --logdir ./workspace/logs\n</code></pre> <p>Note</p> <p>You can see all the implemented simulators in https://github.com/ethereum/hive/tree/master/simulators</p>"},{"location":"fluffy-with-portal-hive.html#build-a-local-development-docker-image-for-portal-hive","title":"Build a local development Docker image for portal-hive","text":"<p>To debug &amp; develop Fluffy code against portal-hive tests you might want to create a local development docker image for Fluffy.</p> <p>To do that follow next steps:</p> <p>1) Clone and build portal-hive, see above.</p> <p>2) Build the local development Docker image using the following command: <pre><code>docker build --tag fluffy-dev --file ./fluffy/tools/docker/Dockerfile.portalhive .\n</code></pre></p> <p>3) Modify the <code>FROM</code> tag in the portal-hive <code>Dockerfile</code> of fluffy at <code>portal-hive/clients/fluffy/Dockerfile</code> to use the image that was buid in step 2.</p> <p>4) Run the tests as usually.</p> <p>Warning</p> <p>The <code>./vendors</code> dir is dockerignored and cached. If you have to make local changes to one of the dependencies in that directory you will have to remove <code>vendors/</code> from <code>./fluffy/tools/docker/Dockerfile.portalhive.dockerignore</code>.</p>"},{"location":"history-content-bridging.html","title":"Bridging content: Portal history network","text":""},{"location":"history-content-bridging.html#from-content-bridges","title":"From content bridges","text":""},{"location":"history-content-bridging.html#seeding-history-data-with-the-portal_bridge","title":"Seeding history data with the <code>portal_bridge</code>","text":""},{"location":"history-content-bridging.html#step-1-run-a-portal-client","title":"Step 1: Run a Portal client","text":"<p>Run a Portal client with the Portal JSON-RPC API enabled, e.g. fluffy:</p> <pre><code>./build/fluffy --rpc --storage-capacity:0\n</code></pre> <p>Note: The <code>--storage-capacity:0</code> option is not required, but it is added here for the use case where the node its only focus is on gossiping content from the <code>portal_bridge</code>.</p>"},{"location":"history-content-bridging.html#step-2-run-an-el-client","title":"Step 2: Run an EL client","text":"<p>The <code>portal_bridge</code> needs access to the EL JSON-RPC API, either through a local Ethereum client or via a web3 provider.</p>"},{"location":"history-content-bridging.html#step-3-run-the-portal-bridge-in-history-mode","title":"Step 3: Run the Portal bridge in history mode","text":"<p>Build &amp; run the <code>portal_bridge</code>: <pre><code>make portal_bridge\n\nWEB3_URL=\"http://127.0.0.1:8546\" # Replace with your provider.\n./build/portal_bridge history --web3-url:${WEB3_URL}\n</code></pre></p> <p>Default the portal_bridge will run in <code>--latest</code> mode, which means that only the latest block content will be gossiped into the network.</p> <p>The portal_bridge also has a <code>--backfill</code> mode which will gossip pre-merge blocks from <code>era1</code> files into the network. Default the bridge will audit first whether the content is available on the network and if not it will gossip it into the network.</p> <p>E.g. run latest + backfill with audit mode: <pre><code>WEB3_URL=\"http://127.0.0.1:8546\" # Replace with your provider.\n./build/portal_bridge history --latest:true --backfill:true --audit:true --era1-dir:/somedir/era1/ --web3-url:${WEB3_URL}\n</code></pre></p>"},{"location":"history-content-bridging.html#seeding-post-merge-history-data-with-the-beacon_lc_bridge","title":"Seeding post-merge history data with the <code>beacon_lc_bridge</code>","text":"<p>The <code>beacon_lc_bridge</code> is more of a standalone bridge that does not require access to a full node with its EL JSON-RPC API. However it is also more limited in the functions it provides. It will start with the consensus light client sync and follow beacon block gossip. Once it is synced, the execution payload of new beacon blocks will be extracted and injected in the Portal network as execution headers and blocks.</p> <p>Note: The execution headers will come without a proof.</p> <p>The injection into the Portal network is done via the <code>portal_historyGossip</code> JSON-RPC endpoint of the running Fluffy node.</p> <p>Note: Backfilling of block bodies and headers is not yet supported.</p> <p>Run a Fluffy node with the JSON-RPC API enabled.</p> <pre><code>./build/fluffy --rpc\n</code></pre> <p>Build &amp; run the <code>beacon_lc_bridge</code>: <pre><code>make beacon_lc_bridge\n\nTRUSTED_BLOCK_ROOT=0x1234567890123456789012345678901234567890123456789012345678901234 # Replace with trusted block root.\n./build/beacon_lc_bridge --trusted-block-root=${TRUSTED_BLOCK_ROOT}\n</code></pre></p>"},{"location":"history-content-bridging.html#from-locally-stored-block-data","title":"From locally stored block data","text":""},{"location":"history-content-bridging.html#building-and-seeding-epoch-accumulators","title":"Building and seeding epoch accumulators","text":""},{"location":"history-content-bridging.html#step-1-building-the-epoch-accumulators","title":"Step 1: Building the epoch accumulators","text":"<ol> <li> <p>Set-up access to an Ethereum JSON-RPC endpoint (e.g. local geth instance) that can serve the data.</p> </li> <li> <p>Use the <code>eth_data_exporter</code> tool to download and store all block headers into *.e2s files arranged per epoch (8192 blocks):</p> </li> </ol> <pre><code>make eth_data_exporter\n\n./build/eth_data_exporter history exportEpochHeaders --data-dir:\"./user_data_dir/\"\n</code></pre> <p>This will store all block headers up till the merge block into *.e2s files in the assigned <code>--data-dir</code>.</p> <ol> <li>Build the master accumulator and the epoch accumulators:</li> </ol> <pre><code>./build/eth_data_exporter history exportAccumulatorData --writeEpochAccumulators --data-dir:\"./user_data_dir/\"\n</code></pre>"},{"location":"history-content-bridging.html#step-2-seed-the-epoch-accumulators-into-the-portal-network","title":"Step 2: Seed the epoch accumulators into the Portal network","text":"<p>Run Fluffy and trigger the propagation of data with the <code>portal_history_propagateEpochAccumulators</code> JSON-RPC API call:</p> <pre><code>./build/fluffy --rpc\n\n# From another terminal\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"portal_history_propagateEpochAccumulators\",\"params\":[\"./user_data_dir/\"]}' http://localhost:8545 | jq\n</code></pre>"},{"location":"history-content-bridging.html#step-3-optional-verify-that-all-epoch-accumulators-are-available","title":"Step 3 (Optional): Verify that all epoch accumulators are available","text":"<p>Run Fluffy and run the <code>content_verifier</code> tool to verify that all epoch accumulators are available on the history network:</p> <p>Make sure you still have a fluffy instance running, if not run: <pre><code>./build/fluffy --rpc\n</code></pre></p> <p>Run the <code>content_verifier</code> tool and see if all epoch accumulators are found: <pre><code>make content_verifier\n./build/content_verifier\n</code></pre></p>"},{"location":"history-content-bridging.html#downloading-seeding-block-data","title":"Downloading &amp; seeding block data","text":"<ol> <li>Set-up access to an Ethereum JSON-RPC endpoint (e.g. local geth instance) that can serve the data.</li> <li>Use the <code>eth_data_exporter</code> tool to download history data through the JSON-RPC endpoint into the format which is suitable for reading data into Fluffy client and propagating into the network:</li> </ol> <pre><code>make eth_data_exporter\n\n./build/eth_data_exporter history exportBlockData--initial-block:1 --end-block:10 --data-dir:\"/user_data_dir/\"\n</code></pre> <p>This will store blocks 1 to 10 into a json file located at <code>./user_data_dir/eth-history-data.json</code>.</p> <ol> <li>Run Fluffy and trigger the propagation of data with the <code>portal_history_propagate</code> JSON-RPC API call:</li> </ol> <pre><code>./build/fluffy --rpc\n\n# From another shell\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"portal_history_propagate\",\"params\":[\"./user_data_dir/eth-history-data.json\"]}' http://localhost:8545 | jq\n</code></pre>"},{"location":"metrics.html","title":"Metrics and their visualisation","text":"<p>In this page we'll cover how to enable metrics and how to use Grafana and Prometheus to help you visualize these real-time metrics concerning the Fluffy node.</p>"},{"location":"metrics.html#enable-metrics-in-fluffy","title":"Enable metrics in Fluffy","text":"<p>To enable metrics run Fluffy with the <code>--metrics</code> flag: <pre><code>./build/fluffy --metrics\n</code></pre> Default the metrics are available at http://127.0.0.1:8008/metrics.</p> <p>The address can be changed with the <code>--metrics-address</code> and <code>--metrics-port</code> options.</p> <p>This provides only a snapshot of the current metrics. In order track the metrics over time and to also visualise them one can use for example Prometheus and Grafana.</p>"},{"location":"metrics.html#visualisation-through-prometheus-and-grafana","title":"Visualisation through Prometheus and Grafana","text":"<p>The steps on how to set up metrics visualisation with Prometheus and Grafana is explained in this guide.</p> <p>A Fluffy specific dashboard can be found here.</p> <p>This is the dashboard used for our Fluffy testnet fleet. In order to use it locally, you will have to remove the <code>{job=\"nimbus-fluffy-metrics\"}</code> part from the <code>instance</code> and <code>container</code> variables queries in the dashboard settings. Or they can also be changed to a constant value.</p> <p>The other option would be to remove those variables and remove their usage in each panel query.</p>"},{"location":"prerequisites.html","title":"Prerequisites","text":"<p>The Fluffy client runs on Linux, macOS, Windows, and Android.</p>"},{"location":"prerequisites.html#build-prerequisites","title":"Build prerequisites","text":"<p>When building from source, you will need additional build dependencies to be installed:</p> <ul> <li>Developer tools (C compiler, Make, Bash, Git 2.9.4 or newer)</li> <li>CMake</li> </ul> LinuxmacOSWindowsAndroid <p>On common Linux distributions the dependencies can be installed with:</p> <pre><code># Debian and Ubuntu\nsudo apt-get install build-essential git cmake\n\n# Fedora\ndnf install @development-tools cmake\n\n# Arch Linux, using an AUR manager\nyourAURmanager -S base-devel cmake\n</code></pre> <p>With Homebrew:</p> <pre><code>brew install cmake\n</code></pre> <p>To build Fluffy on Windows, the MinGW-w64 build environment is recommended.</p> <ul> <li> <p>Install Mingw-w64 for your architecture using the \"MinGW-W64 Online Installer\":</p> <ol> <li>Select your architecture in the setup menu (<code>i686</code> on 32-bit, <code>x86_64</code> on 64-bit).</li> <li>Set threads to <code>win32</code>.</li> <li>Set exceptions to \"dwarf\" on 32-bit and \"seh\" on 64-bit.</li> <li>Change the installation directory to <code>C:\\mingw-w64</code> and add it to your system PATH in <code>\"My Computer\"/\"This PC\" -&gt; Properties -&gt; Advanced system settings -&gt; Environment Variables -&gt; Path -&gt; Edit -&gt; New -&gt; C:\\mingw-w64\\mingw64\\bin</code> (<code>C:\\mingw-w64\\mingw32\\bin</code> on 32-bit).</li> </ol> <p>Note</p> <p>If the online installer isn't working you can try installing <code>mingw-w64</code> through MSYS2.</p> </li> <li> <p>Install CMake.</p> </li> <li> <p>Install Git for Windows and use a \"Git Bash\" shell to clone nimbus-eth1 and build Fluffy.</p> </li> </ul> <ul> <li>Install the Termux app from FDroid or the Google Play store</li> <li>Install a PRoot of your choice following the instructions for your preferred distribution. The Ubuntu PRoot is known to contain all Fluffy prerequisites compiled on Arm64 architecture (the most common architecture for Android devices).</li> </ul> <p>Assuming you use Ubuntu PRoot:</p> <pre><code>apt install build-essential git\n</code></pre>"},{"location":"protocol-interop-testing.html","title":"Protocol Interoperability Testing","text":"<p>This document shows a set of commands that can be used to test the individual protocol messages per network (Discovery v5 and Portal networks), e.g. to test client protocol interoperability.</p> <p>Two ways are explained, the first, by keeping a node running and interacting with it through the JSON-RPC service. The second, by running cli applications that attempt to send 1 specific message and then shutdown.</p> <p>The first is more powerful and complete, the second one might be easier to do some quick testing.</p>"},{"location":"protocol-interop-testing.html#run-fluffy-and-test-protocol-messages-via-json-rpc-api","title":"Run Fluffy and test protocol messages via JSON-RPC API","text":"<p>First build Fluffy as explained here.</p> <p>Next run it with the JSON-RPC server enabled: <pre><code>./build/fluffy --rpc --bootstrap-node:enr:&lt;base64 encoding of ENR&gt;\n</code></pre></p>"},{"location":"protocol-interop-testing.html#testing-discovery-v5-layer","title":"Testing Discovery v5 Layer","text":"<p>Testing the Discovery v5 protocol messages:</p> <pre><code># Ping / Pong\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"discv5_ping\",\"params\":[\"enr:&lt;base64 encoding of ENR&gt;\"]}' http://localhost:8545 | jq\n\n# FindNode / Nodes\n# Extra parameter is an array of requested logarithmic distances\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"discv5_findNode\",\"params\":[\"enr:&lt;base64 encoding of ENR&gt;\", [254, 255, 256]]}' http://localhost:8545 | jq\n\n# TalkReq / TalkResp\n# Extra parameters are the protocol id and the request byte string, hex encoded.\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"discv5_talkReq\",\"params\":[\"enr:&lt;base64 encoding of ENR&gt;\", \"\", \"\"]}' http://localhost:8545 | jq\n\n# Read out the discover v5 routing table contents\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"discv5_routingTableInfo\",\"params\":[]}' http://localhost:8545 | jq\n</code></pre>"},{"location":"protocol-interop-testing.html#testing-portal-networks-layer","title":"Testing Portal Networks Layer","text":"<p>Testing the Portal wire protocol messages:</p> <pre><code># Ping / Pong\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"portal_statePing\",\"params\":[\"enr:&lt;base64 encoding of ENR&gt;\"]}' http://localhost:8545 | jq\n\n# FindNode / Nodes\n# Extra parameter is an array of requested logarithmic distances\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"portal_stateFindNodes\",\"params\":[\"enr:&lt;base64 encoding of ENR&gt;\", [254, 255, 256]]}' http://localhost:8545 | jq\n\n# FindContent / Content\n# A request with an invalid content key will not receive a response\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"portal_stateFindContent\",\"params\":[\"enr:&lt;base64 encoding of ENR&gt;\", \"02829bd824b016326a401d083b33d092293333a830d1c390624d3bd4e409a61a858e5dcc5517729a9170d014a6c96530d64dd8621d\"]}' http://localhost:8545 | jq\n\n# Read out the Portal state network routing table contents\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"portal_stateRoutingTableInfo\",\"params\":[]}' http://localhost:8545 | jq\n</code></pre> <p>The <code>portal_state_</code> prefix can be replaced for testing other networks such as <code>portal_history_</code>.</p>"},{"location":"protocol-interop-testing.html#test-discovery-and-portal-wire-protocol-messages-with-cli-tools","title":"Test Discovery and Portal Wire protocol messages with cli tools","text":""},{"location":"protocol-interop-testing.html#testing-discovery-v5-layer-dcli","title":"Testing Discovery v5 Layer: dcli","text":"<pre><code># Build dcli from nim-eth vendor module\n(cd vendor/nim-eth/; ../../env.sh nimble build_dcli)\n</code></pre> <p>With the <code>dcli</code> tool you can test the individual Discovery v5 protocol messages, e.g.:</p> <pre><code># Test Discovery Ping, should print the content of ping message\n./vendor/nim-eth/build/dcli ping enr:&lt;base64 encoding of ENR&gt;\n\n# Test Discovery FindNode, should print the content of the returned ENRs\n# Default a distance of 256 is requested, change this with --distance argument\n./vendor/nim-eth/build/dcli findnode enr:&lt;base64 encoding of ENR&gt;\n\n# Test Discovery TalkReq, should print the TalkResp content\n./vendor/nim-eth/build/dcli talkreq enr:&lt;base64 encoding of ENR&gt;\n</code></pre> <p>Each <code>dcli</code> run will default generate a new network key and thus a new node id and ENR.</p>"},{"location":"protocol-interop-testing.html#testing-portal-networks-layer-portalcli","title":"Testing Portal Networks Layer: portalcli","text":"<pre><code># Build portalcli\nmake portalcli\n</code></pre> <p>With the <code>portalcli</code> tool you can test the individual Portal wire protocol messages, e.g.:</p> <pre><code># Test Portal wire Ping, should print the content of ping message\n./build/portalcli ping enr:&lt;base64 encoding of ENR&gt;\n\n# Test Portal wire FindNode, should print the content of the returned ENRs\n# Default a distance of 256 is requested, change this with --distance argument\n./build/portalcli findnodes enr:&lt;base64 encoding of ENR&gt;\n\n# Test Portal wire FindContent, should print the returned content\n./build/portalcli findcontent enr:&lt;base64 encoding of ENR&gt;\n\n# Default the history network is tested, but you can provide another protocol id\n./build/portalcli ping enr:&lt;base64 encoding of ENR&gt; --protocol-id:0x500B\n</code></pre> <p>Each <code>portalcli</code> run will default generate a new network key and thus a new node id and ENR.</p>"},{"location":"quick-start-docker.html","title":"Quick start - Docker","text":"<p>This page takes you through the steps of getting the Fluffy Portal node running on the public network by use of the public Docker image.</p> <p>The docker image gets currently rebuild from latest master every night.</p>"},{"location":"quick-start-docker.html#steps","title":"Steps","text":"<p>To be added.</p>"},{"location":"quick-start-windows.html","title":"Quick start - Windows","text":"<p>This page takes you through the steps of getting the Fluffy Portal node running on the public network.</p> <p>The guide assumes Windows is being used. For Linux/macOS users follow this tutorial.</p> <p>Notice</p> <p>Running Fluffy on Windows is more experimental and less tested!</p>"},{"location":"quick-start-windows.html#steps","title":"Steps","text":""},{"location":"quick-start-windows.html#prerequisites","title":"Prerequisites","text":"<ul> <li>Developer tools (C compiler, Make, Bash, CMake, Git 2.9.4 or newer)</li> </ul> <p>If you need help installing these tools, you can consult our prerequisites page.</p> <p>Note</p> <p>To build Fluffy on Windows, the MinGW-w64 build environment is recommended. The build commands in the rest of this page assume the MinGW build environment is used.</p>"},{"location":"quick-start-windows.html#build-the-fluffy-client","title":"Build the Fluffy client","text":"<pre><code>git clone git@github.com:status-im/nimbus-eth1.git\ncd nimbus-eth1\nmingw32-make fluffy\n\n# Test if binary was successfully build by running the help command.\n./build/fluffy --help\n</code></pre>"},{"location":"quick-start-windows.html#run-a-fluffy-client-on-the-public-testnet","title":"Run a Fluffy client on the public testnet","text":"<pre><code># Connect to the Portal testnet bootstrap nodes and enable the JSON-RPC APIs\n./build/fluffy --rpc\n</code></pre>"},{"location":"quick-start-windows.html#try-requesting-a-execution-layer-block-from-the-network","title":"Try requesting a execution layer block from the network","text":"<p>The Portal testnet is slowly being filled up with historical data through bridge nodes. Because of this, more recent history data is more likely to be available. This can be tested by using the <code>eth_getBlockByHash</code> JSON-RPC from the execution JSON-RPC API.</p> <pre><code># Get the hash of a block from your favorite block explorer, e.g.:\nBLOCKHASH=0x34eea44911b19f9aa8c72f69bdcbda3ed933b11a940511b6f3f58a87427231fb # Replace this to the block hash of your choice\n# Run this command to get the block:\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"eth_getBlockByHash\",\"params\":[\"'${BLOCKHASH}'\", true]}' http://localhost:8545 | jq\n</code></pre>"},{"location":"quick-start-windows.html#update-and-rebuild-the-fluffy-client","title":"Update and rebuild the Fluffy client","text":"<p>In order to stay up to date you can pull the latest version from our master branch. There are currently released versions tagged.</p> <pre><code># From the nimbus-eth1 repository\ngit pull\n# To bring the git submodules up to date\nmingw32-make update\n\nmingw32-make fluffy\n</code></pre>"},{"location":"quick-start.html","title":"Quick start - Linux/macOS","text":"<p>This page takes you through the steps of getting the Fluffy Portal node running on the public network.</p> <p>The guide assumes Linux or macOS is being used. For Windows users follow this tutorial.</p>"},{"location":"quick-start.html#steps","title":"Steps","text":""},{"location":"quick-start.html#prerequisites","title":"Prerequisites","text":"<ul> <li>Developer tools (C compiler, Make, Bash, CMake, Git 2.9.4 or newer)</li> </ul> <p>If you need help installing these tools, you can consult our prerequisites page.</p>"},{"location":"quick-start.html#build-the-fluffy-client","title":"Build the Fluffy client","text":"<pre><code>git clone git@github.com:status-im/nimbus-eth1.git\ncd nimbus-eth1\nmake fluffy\n\n# Test if binary was successfully build by running the help command.\n./build/fluffy --help\n</code></pre>"},{"location":"quick-start.html#run-a-fluffy-client-on-the-public-testnet","title":"Run a Fluffy client on the public testnet","text":"<pre><code># Connect to the Portal testnet bootstrap nodes and enable the JSON-RPC APIs\n./build/fluffy --rpc\n</code></pre>"},{"location":"quick-start.html#try-requesting-a-execution-layer-block-from-the-network","title":"Try requesting a execution layer block from the network","text":"<p>The Portal testnet is slowly being filled up with historical data through bridge nodes. Because of this, more recent history data is more likely to be available. This can be tested by using the <code>eth_getBlockByHash</code> JSON-RPC from the execution JSON-RPC API.</p> <pre><code># Get the hash of a block from your favorite block explorer, e.g.:\nBLOCKHASH=0x34eea44911b19f9aa8c72f69bdcbda3ed933b11a940511b6f3f58a87427231fb # Replace this to the block hash of your choice\n# Run this command to get the block:\ncurl -s -X POST -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"eth_getBlockByHash\",\"params\":[\"'${BLOCKHASH}'\", true]}' http://localhost:8545 | jq\n</code></pre>"},{"location":"quick-start.html#update-and-rebuild-the-fluffy-client","title":"Update and rebuild the Fluffy client","text":"<p>In order to stay up to date you can pull the latest version from our master branch. There are currently released versions tagged.</p> <pre><code># From the nimbus-eth1 repository\ngit pull\n# To bring the git submodules up to date\nmake update\n\nmake fluffy\n</code></pre>"},{"location":"run-local-testnet.html","title":"Running a local testnet","text":"<p>To easily start a local testnet you can use the <code>launch_local_testnet.sh</code> script. This script allows you to start <code>n</code> amount of nodes and then run several actions on them through the JSON-RPC API.</p>"},{"location":"run-local-testnet.html#run-the-local-testnet-script","title":"Run the local testnet script","text":"<pre><code># Run the script, default start 64 nodes\n./fluffy/scripts/launch_local_testnet.sh\n# Run the script with 16 nodes\n./fluffy/scripts/launch_local_testnet.sh -n 16\n\n# See the script help\n./fluffy/scripts/launch_local_testnet.sh --help\n</code></pre> <p>The nodes will be started and all nodes will use <code>node0</code> as bootstrap node.</p> <p>The <code>data-dir</code>s and logs of each node can be found in <code>./local_testnet_data/</code>.</p> <p>You can manually start extra nodes that connect to the network by providing any of the running nodes their ENR.</p> <p>E.g. to manually add a Fluffy node to the local testnet run:</p> <pre><code>./build/fluffy --rpc --portal-network:none --udp-port:9010 --nat:extip:127.0.0.1 --bootstrap-node:`cat ./local_testnet_data/node0/fluffy_node.enr`\n</code></pre>"},{"location":"test-suite.html","title":"Fluffy test suite","text":""},{"location":"test-suite.html#run-fluffy-test-suite","title":"Run Fluffy test suite","text":"<pre><code># From the nimbus-eth1 repository\nmake fluffy-test\n</code></pre>"},{"location":"test-suite.html#run-fluffy-local-testnet-script","title":"Run Fluffy local testnet script","text":"<pre><code>./fluffy/scripts/launch_local_testnet.sh --run-tests\n</code></pre> <p>Find more details on the usage and workings of the local testnet script here.</p>"},{"location":"testnet-beacon-network.html","title":"Testing beacon network on local testnet","text":"<p>This section explains how one can set up a local testnet together with a beacon network bridge in order to test if all nodes can do the beacon light client sync and stay up to date with the latest head of the chain.</p> <p>To accomodate this, the <code>launch_local_testnet.sh</code> script has the option to launch the Fluffy <code>portal_bridge</code> automatically and connect it to <code>node0</code> of the local tesnet.</p>"},{"location":"testnet-beacon-network.html#run-the-local-testnet-script-with-bridge","title":"Run the local testnet script with bridge","text":"<p>The <code>launch_local_testnet.sh</code> script must be launched with the <code>--trusted-block-root</code> cli option. The individual nodes will be started with this <code>trusted-block-root</code> and each node will try to start sync from this block root.</p> <p>Run the following command to launch the network with the <code>portal_bridge</code> activated for the beacon network.</p> <pre><code>TRUSTED_BLOCK_ROOT=0x1234567890123456789012345678901234567890123456789012345678901234 # Replace with trusted block root.\n\n# Run the script, start 8 nodes + portal_bridge\n./fluffy/scripts/launch_local_testnet.sh -n8 --trusted-block-root ${TRUSTED_BLOCK_ROOT} --portal-bridge\n</code></pre>"},{"location":"testnet-beacon-network.html#run-the-local-testnet-script-and-launch-the-bridge-manually","title":"Run the local testnet script and launch the bridge manually","text":"<p>To have control over when to start or restart the <code>portal_bridge</code> on can also control the bridge manually, e.g. start the testnet:</p> <pre><code>TRUSTED_BLOCK_ROOT=0x1234567890123456789012345678901234567890123456789012345678901234 # Replace with trusted block root.\n\n# Run the script, start 8 nodes\n./fluffy/scripts/launch_local_testnet.sh -n8 --trusted-block-root ${TRUSTED_BLOCK_ROOT}\n</code></pre> <p>Next, build and run the <code>portal_bridge</code> for the beacon network:</p> <pre><code>make portal_bridge\n\n# --rpc-port 10000 = default node0\n# --rest-url = access to beacon node API, default http://127.0.0.1:5052\n./build/portal_bridge beacon --trusted-block-root:${TRUSTED_BLOCK_ROOT} --rest-url:http://127.0.0.1:5052 --backfill-amount:128 --rpc-port:10000\n</code></pre>"},{"location":"testnet-history-network.html","title":"Testing history network on local testnet","text":"<p>There is an automated test for the Portal history network integrated in the <code>launch_local_testnet.sh</code> script.</p> <p>The <code>test_portal_testnet</code> binary can be run from within this script and do a set of actions on the nodes through the JSON-RPC API. When that is finished, all nodes will be killed.</p>"},{"location":"testnet-history-network.html#run-the-local-testnet-script-with-history-network-test","title":"Run the local testnet script with history network test","text":"<pre><code># Run the script, default start 64 nodes and run history tests\n./fluffy/scripts/launch_local_testnet.sh --run-tests\n</code></pre>"},{"location":"testnet-history-network.html#details-of-the-test_portal_testnet-test","title":"Details of the <code>test_portal_testnet</code> test","text":""},{"location":"testnet-history-network.html#initial-set-up","title":"Initial set-up","text":"<p>Following initial steps are done to set up the Discovery v5 network and the Portal networks:</p> <ol> <li>Nodes join the network by providing them all with one and the same bootstrap node at start-up.</li> <li>Attempt to add the ENRs of all the nodes to each node its routing table: This is done in order to quickly simulate a network that has all the nodes propagated around. The JSON-RPC <code>portal_historyAddEnr</code> is used for this.</li> <li>Select, at random, a node id of one of the nodes. Let every node do a lookup for this node id. This is done to validate that every node can successfully lookup a specific node in the DHT.</li> </ol>"},{"location":"testnet-history-network.html#data-propagation-test","title":"Data propagation test","text":"<p>How the content gets shared around and tested:</p> <ol> <li>First one node (the bootstrap node) will get triggered by a JSON-RPC call to load a data file that contains <code>n</code> amount of blocks ( <code>[header, [txs, uncles], receipts]</code>), and to propagate these over the network. This is done by doing offer request to <code>x</code> (= 8) neighbours to that content its id. This is practically the neighborhood gossip at work, but then initiated on the read of each block in the provided data file.</li> <li>Next, the nodes that accepted and received the content will do the same neighborhood gossip mechanism with the received content. And so on, until no node accepts any offers any more and the gossip dies out. This should propagate the content to (all) the neighbours of that content. TODO: This depends on the radii set and the amount of nodes in the network. More starter nodes are required to propagate with nodes at lower radii.</li> <li>The test binary will then read the same data file and, for each block hash, an <code>eth_getBlockByHash</code> JSON-RPC request is done to each node. A node will either load the block header or body from its own database or do a content lookup and retrieve them from the network, this depends on its own node id and radius. Following checks are currently done on the response:<ul> <li>Check if the block header and body content was found.</li> <li>The hash in returned data of eth_getBlockByHash matches the requested one.</li> <li>In case the block has transactions, check the block hash in the transaction object.</li> </ul> </li> </ol>"},{"location":"upgrade.html","title":"Upgrade","text":"<p>To upgrade to the latest version you need to update the nimbus-eth1 repository and re-compile Fluffy.</p> <p>Note</p> <p>In this state of development there are no official releases yet nor git tags for different versions.</p>"},{"location":"upgrade.html#upgrade-to-the-latest-version","title":"Upgrade to the latest version","text":"<p>Upgrading Fluffy when built from source is similar to the installation process.</p> <p>Run:</p> <pre><code># Download the updated source code\ngit pull &amp;&amp; make update\n\n# Build Fluffy from the newly updated source\nmake -j4 fluffy\n</code></pre> <p>Complete the upgrade by restarting the node.</p> <p>Tip</p> <p>To check which version of Fluffy you're currently running, run <code>./build/fluffy --version</code></p>"}]}