libp2p performance benchmarking

This project includes the following components:

  • terraform/: Terraform scripts to provision infrastructure
  • impl/: implementations of the libp2p perf protocol running on top of e.g. go-libp2p, rust-libp2p, or Go's standard-library HTTPS stack
  • runner/: scripts that build and run the above implementations on the above infrastructure, reporting the results in benchmark-results.json

Benchmark results can be visualized with https://observablehq.com/@libp2p-workspace/performance-dashboard.

Running via GitHub Action

  1. Create a pull request with your changes on https://github.com/libp2p/test-plans/.
  2. Trigger the GitHub Action for your branch on https://github.com/libp2p/test-plans/actions/workflows/perf.yml (see the Run workflow button).
  3. Wait for the action run to finish and push a commit to your branch.
  4. Visualize results on https://observablehq.com/@libp2p-workspace/performance-dashboard.

Running with Terraform on AWS manually

Prerequisites

  • An AWS account with credentials configured locally
  • terraform installed
  • Node.js and npm installed (for the runner)

Provision infrastructure

  1. Save your public SSH key as the file ./terraform/modules/short_lived/files/perf.pub; or generate a new key pair with make ssh-keygen and add it to your SSH agent with make ssh-add.
  2. cd terraform/configs/local
  3. terraform init
  4. terraform apply
  5. CLIENT_IP=$(terraform output -raw client_ip)
  6. SERVER_IP=$(terraform output -raw server_ip)
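The steps above can be run end to end as follows (a sketch, assuming default variable values and the repository root as starting directory):

```shell
# Generate and load a throwaway SSH key pair (or copy an existing public
# key to terraform/modules/short_lived/files/perf.pub instead).
make ssh-keygen
make ssh-add

cd terraform/configs/local
terraform init       # download providers and modules
terraform apply      # provision the client and server instances

# Capture the public IPs for the runner step below.
CLIENT_IP=$(terraform output -raw client_ip)
SERVER_IP=$(terraform output -raw server_ip)
```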

Notes

  • You may need to reset the infrastructure if you encounter any errors; you can do that by running terraform destroy followed by terraform apply.

  • While running terraform you may encounter the following error:

      Error: collecting instance settings: reading EC2 Launch Template versions: couldn't find resource
      │
      │   with module.short_lived_server[0].aws_instance.perf,
      │   on ../../modules/short_lived/main.tf line 15, in resource "aws_instance" "perf":
      │   15: resource "aws_instance" "perf" {
    
  • If you set the TF_VAR_long_lived_enabled environment variable to true, Terraform will spin up the long-lived resources that are required for the short-lived resources to be created.
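For example (a sketch; the variable is passed via Terraform's standard TF_VAR_ environment-variable convention):

```shell
cd terraform/configs/local
# Create the long-lived resources first, then the short-lived ones.
TF_VAR_long_lived_enabled=true terraform apply
```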

  • It's best to destroy the infrastructure after you're done with your testing; you can do that by running terraform destroy.

Build and run libp2p implementations

Once you have provisioned your infrastructure, you can build and run the libp2p implementations on the AWS instances.

  1. cd runner
  2. npm ci
  3. npm run start -- --client-public-ip $CLIENT_IP --server-public-ip $SERVER_IP
  • Note: perf runs 10 iterations by default; set a different count with the --iterations <value> option.
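Putting the runner steps together (a sketch; the IP variables are assumed to have been captured from terraform output, and 25 iterations is an arbitrary example value):

```shell
cd runner
npm ci
npm run start -- \
  --client-public-ip "$CLIENT_IP" \
  --server-public-ip "$SERVER_IP" \
  --iterations 25
```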

Deprovision infrastructure

  1. cd terraform/configs/local
  2. terraform destroy

Adding a new implementation or a new version

  1. Add the implementation in a new subdirectory under impl/.
    • For a new implementation, create a folder impl/<your-implementation-name>/, e.g. impl/go-libp2p/.
    • For a new version of an existing implementation, create a folder impl/<your-implementation-name>/<your-implementation-version>/.
    • In that folder include a Makefile that builds an executable and stores it next to the Makefile under the name perf.
    • Requirements for the executable:
      • Running as a libp2p-perf server:
        • The perf server must not exit; it will be closed by the test runner.
        • The executable must accept the command-line flag --run-server, which indicates it's running as a server.
      • Running as a libp2p-perf client:
        • Given that perf is a client-driven set of benchmarks, performance is measured by the client.
          • Input via command line
            • --server-address
            • --transport (see runner/versions.ts for possible variants)
            • --upload-bytes: the number of bytes to upload per stream.
            • --download-bytes: the number of bytes to download per stream.
          • Output
            • Logging MUST go to stderr.
            • Measurement output is printed to stdout as JSON in the form of:
              {"latency": 0.246442851}
              
              Note that the measurement includes the time to (1) establish the connection, (2) upload the bytes and (3) download the bytes.
  2. For a new implementation, include it in the all target of impl/Makefile.
  3. For a new version, reference the version in runner/src/versions.ts.
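The stdout/stderr contract above can be exercised without a real binary. The sketch below uses a hypothetical fake_perf function as a stand-in for a perf client executable: it logs to stderr and prints the single JSON measurement to stdout, from which the latency field is extracted.

```shell
# Hypothetical stand-in for a real perf client binary: logs go to
# stderr, the JSON measurement goes to stdout.
fake_perf() {
  echo "establishing connection..." >&2
  echo '{"latency": 0.246442851}'
}

# Discard the logs, keep the measurement, and pull out the latency.
latency=$(fake_perf 2>/dev/null | python3 -c 'import json, sys; print(json.load(sys.stdin)["latency"])')
echo "latency: ${latency}s"
```

Because logging is confined to stderr, the runner can parse stdout as JSON without any filtering.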