- Increase upload/download benchmark iterations. The value of `10` is still conservative due to slow rust-libp2p TCP; see https://github.com/libp2p/rust-yamux/issues/162. Note, however, that this amounts to 10 uploads of 100 MB each.
- Increase latency benchmark iterations.
* feat: run multidim-interop.yml on self-hosted runners
* feat: make worker count configurable
* feat: enable docker.io proxy on self-hosted runners
* chore: provide s3 creds through env
* Try 4 workers
* ci: run-multidim-interop on 16 workers
* chore: change self hosted labels
* Update .github/workflows/multidim-interop.yml
Co-authored-by: Piotr Galar <piotr.galar@gmail.com>
* Use self-hosted runners for now
This reverts commit 7e868ea6b486798c03ad5b88f14c1a78e51e7c26.
---------
Co-authored-by: Marco Munizaga <git@marcopolo.io>
Co-authored-by: Marco Munizaga <marco@marcopolo.io>
* fix(interop): use standaloneTransports as string instead of param
When generating the test specification (the docker-compose file) we enumerate all
transport permutations. While some of our transports need to be upgraded (e.g.
TCP and WS), others don't (e.g. QUIC or WebRTC). To differentiate the two we
execute two SQL queries (`queryResults` and `standaloneTransportsQueryResults`),
each with either an `AND _ NOT IN` or an `AND _ IN` clause.
The previous implementation would use `node-sqlite3` query parameters (see
https://github.com/TryGhost/node-sqlite3/wiki/API#runsql--param---callback) to
provide the list of transport protocols that don't need upgrading as an array.
The problem is that `node-sqlite3` does not support arrays as query parameters;
see https://github.com/TryGhost/node-sqlite3/issues/762.
This commit uses string interpolation instead to inject the list of standalone
transports into the SQL queries.
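For illustration, a minimal sketch of the interpolation approach, assuming a hypothetical `transports` table and made-up transport names (the real generator queries are more involved):

```typescript
import sqlite3 from "sqlite3";

// Transports that don't need a muxer/security upgrade (illustrative values).
const standaloneTransports = ["quic-v1", "webtransport", "webrtc"];

// node-sqlite3 cannot bind an array to a single `?` placeholder
// (https://github.com/TryGhost/node-sqlite3/issues/762), so the known, hard-coded
// list is interpolated directly into the SQL text instead.
const standaloneList = standaloneTransports.map((t) => `'${t}'`).join(", ");

const db = new sqlite3.Database(":memory:");

db.serialize(() => {
  db.run("CREATE TABLE transports (name TEXT)");
  db.run("INSERT INTO transports VALUES ('tcp'), ('ws'), ('quic-v1'), ('webrtc')");

  // Permutations that need upgrading (e.g. TCP, WS).
  db.all(`SELECT name FROM transports WHERE name NOT IN (${standaloneList})`, (err, rows) => {
    if (err) throw err;
    console.log("upgradeable:", rows);
  });

  // Standalone permutations (e.g. QUIC, WebRTC).
  db.all(`SELECT name FROM transports WHERE name IN (${standaloneList})`, (err, rows) => {
    if (err) throw err;
    console.log("standalone:", rows);
  });
});
```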
* Nit
---------
Co-authored-by: Marco Munizaga <git@marcopolo.io>
Co-authored-by: Marco Munizaga <marco@marcopolo.io>
- When experimenting with increasing iterations, instances are taken down
prematurely. See e.g. https://github.com/libp2p/test-plans/actions/runs/5418669631/jobs/9850992882
- When benchmarking from a local developer machine, experimenting for longer is useful.
- The AWS Lambda is only a safety-net mechanism. In the rare case where we
fail to clean up, running the machines for an extra hour or two doesn't matter cost-wise.
This project includes the following components:
- `terraform/`: Terraform scripts to provision the infrastructure
- `impl/`: implementations of the [libp2p perf
protocol](https://github.com/libp2p/specs/blob/master/perf/perf.md) running on
top of e.g. go-libp2p, rust-libp2p or Go's standard-library HTTPS stack (see the
sketch after this list)
- `runner/`: a set of scripts building and running the above implementations on
the above infrastructure, reporting the results in `benchmark-results.json`
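The perf protocol linked above is intentionally simple: the client opens a stream, sends the desired download size as a big-endian uint64, uploads its payload, half-closes its write side, and then reads back the requested number of bytes. A rough TypeScript sketch of that interaction, using a plain TCP socket and made-up address/sizes in place of a negotiated libp2p stream:

```typescript
import { connect } from "node:net";

// Hypothetical sizes and address, purely for illustration.
const UPLOAD_BYTES = 10 * 1024 * 1024;
const DOWNLOAD_BYTES = 10 * 1024 * 1024;

const socket = connect({ host: "127.0.0.1", port: 4001 }, () => {
  const start = Date.now();

  // 1. Tell the server how many bytes to send back: a big-endian uint64.
  const header = Buffer.alloc(8);
  header.writeBigUInt64BE(BigInt(DOWNLOAD_BYTES));
  socket.write(header);

  // 2. Upload the payload, then half-close the write side to signal we're done.
  socket.write(Buffer.alloc(UPLOAD_BYTES));
  socket.end();

  // 3. Count the bytes the server sends back and report the total duration.
  let received = 0;
  socket.on("data", (chunk) => { received += chunk.length; });
  socket.on("close", () => {
    const seconds = (Date.now() - start) / 1000;
    console.log(`up ${UPLOAD_BYTES} B, down ${received} B in ${seconds.toFixed(2)} s`);
  });
});
```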
Benchmark results can be visualized with
https://observablehq.com/@mxinden-workspace/libp2p-performance-dashboard.
Co-authored-by: Marco Munizaga <git@marcopolo.io>
Co-authored-by: Marten Seemann <martenseemann@gmail.com>
Co-authored-by: Piotr Galar <piotr.galar@gmail.com>