Infrastructure for Nimbus cluster https://nimbus.team

# Description

This repo defines the Nimbus cluster infrastructure.

# Endpoints

These are Beacon API endpoints intended for community testing.

| Endpoint                                         | Host                                 |
|--------------------------------------------------|--------------------------------------|
| http://unstable.mainnet.beacon-api.nimbus.team/  | `linux-01.ih-eu-mda1.nimbus.mainnet` |
| http://testing.mainnet.beacon-api.nimbus.team/   | `linux-02.ih-eu-mda1.nimbus.mainnet` |
| http://unstable.prater.beacon-api.nimbus.team/   | `linux-01.ih-eu-mda1.nimbus.prater`  |
| http://testing.prater.beacon-api.nimbus.team/    | `linux-02.he-eu-hel1.nimbus.prater`  |
| http://unstable.sepolia.beacon-api.nimbus.team/  | `linux-02.ih-eu-mda1.nimbus.sepolia` |
| http://testing.holesky.beacon-api.nimbus.team/   | `geth-01.ih-eu-mda1.nimbus.holesky`  |
| http://unstable.holesky.beacon-api.nimbus.team/  | `geth-02.ih-eu-mda1.nimbus.holesky`  |

These nodes have no validators attached.
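
These endpoints expose the standard Beacon API, so they can be exercised with plain HTTP. A minimal sketch using curl, assuming the standard `/eth/v1/node/*` routes are enabled on these hosts (`jq` is only used for pretty-printing):

```sh
# Ask the unstable mainnet endpoint which client version it runs.
curl -s http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/node/version | jq

# Check sync status; "is_syncing": false means the node is at the chain head.
curl -s http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/node/syncing | jq
```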

There are also archives of ERA files:

| Endpoint                          | Host                                 |
|-----------------------------------|--------------------------------------|
| https://mainnet.era.nimbus.team/  | `linux-03.ih-eu-mda1.nimbus.mainnet` |
| https://prater.era.nimbus.team/   | `linux-01.ih-eu-mda1.nimbus.prater`  |
| https://sepolia.era.nimbus.team/  | `linux-01.ih-eu-mda1.nimbus.sepolia` |
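
Era files are plain static files served over HTTPS, so any HTTP client works. A minimal sketch, assuming the archives expose a directory index and use the usual `<network>-<era>-<short-root>.era` naming (the exact file name below is only an illustrative example):

```sh
# Browse the available era files (assumes the server returns a directory listing).
curl -s https://mainnet.era.nimbus.team/

# Download a single era file; the name is an illustrative example, not guaranteed to exist.
curl -O https://mainnet.era.nimbus.team/mainnet-00000-4b363db9.era
```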

# Dashboards

There's a dedicated Kibana dashboard for Nimbus fleet logs: https://nimbus-logs.infra.status.im/

There are also explorers available for the various testnets.

# Fleet Layouts

The fleet layout configuration used by Ansible can be found in `ansible/vars/layout`.

To find which host holds which validator, use the TSV files in `ansible/files/layout`.
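
For example, a quick way to locate a validator is to grep those TSV files for its public key. A minimal sketch, assuming the layout files list one validator public key per row (the key prefix below is a placeholder):

```sh
# Find which layout file, and therefore which host, lists a given validator.
# The public key prefix is a placeholder.
grep -rn '0x8000abcd' ansible/files/layout/
```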

# Bootstrap Nodes

Some nodes in this repo are used as bootstrap nodes for testnets and mainnet.

Currently this includes:

| Host                                            | IP           |
|-------------------------------------------------|--------------|
| `bootstrap-01.aws-eu-central-1a.nimbus.mainnet` | 3.120.104.18 |
| `bootstrap-02.aws-eu-central-1a.nimbus.mainnet` | 3.64.117.223 |

They are recorded in the eth2-networks repository.
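
A rough reachability check against a bootstrap node can be done with netcat, assuming it listens on the default Nimbus peer-to-peer port 9000 (the port is an assumption, not stated in this README):

```sh
# TCP probe of a mainnet bootstrap node (assumes the default port 9000).
nc -vz 3.120.104.18 9000

# discv5 discovery runs over UDP; -u probes the UDP port instead (results can be inconclusive).
nc -vzu 3.120.104.18 9000
```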

# Repo Usage

The simplest way to run commands across a fleet, if you have SSH access, is the `foreach.sh` script:

```sh
 > ./foreach.sh nimbus-mainnet-small "sudo systemctl --no-block restart 'build-beacon-node-*'"
stable-small-01.aws-eu-central-1a.nimbus.mainnet
stable-small-02.aws-eu-central-1a.nimbus.mainnet
```
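
The same script works for any read-only check across a fleet. A couple of hedged examples (the service name pattern and data mount point below are assumptions, not taken from this repo):

```sh
# Show the state of the beacon node services on every host in the fleet.
./foreach.sh nimbus-mainnet-small "systemctl list-units 'beacon-node-*' --no-pager"

# Check free disk space on the data volume (mount point is an assumption).
./foreach.sh nimbus-mainnet-small "df -h /data"
```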

For more details, read the Infra Repo Usage doc.