JSON batch generation, Kurtosis batch execution tools (#1)
* added CLI options with appropriate defaults
* network generation done
* README, refactoring, and partition update
* added gen_jsons.sh and updated the README
* added custom main.star and updated the README
* added run_kurtosis_tests.sh and updated the README
This commit is contained in:
parent f34b3984c5
commit ac4c245ef7
.gitignore
@ -0,0 +1,10 @@
# gvim
*.swp
*.swo
*~

# local json
Topology.*
topology.*
*.json
*.png
README.md
@ -0,0 +1,45 @@
This repo contains the scripts to generate different network models for wakukurtosis runs.

## run_kurtosis_tests.sh

run_kurtosis_tests.sh runs Kurtosis on a set of JSON files in a directory. It requires two arguments: first, a directory containing JSON files (other file types in the directory are ignored); second, the GitHub root/prefix of the Kurtosis module you run the tests under.

> usage: ./run_kurtosis_tests.sh <input_dir> <repo_prefix>

Running this script takes some setup, so follow these instructions to the letter. You **WILL** require the main.star provided here. This main.star reads an input JSON file and instantiates Waku nodes accordingly. The run is repeated for each input JSON file under the specified directory.

#### step 0)
Symlink run_kurtosis_tests.sh into the root directory of your Kurtosis module.
#### step 1)
Back up your Kurtosis module's own main.star, then copy the main.star provided here into the root directory of your Kurtosis module.
!!! WARNING: symlinking the main.star will NOT work !!!
#### step 2)
Put all the JSON files you want to use in a directory. Call it *Foo*.
#### step 3)
Copy the *Foo* directory to the root of your Kurtosis module.
!!! WARNING: symlinking the directory will NOT work !!!
#### step 4)
Run this script in the root directory of the Kurtosis module, providing the directory (*Foo*) and the GitHub root/prefix of the Kurtosis module as arguments.

## gen_jsons.sh

gen_jsons.sh generates a given number of Waku networks and outputs them to a directory. Make sure the output directory exists; both relative and absolute paths work. The Waku node parameters are generated at random; edit MIN and MAX for finer control. The script requires bc and /dev/urandom.

> usage: ./gen_jsons.sh <output_dir> <#json files needed>

## generate_network.py

generate_network.py generates networks with a specified number of nodes and topics. The only network type currently supported is "configuration_model"; more are on the way. Use with Python 3.

> usage: generate_network [-h] [-o <file_name>] [-n <#nodes>] [-t <#topics>] [-T <type>] [-p <#partitions>]
>
> Generates and outputs the Waku network conforming to input parameters
>
> optional arguments:
> -h, --help — show this help message and exit
> -o <file_name>, --output <file_name> — output json filename for the Waku network
> -n <#nodes>, --numnodes <#nodes> — number of nodes in the Waku network
> -t <#topics>, --numtopics <#topics> — number of topics in the Waku network
> -T <type>, --type <type> — network type for the Waku network
> -p <#partitions>, --numparts <#partitions> — number of partitions in the Waku network
>
> The defaults are: -o "Topology.json"; -n 1; -t 1; -p 1; -T "configuration_model"
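All three tools above read or write the same per-node JSON shape (`ports-shift`, `topics`, `static-nodes`). As a minimal stdlib-only sketch of that shape, here is how such a record could be built by hand; the three-node adjacency and topic assignments are made up purely for illustration:

```python
import json

# Hypothetical tiny network; real runs derive this from a networkx graph.
edges = {"waku_0": ["waku_1", "waku_2"], "waku_1": ["waku_0"], "waku_2": ["waku_0"]}
topics = {"waku_0": ["topic_A"], "waku_1": ["topic_A", "topic_B"], "waku_2": ["topic_B"]}

data_to_dump = {}
for shift, node in enumerate(edges):
    data_to_dump[node] = {
        "ports-shift": shift,         # unique port offset per container
        "topics": topics[node],       # topics this node subscribes to
        "static-nodes": edges[node],  # peers this node connects to statically
    }

print(json.dumps(data_to_dump, indent=2))
```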
@ -1,57 +0,0 @@
import matplotlib.pyplot
import networkx as nx
import random
import json

# Do we want a single graph, or can we have different hubs?
# We don't have a power-law distribution, right?
# sequence = nx.random_powerlaw_tree_sequence(10, tries=5000)

name = "waku_"
node_number = 0

ports_shifted = 0

shared_topic = "test"
nodes_to_instantiate = 50

data_to_dump = {}

degrees = [random.randint(1, 9) for i in range(nodes_to_instantiate)]

# Sanity check, as the degree sum must be even
if sum(degrees) % 2 != 0:
    degrees[-1] += 1

# https://networkx.org/documentation/stable/reference/generated/networkx.generators.degree_seq.configuration_model.html
G = nx.configuration_model(degrees)
# Create it as a normal graph instead of a multigraph (without parallel edges)
G = nx.Graph(G)
# Remove self-loops
G.remove_edges_from(nx.selfloop_edges(G))

mapping = {}
for i in range(nodes_to_instantiate):
    mapping[i] = name + str(node_number)
    node_number += 1

# Label nodes to match the waku containers
H = nx.relabel_nodes(G, mapping)

# Add the per-node information to the data
for node in H.nodes:
    data_to_dump[node] = {}
    data_to_dump[node]["ports-shift"] = ports_shifted
    ports_shifted += 1
    data_to_dump[node]["topics"] = shared_topic
    data_to_dump[node]["static-nodes"] = []
    for edge in H.edges(node):
        data_to_dump[node]["static-nodes"].append(edge[1])

with open('topology.json', 'w') as f:
    json.dump(data_to_dump, f)

nx.draw(H, pos=nx.kamada_kawai_layout(H), with_labels=True)
matplotlib.pyplot.show()
matplotlib.pyplot.savefig("topology.png", format="PNG")
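The deleted script above (and its replacement, generate_network.py) bumps the last degree when the sequence sums to an odd number: in the configuration model every edge contributes exactly 2 to the total degree, so no graph exists for an odd sum. A small stdlib-only sketch of that parity fix (the seed is arbitrary, chosen only to make the sketch reproducible):

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible
degrees = [random.randint(1, 9) for _ in range(50)]

# Each edge contributes 2 to the total degree, so the sum must be even;
# the scripts fix an odd sum by bumping the last entry by one.
if sum(degrees) % 2 != 0:
    degrees[-1] += 1

print(sum(degrees) % 2)  # always 0 after the fix
```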
gen_jsons.sh
@ -0,0 +1,45 @@
#!/bin/sh

# MAX and MIN for topics and num nodes
MIN=5
MAX=100

# requires bc; the result is stored in $rnd
# (avoid the name RANDOM, which bash treats specially)
getrand(){
    orig=$(od -An -N1 -i /dev/urandom)
    rnd=$(echo "$MIN + ($orig % ($MAX - $MIN + 1))" | bc)
}

# alternative: return the value (shell return codes are limited to 0-255)
getrand1(){
    orig=$(od -An -N1 -i /dev/urandom)
    range=$(echo "$MIN + ($orig % ($MAX - $MIN + 1))" | bc)
    return $range
    # getrand1  # call the function and use the return value
    # n=$?
}

if [ "$#" -ne 2 ] || [ "$2" -le 0 ] || ! [ -d "$1" ]; then
    echo "usage: $0 <output dir> <#json files needed>" >&2
    exit 1
fi

path=$1
nfiles=$2

echo "Ok, will generate $nfiles networks & put them under '$path'."

prefix=$path"/WakuNet_"
suffix=".json"

for i in $(seq "$nfiles")
do
    getrand
    n=$((rnd+1))
    getrand
    t=$((rnd+1))
    fname=$prefix$i$suffix
    nwtype="configuration_model"
    ./generate_network.py -n "$n" -t "$t" -T "$nwtype" -o "$fname"
    printf '#%s\tn=%s\tt=%s\tT=%s\to=%s\n' "$i" "$n" "$t" "$nwtype" "$fname"
done
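getrand maps one byte of /dev/urandom (0-255) into [MIN, MAX] with `MIN + (b % (MAX - MIN + 1))`. A Python check of the bounds, mirroring the shell arithmetic:

```python
MIN, MAX = 5, 100

def to_range(byte_val, lo=MIN, hi=MAX):
    # mirrors the shell arithmetic: lo + (b % (hi - lo + 1))
    return lo + (byte_val % (hi - lo + 1))

# A single byte spans 0..255, so every mapped value stays within [lo, hi]
vals = [to_range(b) for b in range(256)]
print(min(vals), max(vals))  # → 5 100
```

Note the slight modulo bias: 256 is not a multiple of the range size 96, so low values are marginally more likely; harmless for generating test networks.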
generate_network.py
@ -0,0 +1,155 @@
#!/usr/bin/env python3

import matplotlib.pyplot as mp
import networkx as nx
import networkx.readwrite.json_graph
import random, math
import json
import argparse, sys


def write_json(filename, data_2_dump):
    json.dump(data_2_dump, open(filename, 'w'), indent=2)


# has trouble with non-integer/non-hashable keys
def read_json(filename):
    with open(filename) as f:
        jdata = json.load(f)
    return nx.node_link_graph(jdata)


def draw(H):
    nx.draw(H, pos=nx.kamada_kawai_layout(H), with_labels=True)
    mp.show()
    mp.savefig("topology.png", format="PNG")


def init_arg_parser():
    # Initialize the parser, add arguments and set the defaults
    parser = argparse.ArgumentParser(
        prog='generate_network',
        description='''Generates and outputs
                       the Waku network conforming to input parameters''',
        epilog='''The defaults are: -o "Topology.json";
                  -n 1; -t 1; -p 1; -T "configuration_model"''')
    parser.add_argument("-o", "--output",
                        default='Topology.json', dest='fname',
                        help='output json filename for the Waku network',
                        type=str, metavar='<file_name>')
    parser.add_argument("-n", "--numnodes",
                        default=1, dest='num_nodes',
                        help='number of nodes in the Waku network',
                        type=int, metavar='<#nodes>')
    parser.add_argument("-t", "--numtopics",
                        default=1, dest='num_topics',
                        help='number of topics in the Waku network',
                        type=int, metavar='<#topics>')
    parser.add_argument("-T", "--type",
                        default="configuration_model", dest='nw_type',
                        help='network type of the Waku network',
                        type=str, metavar='<type>')
    parser.add_argument("-p", "--numparts",
                        default=1, dest='num_partitions',
                        help='the number of partitions in the Waku network',
                        type=int, metavar='<#partitions>')
    # parser.add_argument("-e", "--numedges",
    #                     default=1, dest='num_edges',
    #                     help='the number of edges in the Waku network',
    #                     type=int, metavar='<#edges>')
    return parser


# https://networkx.org/documentation/stable/reference/generated/networkx.generators.degree_seq.configuration_model.html
def generate_config_model(n):
    # degrees = nx.random_powerlaw_tree_sequence(n, tries=10000)
    degrees = [random.randint(1, n) for i in range(n)]
    if sum(degrees) % 2 != 0:  # adjust the degree sum to be even
        degrees[-1] += 1
    G = nx.configuration_model(degrees)  # generate the graph
    return G


def generate_topic_string(n):
    rs = ""
    for _ in range(n):
        r = random.randint(65, 65 + 26 - 1)  # only uppercase letters
        rs += chr(r)  # append the generated char
    return rs


def generate_topics(num_topics):
    # generate the topics - uppercase chars prefixed by "topic_"
    base = 26
    topic_len = int(math.log(num_topics) / math.log(base)) + 1
    topics = {}
    for i in range(num_topics):
        topics[i] = "topic_" + generate_topic_string(topic_len)
    return topics


def get_random_sublist(topics):
    n = len(topics)
    lo = random.randint(0, n - 1)
    hi = random.randint(lo + 1, n)
    sublist = []
    for i in range(lo, hi):
        sublist.append(topics[i])
    return sublist


def generate_network(num_nodes, prefix):
    G = nx.empty_graph()
    if nw_type == "configuration_model":
        G = generate_config_model(num_nodes)
    else:
        print(nw_type + ": Unsupported network type")
        sys.exit(1)
    H = postprocess_network(G, prefix)
    return H


# used by generate_dump_data - *ought* to be global for handling partitions
ports_shifted = 0


def postprocess_network(G, prefix):
    G = nx.Graph(G)  # prune out parallel/multi edges
    G.remove_edges_from(nx.selfloop_edges(G))  # remove self-loops
    # label nodes to match the waku containers
    mapping = {}
    for i in range(num_nodes):
        mapping[i] = prefix + str(i)
    return nx.relabel_nodes(G, mapping)


def generate_dump_data(H, topics):
    data_to_dump = {}
    global ports_shifted
    for node in H.nodes:
        data_to_dump[node] = {}
        data_to_dump[node]["ports-shift"] = ports_shifted
        ports_shifted += 1
        data_to_dump[node]["topics"] = get_random_sublist(topics)
        data_to_dump[node]["static-nodes"] = []
        for edge in H.edges(node):
            data_to_dump[node]["static-nodes"].append(edge[1])
    return data_to_dump


# extract the CLI arguments
args = init_arg_parser().parse_args()

# parameters to generate the network
fname = args.fname
num_nodes = args.num_nodes
num_topics = args.num_topics
nw_type = args.nw_type
prefix = "waku_"
num_partitions = args.num_partitions
# num_edges = args.num_edges  ## do we need to control #edges?

if num_partitions > 1:
    print("-p", num_partitions, ": Sorry, we do not yet support partitions")
    sys.exit(1)

# Generate the network and postprocess it
H = generate_network(num_nodes, prefix)

# generate the topics
topics = generate_topics(num_topics)

# Generate the dump data
dump_data = generate_dump_data(H, topics)

# dump the network to the json file
write_json(fname, dump_data)
# draw(H)
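generate_topics sizes each topic name so that base-26 uppercase strings can index all requested topics: `int(log(num_topics) / log(26)) + 1`. A small sketch of that length formula in isolation:

```python
import math

def topic_len(num_topics, base=26):
    # Shortest uppercase-string length used by generate_topics():
    # floor(log_base(num_topics)) + 1
    return int(math.log(num_topics) / math.log(base)) + 1

print(topic_len(1), topic_len(26), topic_len(27))  # → 1 2 2
```

Note the formula already yields length 2 at exactly 26 topics (rather than 1); since names are drawn at random rather than enumerated, the extra character only reduces the chance of duplicate topic names.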
main.star
@ -0,0 +1,41 @@
IMAGE_NAME = "statusteam/nim-waku:deploy-status-prod"

# Waku RPC port
RPC_PORT_ID = "rpc"
RPC_TCP_PORT = 8545

# Waku metrics port
PROMETHEUS_PORT_ID = "prometheus"
PROMETHEUS_TCP_PORT = 8008

GET_WAKU_INFO_METHOD = "get_waku_v2_debug_v1_info"
CONNECT_TO_PEER_METHOD = "post_waku_v2_admin_v1_peers"

def run(args):
    # in case you want to run each json separately, follow this cmd line arg format for main.star:
    # kurtosis run . --args '{"json_nw_name": "github.com/user/kurto-module/json_dir/abc.json"}'
    json_loc = args.json_nw_name
    file_contents = read_file(json_loc)
    #print(file_contents)
    decoded = json.decode(file_contents)
    services = {}

    # Bring up all the waku nodes
    for wakunode_name in decoded.keys():
        waku_service = add_service(
            service_id=wakunode_name,
            config=struct(
                image=IMAGE_NAME,
                ports={
                    RPC_PORT_ID: struct(number=RPC_TCP_PORT, protocol="TCP"),
                    PROMETHEUS_PORT_ID: struct(number=PROMETHEUS_TCP_PORT, protocol="TCP")
                },
                entrypoint=[
                    "/usr/bin/wakunode", "--rpc-address=0.0.0.0", "--metrics-server-address=0.0.0.0"
                ],
                cmd=[
                    "--topics='" + " ".join(decoded[wakunode_name]["topics"]) + "'",
                    "--rpc-admin=true", "--keep-alive=true", "--metrics-server=true",
                ]
            )
        )
        services[wakunode_name] = waku_service
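The only non-trivial piece of command construction above is the `--topics` flag, which joins a node's topic list into one space-separated, single-quoted value. A Python sketch of that flag construction (the per-node record here is a made-up example of what the network JSON contains):

```python
# Hypothetical per-node record as read from the network JSON
node = {"topics": ["topic_A", "topic_B"], "ports-shift": 0, "static-nodes": []}

# main.star joins the topic list into a single space-separated flag
topics_flag = "--topics='" + " ".join(node["topics"]) + "'"
cmd = [topics_flag, "--rpc-admin=true", "--keep-alive=true", "--metrics-server=true"]
print(cmd[0])  # → --topics='topic_A topic_B'
```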
run_kurtosis_tests.sh
@ -0,0 +1,35 @@
#!/bin/sh

# -> symlink - ln -s source/dir .

# step 0)
# symlink this script and the main.star to the root of your kurtosis module.
#
# step 1)
# put the json files you want to run kurtosis on in a directory
#
# step 2)
# copy that entire directory to the root of your kurtosis module
# !!! WARNING: symlinking the directory will NOT work !!!
#
# step 3)
# run this script in the root dir of your kurtosis module

if [ "$#" -ne 2 ] || ! [ -d "$1" ]; then
    echo "usage: $0 <input dir> <repo prefix>" >&2
    exit 1
fi

path=$1
repo=$2
echo "Ok, will run kurtosis on all .json networks under '$path'."

for json in "$path"/*.json
do
    cmd="kurtosis run . --args '{\"json_nw_name\": \"$repo/$json\"}'"
    echo "$cmd"
    eval "$cmd"
done

echo "$repo, $path, DONE!"
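For each JSON file, the script composes the `--args` payload that main.star receives as `args.json_nw_name`. A Python sketch of that payload construction; the repo prefix and file name below are examples (the prefix matches the one in main.star's comment):

```python
import json

repo = "github.com/user/kurto-module"  # example repo prefix, as in main.star's comment
path = "json_dir"                      # example directory of network JSONs
fname = "WakuNet_1.json"               # example file produced by gen_jsons.sh

# run_kurtosis_tests.sh builds: kurtosis run . --args '{"json_nw_name": "<repo>/<path>/<file>"}'
args_payload = json.dumps({"json_nw_name": "{}/{}/{}".format(repo, path, fname)})
cmd = "kurtosis run . --args '{}'".format(args_payload)
print(cmd)
```

Using json.dumps (rather than hand-quoting, as the shell script does) guarantees the payload stays valid JSON even if a file name contains characters that need escaping.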