Mirror of https://github.com/waku-org/waku-interop-tests.git (synced 2025-02-14 17:39:44 +00:00)
CI_RUNNERS (#88)
* Change number of threads for CI runners to 8
* Change number of threads to 12
* Change number of threads to auto select
* Change number of threads to logical instead of auto
* Change number of threads to 150
* Change total workers to 40
* Add workflow_call in on section
* Remove "remove_unwanted_software" from yml
* Change path of requirements.txt
* Modify path of .txt file again
* Change repo name
* Create docker volume
* Merge master to branch
* Revert changes done in the branch
* Try the sharding option
* Add pytest-shard 0.1.2 to requirements.txt
* Reduce shards to 2
* Fix max number of shards error
* Modify pytest run command
* Change number of shards to 1
* Increase shards to 4
* Modify pytest command
* Change shards to 4
* Skip 3 tests
* Skip rln tests
* Skip test metric
* Skip rln tests
* Fix skipif mark
* Fix linters
* Fix linters 2
* Run pre-commit command to fix linters
* Make each shard upload a separate report using artifacts
* Change number of shards to 5
* Change artifacts version to 4
* Increase shards to 8
* Increase shards to 11
* Make test_get_multiple_2000_store_messages run in a separate shard
* Mark test_get_multiple_2000_store_messages to run in shard 1
* Use logic in yml file to run test_cursor.py in a separate file
* Fix logic to run test_cursor.py in a separate shard
* Add path of file instead of file name in yml file
* Fix error in pytest command
* Rerun test test_get_multiple_2000_store_messages
* Run test_get_multiple_2000_store_messages in a separate shard
* Fix error in pytest command
* Fix command again by using -k instead of test name
* Add test_rln.py again to job and increase shards to 13
* Run test_rln.py in a single shard
* Fix pytest command
* Fix syntax error in pytest command
* Increase workers to 4
* Create new test file for test_get_multiple_2000_store_messages
* Collect reports into one report
* Modify aggregate reports
* Make changes to report collecting
* Add more reporting ways
* Add send reports to Discord again
* Fix command syntax error
* Revert changes
* Make changes to fix the not-executed-tests issue
* Remove 12 from matrix shards
* Try to fix missing test issue by adding collect-only
* Modify pytest command (remove collect-only)
* Increase timeout for test test_get_multiple_2000_store_messages
* Reduce shards again to 8
* Remove loadfile option
* Increase shards to 22
* Increase shards to 42
* Increase shards to 49
* Increase shards to 63
* Modify test command to have 16 shards
* Change shards to 9
* Fix pytest command
* Use ignore instead of -m
* Fix syntax error
* Modify test file path
* Increase shards to 16
* Modify test command
* fix: add multiple machines
* fix: prevent fail fast
* Remove multiple skips
* Revert changes in test_rln file
* Modify test command
  fix: add multiple machines
  fix: prevent fail fast
  checkout on smoke_tests tag (#96)
* checkout on smoke_tests tag
* Modify pytest command
* Update README.md: add steps on how to use the new tag for PR tests in the readme file
  Remove multiple skips
  Revert changes in test_rln file
* Add timeout to test test_on_empty_postgress_db
* Add comments in workflow for shards

---------

Co-authored-by: fbarbu15 <florin@status.im>
Co-authored-by: Florin Barbu <barbu_florin_adrian@yahoo.com>
This commit is contained in:
parent 1f853b3a11
commit 1a981a16e4
.github/workflows/test_common.yml (vendored, 150 lines changed)
@@ -35,6 +35,15 @@ jobs:
  tests:
    name: tests
    strategy:
      fail-fast: false
      matrix:
        shard: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]
    # total number of shards = 18 means the tests are split into 18 parallel jobs to increase execution speed
    # command for sharding:
    # pytest --shard-id=<shard_number> --num-shards=<total_shards>
    # shard 16 is reserved for the test_rln.py file, as its tests must run sequentially
    # shard 17 is reserved for test_cursor_many_msgs.py, as it takes more than 7 minutes
    runs-on: ubuntu-latest
    timeout-minutes: 120
    steps:
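
For readers unfamiliar with pytest-shard: each CI job selects a deterministic, disjoint slice of the collected tests, so the 18 matrix jobs together still cover the whole suite. Below is a minimal conceptual sketch of that selection idea as a conftest.py hook. It is illustrative only and is not the pytest-shard 0.1.2 source; the plugin already registers the real --shard-id/--num-shards options, and its exact hashing may differ.

# conftest.py (illustrative sketch of hash-based sharding, in the spirit of pytest-shard)
# Do not add this alongside the real plugin, which provides the same options.
import hashlib

def _stable_hash(nodeid: str) -> int:
    # sha256 instead of hash(): Python's built-in hash() is salted per process,
    # so it would not assign tests to the same shard on every CI runner.
    return int.from_bytes(hashlib.sha256(nodeid.encode()).digest()[:8], "big")

def pytest_addoption(parser):
    parser.addoption("--shard-id", type=int, default=0)
    parser.addoption("--num-shards", type=int, default=1)

def pytest_collection_modifyitems(config, items):
    num_shards = config.getoption("--num-shards")
    shard_id = config.getoption("--shard-id")
    if num_shards <= 1:
        return
    # Keep only the tests whose hash falls into this shard; every test lands in
    # exactly one shard, so the job matrix covers the full suite.
    items[:] = [item for item in items if _stable_hash(item.nodeid) % num_shards == shard_id]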
@@ -52,64 +61,91 @@ jobs:
      - run: pip install -r requirements.txt

      - name: Run tests
        run: |
          pytest -n 4 --dist loadgroup --reruns 2 --alluredir=allure-results

      - name: Get allure history
        if: always()
        uses: actions/checkout@v4
        with:
          ref: gh-pages
          path: gh-pages

      - name: Setup allure report
        uses: simple-elf/allure-report-action@master
        if: always()
        id: allure-report
        with:
          allure_results: allure-results
          gh_pages: gh-pages/${{ env.CALLER }}
          allure_history: allure-history
          keep_reports: 30
          report_url: https://waku-org.github.io/waku-interop-tests/${{ env.CALLER }}

      - name: Deploy report to Github Pages
        uses: peaceiris/actions-gh-pages@v3
        if: always()
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_branch: gh-pages
          publish_dir: allure-history
          destination_dir: ${{ env.CALLER }}

      - name: Create job summary
        if: always()
        env:
          JOB_STATUS: ${{ job.status }}
        run: |
          echo "## Run Information" >> $GITHUB_STEP_SUMMARY
          echo "- **Event**: ${{ github.event_name }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Actor**: ${{ github.actor }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Node1**: ${{ env.NODE_1 }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Node2**: ${{ env.NODE_2 }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Additional Nodes**: ${{ env.ADDITIONAL_NODES }}" >> $GITHUB_STEP_SUMMARY
          echo "## Test Results" >> $GITHUB_STEP_SUMMARY
          echo "Allure report will be available at: https://waku-org.github.io/waku-interop-tests/${{ env.CALLER }}/${{ github.run_number }}" >> $GITHUB_STEP_SUMMARY
          if [ "$JOB_STATUS" != "success" ]; then
            echo "There are failures with nwaku node. cc <@&1111608257824440330>" >> $GITHUB_STEP_SUMMARY
          if [ "${{ matrix.shard }}" == "16" ]; then
            pytest tests/relay/test_rln.py --alluredir=allure-results-${{ matrix.shard }}
          elif [ "${{ matrix.shard }}" == "17" ]; then
            pytest tests/store/test_cursor_many_msgs.py --alluredir=allure-results-${{ matrix.shard }}
          elif [ "${{ matrix.shard }}" != "17" ]; then
            pytest --ignore=tests/relay/test_rln.py --ignore=tests/store/test_cursor_many_msgs.py --reruns 2 --shard-id=${{ matrix.shard }} --num-shards=16 --alluredir=allure-results-${{ matrix.shard }}
          fi
          {
            echo 'JOB_SUMMARY<<EOF'
            cat $GITHUB_STEP_SUMMARY
            echo EOF
          } >> $GITHUB_ENV

      - name: Send report to Discord
        uses: rjstone/discord-webhook-notify@v1
        if: always() && env.CALLER != 'manual'
      - name: Upload allure results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          severity: ${{ job.status == 'success' && 'info' || 'error' }}
          username: ${{ github.workflow }}
          description: "## Job Result: ${{ job.status }}"
          details: ${{ env.JOB_SUMMARY }}
          webhookUrl: ${{ secrets.DISCORD_TEST_REPORTS_WH }}
          name: allure-results-${{ matrix.shard }}
          path: allure-results-${{ matrix.shard }}

  aggregate-reports:
    runs-on: ubuntu-latest
    needs: tests
    if: always()
    steps:
      - name: Download all allure results
        uses: actions/download-artifact@v4
        with:
          path: all-results
          merge-multiple: true

      - name: Get allure history
        if: always()
        uses: actions/checkout@v4
        with:
          ref: gh-pages
          path: gh-pages

      - name: Setup allure report
        uses: simple-elf/allure-report-action@master
        if: always()
        id: allure-report
        with:
          allure_results: all-results
          gh_pages: gh-pages/${{ env.CALLER }}
          allure_history: allure-history
          keep_reports: 30
          report_url: https://waku-org.github.io/waku-interop-tests/${{ env.CALLER }}

      - name: Deploy report to Github Pages
        uses: peaceiris/actions-gh-pages@v3
        if: always()
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_branch: gh-pages
          publish_dir: allure-history
          destination_dir: ${{ env.CALLER }}

      - name: Create job summary
        if: always()
        env:
          JOB_STATUS: ${{ job.status }}
        run: |
          echo "## Run Information" >> $GITHUB_STEP_SUMMARY
          echo "- **Event**: ${{ github.event_name }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Actor**: ${{ github.actor }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Node1**: ${{ env.NODE_1 }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Node2**: ${{ env.NODE_2 }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Additional Nodes**: ${{ env.ADDITIONAL_NODES }}" >> $GITHUB_STEP_SUMMARY
          echo "## Test Results" >> $GITHUB_STEP_SUMMARY
          echo "Allure report will be available at: https://waku-org.github.io/waku-interop-tests/${{ env.CALLER }}/${{ github.run_number }}" >> $GITHUB_STEP_SUMMARY
          if [ "$JOB_STATUS" != "success" ]; then
            echo "There are failures with nwaku node. cc <@&1111608257824440330>" >> $GITHUB_STEP_SUMMARY
          fi
          {
            echo 'JOB_SUMMARY<<EOF'
            cat $GITHUB_STEP_SUMMARY
            echo EOF
          } >> $GITHUB_ENV

      - name: Send report to Discord
        uses: rjstone/discord-webhook-notify@v1
        if: always() && env.CALLER != 'manual'
        with:
          severity: ${{ job.status == 'success' && 'info' || 'error' }}
          username: ${{ github.workflow }}
          description: "## Job Result: ${{ job.status }}"
          details: ${{ env.JOB_SUMMARY }}
          webhookUrl: ${{ secrets.DISCORD_TEST_REPORTS_WH }}
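
Two pytest flags in the run steps above deserve a note. The original single-job command used pytest-xdist (`-n 4 --dist loadgroup`), where loadgroup keeps every test carrying the same xdist_group mark on one worker, so order-sensitive tests can still run in parallel; and the sharded command for shards 0-15 keeps `--reruns 2` from pytest-rerunfailures. A minimal sketch of both markers follows; the test names, group label, and bodies are illustrative, not code from this repository.

import random
import pytest

# With `--dist loadgroup`, pytest-xdist schedules every test in the same
# xdist_group onto a single worker, so these two never run concurrently.
@pytest.mark.xdist_group(name="store_node1")
def test_store_query_first():
    assert True

@pytest.mark.xdist_group(name="store_node1")
def test_store_query_second():
    assert True

# pytest-rerunfailures: per-test equivalent of the suite-wide `--reruns 2` flag.
@pytest.mark.flaky(reruns=2, reruns_delay=1)
def test_occasionally_flaky_network_call():
    assert random.random() > 0.1  # stand-in for a flaky network assertion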
requirements.txt
@@ -39,3 +39,4 @@ typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.2.2
virtualenv==20.25.0
pytest-shard==0.1.2
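
With pytest-shard installed from the requirement above, a single CI shard can be reproduced locally by passing the same flags the workflow uses. A small sketch; the wrapper script is hypothetical, and only the flags already shown in the workflow are assumed.

# run_shard.py (hypothetical helper): rerun one CI shard locally.
import sys
import pytest

if __name__ == "__main__":
    shard_id = sys.argv[1] if len(sys.argv) > 1 else "0"
    # Mirror the workflow: shards 0-15 exclude the two specially handled files.
    raise SystemExit(pytest.main([
        f"--shard-id={shard_id}",
        "--num-shards=16",
        "--ignore=tests/relay/test_rln.py",
        "--ignore=tests/store/test_cursor_many_msgs.py",
    ]))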
tests/store/test_cursor.py
@@ -10,22 +10,6 @@ from src.steps.store import StepsStore
class TestCursor(StepsStore):
    # we implicitly test the reusability of the cursor for multiple nodes

    def test_get_multiple_2000_store_messages(self):
        expected_message_hash_list = []
        for i in range(2000):
            message = self.create_message(payload=to_base64(f"Message_{i}"))
            self.publish_message(message=message)
            expected_message_hash_list.append(self.compute_message_hash(self.test_pubsub_topic, message))
        store_response = StoreResponse({"paginationCursor": "", "pagination_cursor": ""}, self.store_node1)
        response_message_hash_list = []
        while store_response.pagination_cursor is not None:
            cursor = store_response.pagination_cursor
            store_response = self.get_messages_from_store(self.store_node1, page_size=100, cursor=cursor)
            for index in range(len(store_response.messages)):
                response_message_hash_list.append(store_response.message_hash(index))
        assert len(expected_message_hash_list) == len(response_message_hash_list), "Message count mismatch"
        assert expected_message_hash_list == response_message_hash_list, "Message hash mismatch"

    @pytest.mark.parametrize("cursor_index, message_count", [[2, 4], [3, 20], [10, 40], [19, 20], [19, 50], [110, 120]])
    def test_different_cursor_and_indexes(self, cursor_index, message_count):
        message_hash_list = []
tests/store/test_cursor_many_msgs.py (new file, 29 lines)
@@ -0,0 +1,29 @@
import pytest
from src.env_vars import NODE_1, NODE_2
from src.libs.common import to_base64
from src.node.store_response import StoreResponse
from src.steps.store import StepsStore


@pytest.mark.xfail("go-waku" in NODE_2, reason="Bug reported: https://github.com/waku-org/go-waku/issues/1109")
@pytest.mark.usefixtures("node_setup")
class TestCursorManyMessages(StepsStore):
    # we implicitly test the reusability of the cursor for multiple nodes

    @pytest.mark.timeout(540)
    @pytest.mark.store2000
    def test_get_multiple_2000_store_messages(self):
        expected_message_hash_list = []
        for i in range(2000):
            message = self.create_message(payload=to_base64(f"Message_{i}"))
            self.publish_message(message=message)
            expected_message_hash_list.append(self.compute_message_hash(self.test_pubsub_topic, message))
        store_response = StoreResponse({"paginationCursor": "", "pagination_cursor": ""}, self.store_node1)
        response_message_hash_list = []
        while store_response.pagination_cursor is not None:
            cursor = store_response.pagination_cursor
            store_response = self.get_messages_from_store(self.store_node1, page_size=100, cursor=cursor)
            for index in range(len(store_response.messages)):
                response_message_hash_list.append(store_response.message_hash(index))
        assert len(expected_message_hash_list) == len(response_message_hash_list), "Message count mismatch"
        assert expected_message_hash_list == response_message_hash_list, "Message hash mismatch"
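
The body of this test is the store-pagination pattern used throughout these tests: request a page of 100 messages, follow pagination_cursor, and stop once the node returns no cursor. For clarity, the same idea expressed as a standalone helper; this is a sketch that assumes the get_messages_from_store / StoreResponse interface shown above and is not part of the repository.

def drain_store(steps, node, page_size=100):
    # Walk every store page by following pagination_cursor until the node
    # stops returning one, collecting message hashes in publish order.
    hashes = []
    cursor = ""  # an empty cursor requests the first page
    while cursor is not None:
        response = steps.get_messages_from_store(node, page_size=page_size, cursor=cursor)
        for index in range(len(response.messages)):
            hashes.append(response.message_hash(index))
        cursor = response.pagination_cursor  # None once the last page is reached
    return hashes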
@@ -16,6 +16,7 @@ class TestExternalDb(StepsStore):
        self.subscribe_to_pubsub_topics_via_relay()

    @pytest.mark.dependency(name="test_on_empty_postgress_db")
    @pytest.mark.timeout(60)
    def test_on_empty_postgress_db(self):
        message = self.create_message()
        self.publish_message(message=message)
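
The two decorators added here come from separate plugins: pytest-timeout fails the test if it runs longer than 60 seconds, and pytest-dependency registers it under the name test_on_empty_postgress_db so later tests can declare a dependency on it. A small sketch of how that pairing is typically used; the second test below is a hypothetical illustration, not code from this file.

import pytest

@pytest.mark.dependency(name="test_on_empty_postgress_db")
@pytest.mark.timeout(60)  # pytest-timeout: fail the test if it exceeds 60 seconds
def test_on_empty_postgress_db():
    assert True  # placeholder body

# pytest-dependency: skipped automatically if the named test above did not pass.
@pytest.mark.dependency(depends=["test_on_empty_postgress_db"])
def test_query_after_db_populated():
    assert True  # placeholder body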