Tips for shorter build times

Runner availability

Currently, the biggest bottleneck when optimizing workflows is the availability of Windows and macOS runners. Anything that reduces the time spent in Windows or macOS jobs therefore shortens the wait for runners to become available. The usage limits for GitHub Actions are described in the GitHub documentation. You can see a breakdown of runner usage for your jobs in the Actions tab of your repository.

Windows is slow

Git operations and compilation are both slow on Windows; a Windows job can easily take twice as long as a Linux job. It therefore makes sense to use a Windows runner only for testing Windows compatibility, and nothing else. Testing compatibility with other versions of Nim, code coverage analysis, etc. are better performed on a Linux runner.
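
A build matrix along these lines is one way to set this up (a minimal sketch: the Nim versions, the installation step, and the nimble task are illustrative). Linux covers the full version spread, while Windows only checks compatibility with a single version:

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest]
        nim: [1.6.x, 2.0.x, devel]
        include:
          # Windows checks compatibility with one Nim version only
          - os: windows-latest
            nim: 2.0.x
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      # install Nim ${{ matrix.nim }} here, then:
      - run: nimble test
```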

Parallelization

Breaking up a long build job into several jobs that run in parallel can reduce the wall-clock time of a workflow. For instance, you might run unit tests and integration tests in parallel. Keep in mind, however, that the availability of macOS and Windows runners is the biggest bottleneck: if you split a Windows job into two jobs, you now need to wait for two Windows runners to become available! Parallelization therefore often only makes sense for Linux jobs.
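
A minimal sketch of such a split (the nimble task names are hypothetical); jobs with no needs dependency between them are scheduled in parallel automatically:

```yaml
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: nimble test_unit          # hypothetical task
  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: nimble test_integration   # hypothetical task
```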

Refactoring

As with any code, complex workflows are hard to read and change. You can use composite actions and reusable workflows to refactor complex workflows.
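
As a rough sketch (the file name and input are illustrative), a reusable workflow declares a workflow_call trigger, and callers reference it with uses:

```yaml
# In the reusable workflow (e.g. build-reusable.yml):
on:
  workflow_call:
    inputs:
      os:
        required: true
        type: string
# jobs: ... (the actual build steps)
---
# In the calling workflow:
jobs:
  build-linux:
    uses: ./.github/workflows/build-reusable.yml
    with:
      os: linux
```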

Steps for measuring time

Breaking up steps allows you to see the time spent in each part. For instance, instead of having one step that performs all tests, you might have separate steps for unit tests and integration tests, so that you can see how much time is spent in each.
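
For example (a sketch; the nimble task names are hypothetical), the Actions UI then reports the duration of each step separately:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Unit tests
    run: nimble test_unit          # hypothetical task
  - name: Integration tests
    run: nimble test_integration   # hypothetical task
```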

Fix slow tests

Try to avoid slow unit tests. They not only slow down continuous integration, but also local development. If you encounter slow tests you can consider reworking them to stub out the slow parts that are not under test, or use smaller data structures for the test.

You can use unittest2 together with the environment variable NIMTEST_TIMING=true to show how much time is spent in every test (see the unittest2 documentation for details).
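
In a workflow step this could look as follows (a sketch; the test command is illustrative):

```yaml
- name: Unit tests
  run: nimble test
  env:
    NIMTEST_TIMING: "true"   # unittest2 prints per-test durations
```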

Caching

Ensure that caches are updated over time. For instance, if you cache the latest version of the Nim compiler, then you want to update the cache when a new version of the compiler is released. See also the documentation for the cache action.
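
One way to achieve this (a sketch; the cache path and the NIM_VERSION variable are assumptions) is to make the compiler version part of the cache key, so that a new release automatically starts a fresh cache entry:

```yaml
- name: Cache Nim compiler
  uses: actions/cache@v4
  with:
    path: nim                                        # assumed install location
    key: nim-${{ runner.os }}-${{ env.NIM_VERSION }} # NIM_VERSION assumed to be set elsewhere
```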

Fail fast

By default a job matrix fails fast: if one job in the matrix fails, the remaining jobs are cancelled. This might seem inconvenient, because when you're debugging an issue you often want to know whether you introduced a failure on all platforms or only on a single one. You might be tempted to disable fail-fast, but keep in mind that this keeps runners busy for longer on a workflow that you already know is going to fail. Subsequent runs will therefore take longer to start. Fail-fast is most likely better for overall development speed.
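
For reference, fail-fast is configured per job matrix; shown here with its default value:

```yaml
strategy:
  fail-fast: true   # the default; set to false only while debugging platform-specific failures
  matrix:
    os: [ubuntu-latest, macos-latest, windows-latest]
```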