sourcecred/config/test.js

// @flow
/*:: import type {Task} from "../src/tools/execDependencyGraph"; */
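// Usage: node ./config/test.js [--full] [--ci]
//   --full  also run the extra GitHub fetch tests and the full sharness suite
//   --ci    limit memory usage (run flow, then unit, then everything else)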
const tmp = require("tmp");
const execDependencyGraph = require("../src/tools/execDependencyGraph");
main();
function main() {
  const options = parseArgs();
  if (isForkedPrFullRun(options)) {
    printForkedPrFullRunErrorMessage();
    process.exitCode = 1;
    return;
  }
  const printVerboseResults = options.mode === "FULL";
  const runOptions = {printVerboseResults};
  const tasks = makeTasks(options.mode, options.limitMemoryUsage);
  execDependencyGraph(tasks, runOptions).then(({success}) => {
    process.exitCode = success ? 0 : 1;
  });
}
function parseArgs() {
  const options = {mode: "BASIC", limitMemoryUsage: false};
  const args = process.argv.slice(2);
  for (const arg of args) {
    if (arg === "--full") {
      options.mode = "FULL";
    } else if (arg === "--ci") {
      options.limitMemoryUsage = true;
    } else {
      throw new Error("unknown argument: " + JSON.stringify(arg));
    }
  }
  return options;
}
/**
 * Check whether we're running full CI for a PR created on a fork. In
 * this state, Circle CI omits secure environment variables (which is
 * good and desired), but this means that we'll have to abort tests.
 */
function isForkedPrFullRun(options) {
  if (options.mode !== "FULL") {
    return false;
  }
  if (!process.env["CIRCLE_PR_NUMBER"]) {
    // This environment variable is only set on forked PRs.
    // https://circleci.com/docs/2.0/env-vars/#built-in-environment-variables
    return false;
  }
  if (process.env["SOURCECRED_GITHUB_TOKEN"]) {
    return false;
  }
  return true;
}
function printForkedPrFullRunErrorMessage() {
  console.error(
    [
      "fatal: cannot run full test suite: missing credentials",
      "Tests on forked PRs run without credentials by default. A core team ",
      "member will sanity-check your PR and push its head commit to a branch ",
      "on the main SourceCred repository, which will re-run these tests.",
    ].join("\n")
  );
}
function makeTasks(
  mode /*: "BASIC" | "FULL" */,
  limitMemoryUsage /*: boolean */
) {
  const backendOutput = tmp.dirSync({
    unsafeCleanup: true,
    prefix: "sourcecred-test-",
  }).name;
  console.log("tmpdir for backend output: " + backendOutput);
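  // Helper: wrap an invocation so that it runs with SOURCECRED_BIN pointing
  // at the backend build output directory above.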
  function withSourcecredBinEnv(
    invocation /*: $ReadOnlyArray<string> */
  ) /*: string[] */ {
    return ["env", "SOURCECRED_BIN=" + backendOutput, ...invocation];
  }
  function flowCommand(limitMemoryUsage /*: boolean */) {
    const cmd = [
      "yarn",
      "run",
      "--silent",
      "flow",
      "--quiet",
      "--max-warnings=0",
    ];
    // Use only one worker to try to avoid flakey flow failures
    if (limitMemoryUsage) {
      cmd.push("--flowconfig-name", ".flowconfig-ci");
    }
    return cmd;
  }
  const basicTasks = [
    {
      id: "ensure-flow-typing",
      cmd: ["./scripts/ensure-flow.sh"],
      deps: [],
    },
    {
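      // The literal token is split, presumably so that this file does not
      // itself trip the stop-ship check.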
      // eslint-disable-next-line no-useless-concat
      id: "check-stop" + "ships",
      // eslint-disable-next-line no-useless-concat
      cmd: ["./scripts/check-stop" + "ships.sh"],
      deps: [],
    },
    {
      id: "check-pretty",
      cmd: ["yarn", "run", "--silent", "check-pretty"],
      deps: [],
    },
    {
      id: "lint",
      cmd: ["yarn", "run", "--silent", "lint"],
      deps: [],
    },
    {
      id: "flow",
      cmd: flowCommand(limitMemoryUsage),
      deps: [],
    },
    {
      id: "unit",
cmd: ["yarn", "run", "--silent", "unit", "--ci"],
      deps: [],
    },
    {
      id: "check-gnu-coreutils",
      cmd: ["./scripts/check-gnu-coreutils.sh"],
      deps: [],
    },
    {
      id: "backend",
      cmd: [
        "yarn",
"run",
"--silent",
"backend",
"--output-path",
backendOutput,
      ],
      deps: [],
    },
    {
      id: {BASIC: "sharness", FULL: "sharness-full"}[mode],
      cmd: withSourcecredBinEnv([
        "yarn",
        "run",
        "--silent",
        {BASIC: "sharness", FULL: "sharness-full"}[mode],
      ]),
      deps: ["backend", "check-gnu-coreutils"],
    },
  ];
  const extraTasks = [
    {
      id: "fetchGithubRepoTest",
      cmd: withSourcecredBinEnv([
        "./src/plugins/github/fetchGithubRepoTest.sh",
        "--no-build",
      ]),
      deps: ["backend"],
    },
    {
      id: "fetchGithubOrgTest",
      cmd: withSourcecredBinEnv([
        "./src/plugins/github/fetchGithubOrgTest.sh",
        "--no-build",
      ]),
      deps: ["backend"],
    },
  ];
  const tasks = (function() {
    switch (mode) {
      case "BASIC":
        return basicTasks;
      case "FULL":
        return [].concat(basicTasks, extraTasks);
      default:
        /*:: (mode: empty); */ throw new Error(mode);
    }
  })();
  if (limitMemoryUsage) {
    // We've had issues with our tests flakily failing in CI, due to apparent
    // memory issues.
    //
    // This block attempts to limit memory usage by having flow run first,
    // then stopping the flow server, then running unit tests, and only
    // afterwards running all other tasks.
    //
    // The reasoning is that the flow server is fairly memory demanding and we
    // can safely kill it after we've checked the types, and jest is also quite
    // memory intensive. Hopefully by finishing these tasks first and releasing
    // their resources, we won't have more memory exhaustion.
    tasks.forEach((task) => {
      switch (task.id) {
        case "flow":
          // Run flow first
          return;
        case "unit":
          task.cmd.push("--maxWorkers=2");
          // Run unit after we _stopped_ the flow server
          // (to free up memory from flow)
          task.deps.push("flow-stop");
          return;
        default:
          // Run everything else after unit tests
          // (unit is a memory hog)
          task.deps.push("unit");
      }
    });
    const flowStopTask /*: Task */ = {
      id: "flow-stop",
      cmd: ["yarn", "run", "--silent", "flow", "stop"],
      deps: ["flow"],
    };
    tasks.push(flowStopTask);
  }
  return tasks;
}