This is a minor change to the Discourse mirror so that it supports a
query to get all users from the server. It will be convenient for a
follow-on change which makes `update` search for every user's likes.
I also modified `createGraph` so that it uses the new method, which
results in code that is cleaner and slightly more efficient.
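For reference, a hypothetical sketch of what such an accessor could look like, assuming the mirror exposes a better-sqlite3 handle and keeps a `users` table with a `username` column (both assumptions, not the actual code):

```js
import Database from "better-sqlite3";

// Return every username currently stored in the mirror's SQLite database.
export function allUsernames(db: Database): $ReadOnlyArray<string> {
  return db.prepare("SELECT username FROM users").pluck().all();
}
```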
Test plan: Unit tests updated.
For the Discourse plugin, we really want to be able to add a full record
of all of the users' liked posts as edges in the graph. It's a really
high-signal way to move cred, and it gives individual users a lot of
agency and more ways to engage.
However: we need an API to get this data. Initial searches of the API
docs were unpromising: first, we would need to query potentially every
post to get its likes individually (which makes it very expensive to
find the likes on old posts), and second, the likes did not come with
timestamp information. For a while, I thought we were at an impasse.
I then went fishing in the Discourse implementation for a solution (yay
open source!). Lots of the API is undocumented, since it's whatever
they happen to add to run Discourse. And it turns out there's a
`user_actions` API ([source]) which can provide all of a user's actions
in order, and having your content liked by someone else is considered an
action. Best of all, these actions come with timestamps.
The upshot is that instead of querying every post to get its likes, we
can query every user to get likes. Iterating over all users can still
be slow, but it's far better than iterating over all posts; plus we can
implement caching so that we only infrequently check in on inactive
users.
I've added a `likesByUser` method to the Discourse fetch interface that
provides this information. I've also added a snapshot test for it (and
updated all of the snapshots). I also rolled in a slight refactor to
error handling in the fetcher.
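For illustration, here's a rough sketch of the kind of request `likesByUser` makes; the query parameters (especially the `filter` code that selects like actions) and the response field names are assumptions from reading the Discourse source, not a documented API:

```js
import fetch from "node-fetch"; // stand-in for whichever fetch the fetcher uses

export type LikeAction = {|
  +username: string,
  +postId: number,
  +timestampMs: number,
|};

export async function likesByUser(
  serverUrl: string,
  username: string,
  offset: number
): Promise<LikeAction[]> {
  const url =
    `${serverUrl}/user_actions.json` +
    `?username=${username}&filter=1&offset=${offset}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`user_actions failed: ${response.status}`);
  }
  const json = await response.json();
  // Each action comes with a timestamp, which is exactly what the graph needs.
  return json.user_actions.map((action) => ({
    username: action.username,
    postId: action.post_id,
    timestampMs: Date.parse(action.created_at),
  }));
}
```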
The mirror doesn't yet use this information (will come later).
[source]: 82e07cb0f4/app/controllers/user_actions_controller.rb (L3)
Test plan: `yarn test` passes. Snapshots look good.
This commit adds the logic needed for creating a contribution graph
based on the Discourse data. We first have a declaration with
specifications for the node and edge types in the plugin. We also have a
`createGraph` module which creates a conformant graph from the Mirror
data. The graph creation is thoroughly tested.
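As a rough sketch of what graph creation along these lines looks like, assuming the address-based core `Graph` API; the address layout and the shape of the mirror's read interface here are assumptions, not the actual module:

```js
import {Graph, NodeAddress, EdgeAddress} from "../../core/graph";

type TopicData = {|+id: number, +authorUsername: string|};
type DiscourseData = {+topics: () => $ReadOnlyArray<TopicData>};

export function createGraph(serverUrl: string, data: DiscourseData): Graph {
  const graph = new Graph();
  for (const topic of data.topics()) {
    const topicAddress = NodeAddress.fromParts([
      "sourcecred", "discourse", "topic", serverUrl, String(topic.id),
    ]);
    const userAddress = NodeAddress.fromParts([
      "sourcecred", "discourse", "user", serverUrl, topic.authorUsername,
    ]);
    graph.addNode(topicAddress);
    graph.addNode(userAddress);
    // The author "authors" the topic; similar edges connect posts to
    // topics and posts to the posts they reference.
    graph.addEdge({
      address: EdgeAddress.fromParts([
        "sourcecred", "discourse", "authors", serverUrl, String(topic.id),
      ]),
      src: userAddress,
      dst: topicAddress,
    });
  }
  return graph;
}
```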
Test plan: Inspect unit tests, run `yarn test`. I also have (yet
unpublished) code which loads the graph into the UI, and it appears
fine.
This is a quick fixup so that the coming createGraph module can be
properly tested.
Shout out to @Beanow for anticipating this need in a [review comment].
[review comment]: https://github.com/sourcecred/sourcecred/pull/1266#discussion_r314305108
Test plan: trivial refactor, run `yarn test`
The mirror wraps a SQLite database which will store all of the data we
download from Discourse.
On a call to `update`, it downloads new data from the server and stores
it. Then, when it is asked for information like the topics and posts, it
can just pull from its local copy. This means that we don't need to
re-download the content every time we load a Discourse instance, which
makes the load more performant, more robust to network failures, etc.
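A minimal sketch of the resulting shape (only `update`, `topics`, and `posts` are named above; the type details are assumptions):

```js
export type Topic = {|+id: number, +title: string, +timestampMs: number|};
export type Post = {|+id: number, +topicId: number, +authorUsername: string|};

export interface DiscourseData {
  // Served straight from the local SQLite copy; no network access needed.
  topics(): $ReadOnlyArray<Topic>;
  posts(): $ReadOnlyArray<Post>;
}

export interface MirrorInterface extends DiscourseData {
  // Download whatever is newer than the local copy and persist it,
  // so that subsequent reads are purely local.
  update(): Promise<void>;
}
```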
Thanks to @wchargin, whose work on the GraphQL mirror for GitHub (#622)
inspired this mirror.
Test plan: I've written unit tests that use a mock fetcher to validate
the update logic. I've also used this to do a full load of the real
SourceCred Discourse instance, and to create a corresponding graph
(using subsequent commits).
Progress towards #865.
The `DiscourseFetcher` class abstracts over fetching from the Discourse
API, and post-processing and filtering the result into a form that's
convenient for us.
Testing is a bit tricky because the Discourse API keys are sensitive
(they are admin keys) and so I'm reluctant to commit them, even for our
test instance. As a workaround, I've added a shell script which
downloads some data from the SourceCred test instance, and saves it with
a filename which is an encoding of the actual endpoint. Then, in
testing, we can use a mocked fetch which actually hits the snapshots
directory, and thus validate the processing logic on "real" data from
the server. We also test that the fetch headers are set correctly, and
that we handle non-200 error codes appropriately.
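For concreteness, a hedged sketch of what the snapshot-backed mock fetch can look like (the directory name and the filename encoding are assumptions; the real script may encode endpoints differently):

```js
import fs from "fs";
import path from "path";

const SNAPSHOT_DIR = "src/plugins/discourse/snapshots"; // assumed location

// Resolve a request against the files the shell script downloaded,
// instead of hitting the network.
function snapshotFetch(
  url: string
): Promise<{|+status: number, +json: () => Promise<mixed>|}> {
  const filename = encodeURIComponent(url); // stand-in for the real encoding
  const body = fs.readFileSync(path.join(SNAPSHOT_DIR, filename), "utf8");
  return Promise.resolve({
    status: 200,
    json: () => Promise.resolve(JSON.parse(body)),
  });
}
```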
Test plan: In addition to the included tests, I have an end-to-end test
which actually uses this fetcher to fully populate the mirror and then
generate a valid SourceCred graph.
This builds on API investigations
[here](https://github.com/sourcecred/sourcecred/issues/865#issuecomment-478026449),
and is general progress towards #865. Thanks to @erlend-sh, without whom
we wouldn't have a test instance.
Summary:
In ES6, the [`try` statement grammar][1] requires a catch parameter; the
parameter is only optional in the latest draft of ECMAScript, which is
of course not yet ratified as any actual standard.
Even though we don’t officially pledge to support Node 8, this is
currently the only breakage, and it’s easy enough to fix.
[1]: https://www.ecma-international.org/ecma-262/6.0/#sec-try-statement
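For illustration (not code from this repository):

```js
function riskyOperation() {
  throw new Error("boom");
}

// Only legal in the newest draft; Node 8 raises a SyntaxError at parse time:
try {
  riskyOperation();
} catch {
  // recover
}

// The ES6 grammar requires a catch parameter, so this form works everywhere:
try {
  riskyOperation();
} catch (_e) {
  // recover
}
```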
Test Plan:
Running `yarn start` on Node v8.11.4 no longer raises a syntax error.
wchargin-branch: catch-parameter
Summary:
Introduced in #1277.
Test Plan:
Run `yarn start` and visit <http://localhost:8080/test/FileUploader/>.
Conduct the test plan as specified on that page.
wchargin-branch: fileuploader-target
I'm mostly motivated by wanting to get greenkeeper lockfile
auto-updating working (see #1269), although this is also a first step
towards making SourceCred usable from NPM (#1232).
For now, see this as us making sure we claim the sourcecred package name
on npm (see: https://www.npmjs.com/package/sourcecred).
I also fixed the license spec so that it's valid SPDX.
Summary:
To elaborate a bit: The repository-level `.gitignore` file is for
artifacts that are generated _by the code/build of that project_. This
includes `node_modules/`, `bin/`, `build/`, etc. These should be
necessary for all users of the project.
The user-level `.gitignore_global` file is for files that _your system_
generates. These are swap files (`.swp`, `.swo`, `.swa` for Vim), file
system metadata (`.DS_Store` for macOS, `Thumbs.db` for Windows), trash
directories, etc.
(See `man gitignore` for details about the two files. Take a look at
[the `.gitignore` for Git itself][git-gitignore] as an example.)
[git-gitignore]: https://github.com/git/git/blob/master/.gitignore
It doesn’t make sense to put the latter category of patterns into the
project’s `.gitignore`. You can’t accommodate every programming
environment under the sun. The file would be hundreds of lines.
By removing these patterns from the `.gitignore`, we help teach users
about how to configure `.gitignore_global` to set up their own
environment properly, once and for all.
This reverts commit 816c954f3d.
Test Plan:
The `.gitignore` now only contains patterns specific to SourceCred.
wchargin-branch: gitignore-project-only
Summary:
Backticks are discouraged relative to the `$(…)` form for command
substitution, because they are harder to read and do not nest without
exponential escaping:
```shell
$ foo=$(echo $(echo hi $(echo bye))) # clear
$ foo=`echo \`echo hi \\\`echo bye\\\`\`` # hmm
```
In this context, command substitution should not be used at all; `PWD`
is a special variable that always contains the current working
directory. The new version of the code will be correct even if the
current working directory ends with whitespace that would be stripped
off by the command substitution.
Test Plan:
Prepended an `echo` to the relevant line, and verified that the script
has the same output before and after this change.
wchargin-branch: fix-backticks
This updates the deploys script so that we now load full projects for
@libp2p, @ipld, @sourcecred, and @filecoin-project.
I'd like to support @ipfs, but we need to tackle #1256 first.
The code is mostly ported from the legacy app. However, we no longer
assume that we are showing every type for every plugin. Instead, the
types are manually selected. For now, we permit the GitHub user type
and the GitHub repo type, as these are the two types that are included
in filtered timeline cred.
Test plan: Manual inspection is necessary, since this frontend is mostly
untested. I've done that inspection. Also, `yarn test` passes.
Minor change to the API for `MapUtil.pushValue`: now it returns the
resultant array. I've found this convenient in at least one case, and
previously we weren't returning anything, so it's a cheap change.
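A sketch of the changed behavior (the real implementation lives in `MapUtil`; the body here is illustrative):

```js
export function pushValue<K, V>(map: Map<K, V[]>, key: K, value: V): V[] {
  let values = map.get(key);
  if (values == null) {
    values = [];
    map.set(key, values);
  }
  values.push(value);
  return values; // new: previously nothing was returned
}
```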
Test plan: Unit test added.
Summary:
In #1259, `flow-bin` was upgraded to 0.104.0 in `package.json`, but no
corresponding change was made in the lock file.
Test Plan:
Running `yarn` is now a no-op.
wchargin-branch: lock-flow-bin-0.104.0
Summary:
[Prettier docs] recommend pinning an exact version because their semver
policy does not extend to stylistic changes, and so patch releases may
change the formatting output.
Given some recent discussion about formatting skew of unknown cause,
this seems like a reasonable safety measure.
Generated with `yarn add --dev --exact prettier`.
[Prettier docs]: https://prettier.io/docs/en/install.html
wchargin-branch: prettier-exact
I moved sourcecred/example-git{,hub} to the @sourcecred-test org.
This commit fixes the build given that move.
I've realized that in #1233 I inadvertently made some Git tests that
depend on a snapshot impossible to update. I'm going to compound on
that slight technical debt by skipping the tests that depended on that
snapshot. I recognize and accept that I'll need to pay this down when I
resuscitate the git plugin.
Test plan: `yarn test --full`.
This commit removes the `pagerank` and `analyze` commands (both of which
never saw real usage), removes the outdated adapter-based `loadGraph`
method, and removes all traces of the analysis adapters.
It builds on work in #1233 and #1136.
Test plan: `yarn test --full` passes.
This commit swaps usage over to the new implementation of `cli/load`
(the one that wraps `api/load`) and makes changes throughout the project
to accommodate the fact that we now track instances by Project rather
than by RepoId.
Test plan: Unit tests updated; run `yarn test --full`. Also, for safety:
actually load a project (by whole org, why not) and verify that the
frontend still works.
I'm re-organizing SourceCred data to be oriented around the graph,
rather than around plugin-specific data structures. There is therefore
no longer a need for the git loading logic, which centers on saving a
repository.json file that's been potentially merged across repos, or
indeed for the repository-merging logic at all. So I'm removing
`git/loadGitData`, `git/mergeRepository`, and relatives.
Test plan: `yarn test --full` passes.
There's no need for us to depend on `mkdirp`, because the `fs-extra`
module already has `fs.mkdirp` and `fs.mkdirpSync`. This commit removes
the dep from our `package.json`, and removes all explicit imports of it.
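A representative example of the swap (the exact call sites vary):

```js
import fs from "fs-extra";

const outputDir = "path/to/output"; // placeholder

// Before: `import mkdirp from "mkdirp"; mkdirp.sync(outputDir);`
// After: fs-extra exposes the same helper directly.
fs.mkdirpSync(outputDir);
```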
Test plan: `yarn test --full` passes. `git grep "import mkdirp"` has no
hits.
This adds a new module, `api/load`, which implements the logic that will
underlie the new `sourcecred load` command. The `api` package is a new
folder that will contain the logic that powers the CLI (but will be
callable directly as we improve SourceCred). As a heuristic, nontrivial
logic in `cli/` should be factored out to `api/`.
In the future, we will likely want to refactor these APIs to
make them more atomic/composable. `api/load` does "all the things" in
terms of loading data, computing cred, and writing it to disk. I'm going
with the simplest approach here (mirroring existing functionality) so
that we can merge #1233 and realize its many benefits more easily.
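A rough sketch of the module's shape, as described above; the option names and module paths are assumptions rather than the actual signature:

```js
import {type Project} from "../core/project";
import {type TaskReporter} from "../util/taskReporter";

export type LoadOptions = {|
  +project: Project,
  +sourcecredDirectory: string,
  +taskReporter: TaskReporter,
|};

// Loads plugin data for the project, computes cred, and writes the results
// under the SourceCred directory, mirroring what `sourcecred load` does.
export async function load(options: LoadOptions): Promise<void> {
  // (elided)
}
```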
This work is factored out of #1233. Thanks to @Beanow for [review]
of the module, which resulted in several changes (e.g. organizing it
under api/, having the TaskReporter be dependency injected).
[review]: https://github.com/sourcecred/sourcecred/pull/1233#pullrequestreview-263633643
Test plan: `api/load` is tested (via mocking unit tests). Run `yarn test`
This commit refactors the `util/taskReporter` module so that
`TaskReporter` is an interface; the class previously called
`TaskReporter` is renamed to `LoggingTaskReporter`. We also export a
`TestTaskReporter` which implements the interface, and is very easy to
test.
The motivation: This will make it much easier to write tested code that
uses a `TaskReporter`, as now the test code can provide a
`TestTaskReporter` and check that all tasks get finished, that task
ids are as expected, etc.
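A minimal sketch of the shape this enables in tests (the method names and the recording strategy are assumptions, not the actual `TestTaskReporter`):

```js
export type TaskId = string;

export interface TaskReporter {
  start(taskId: TaskId): TaskReporter;
  finish(taskId: TaskId): TaskReporter;
}

// A reporter that just records what it saw, so tests can assert on it.
export class RecordingTaskReporter implements TaskReporter {
  started: TaskId[] = [];
  finished: TaskId[] = [];
  start(taskId: TaskId): TaskReporter {
    this.started.push(taskId);
    return this;
  }
  finish(taskId: TaskId): TaskReporter {
    this.finished.push(taskId);
    return this;
  }
}
```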
Test plan: The `TestTaskReporter` is tested. Run `yarn test`.
Throughout the codebase, we freeze objects when we want to ensure that
their properties are never altered -- e.g. because they are a plugin
declaration, or are being re-used for various test cases.
We generally use `Object.freeze`. This has the disadvantage that it does
not work recursively, so a frozen object's mutable fields and properties
can still be mutated. (E.g. if `const obj = Object.freeze({foo: []})`,
then `obj.foo.push(1)` will succeed in mutating the 'frozen' object).
Sometimes we anticipate this and explicitly freeze the sub-fields (which
is tedious); sometimes we forget (which invites errors). This change
simply replaces all instances of Object.freeze with [deep-freeze], so we
don't need to worry about the issue at all anymore.
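Illustration of the difference (standalone snippet, not from the codebase):

```js
import deepFreeze from "deep-freeze";

const shallow = Object.freeze({foo: []});
shallow.foo.push(1); // succeeds: the nested array was never frozen

const deep = deepFreeze({foo: []});
deep.foo.push(1); // TypeError (in strict mode): nested values are frozen too
```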
Test plan: `yarn test` passes (after updating snapshots);
`git grep Object.freeze` returns no hits.
[deep-freeze]: https://www.npmjs.com/package/deep-freeze
This is a replacement for `github/loadGithubData` which returns a
combined Graph rather than a combined RelationalView. This provides a
major benefit, which is that we can use the (robust) Graph merge logic
rather than the (buggy) relational view merge.
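Roughly, the pipeline has this shape (the per-repo helper is injected here because its real name is not specified; combining graphs assumes the core `Graph.merge` API):

```js
import {Graph} from "../../core/graph";

type RepoId = {|+owner: string, +name: string|};

export async function loadCombinedGraph(
  repoIds: $ReadOnlyArray<RepoId>,
  token: string,
  graphForRepo: (repoId: RepoId, token: string) => Promise<Graph>
): Promise<Graph> {
  // One graph per repository, combined with the core Graph merge logic
  // instead of merging RelationalViews.
  const graphs = await Promise.all(
    repoIds.map((repoId) => graphForRepo(repoId, token))
  );
  return Graph.merge(graphs);
}
```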
Test plan: This function is untested. It basically pipelines a few APIs
together. I think that Flow is basically sufficient to validate that it
works, and writing a unit test would be frustrating (it would mostly
involve re-integrating the functionality via mocks). A future commit
makes this part of the pipeline that generates snapshot tests, so it is
de-facto integration tested.
This module builds on the project logic added in #1238, and makes it
easy to create projects based on a simple string configuration.
Basically, the spec `foo/bar` creates a project containing just the repo
foo/bar, and the spec `@foo` creates a project containing all of the
repos from the user/organization named foo.
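A hedged sketch of the spec handling (the function and helper names are illustrative, not the module's real exports):

```js
type RepoId = {|+owner: string, +name: string|};
type Project = {|+id: string, +repoIds: $ReadOnlyArray<RepoId>|};

export async function projectFromSpec(
  spec: string,
  token: string,
  reposForOwner: (owner: string, token: string) => Promise<$ReadOnlyArray<RepoId>>
): Promise<Project> {
  if (spec.startsWith("@")) {
    // `@foo`: every repository owned by the user or organization `foo`.
    const owner = spec.slice(1);
    return {id: spec, repoIds: await reposForOwner(owner, token)};
  }
  // `foo/bar`: a project containing just that single repository.
  const [owner, name] = spec.split("/");
  return {id: spec, repoIds: [{owner, name}]};
}
```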
This is pulled out of #1233, but I've enhanced it to support
organizations out of the box.
The method is thoroughly tested.
Test plan: `yarn test --full` still passes. Also, I've ensured that the
async `_getProjectIds` is still usable in our webpack configs (via
modifying and testing the dependent commits).
This creates a new `Project` type which will replace `RepoId` as the
index type for saving and loading data.
The basic data type is added to `project.js`. Rather than having a
`RepoIdRegistry`, I intend to infer the registry at build time by
scanning for available projects saved in the sourcecred directory. I've
added the `project_io` module for this task. It has methods for setting
up a project subdirectory, and loading the `Project` info from that
subdirectory.
To ensure that project ids can be encoded even if they have symbols
like `/` and `@`, we base64 encode them.
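A minimal sketch of the encoding (the `Project` fields and the exact encoder are assumptions; the text only pins down that ids are base64-encoded for use as directory names):

```js
type RepoId = {|+owner: string, +name: string|};
export type Project = {|+id: string, +repoIds: $ReadOnlyArray<RepoId>|};

export function encodeProjectId(id: string): string {
  // `/` and `@` are legal in project ids but awkward in directory names,
  // so the on-disk name is a base64 encoding of the id. (A URL/path-safe
  // base64 variant may be what's actually used.)
  return Buffer.from(id).toString("base64");
}
```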
To ensure that project ids can be retrieved at build time, the
`getProjectIds` method is factored out into its own plain ECMAScript
module. For all non-build time needs, it is re-exported from
`project_io`.
Test plan: Unit tests added; run `yarn test`.
See #1243 for context. This is basically a more aggressive version of
pull #1230 -- instead of just running unit tests in isolation, we also
run flow in isolation, and kill the servers afterwards.
Test plan: See how this fares in CI :)