issues and request tests

This commit is contained in:
parent f4fc02f3fe
commit 3a8c154b71

47 README.md
@@ -5,50 +5,3 @@ GitHub Burndown Chart as a service. Answers the question "are my projects on track?"

[![Build Status](http://img.shields.io/codeship/<ID_HERE>.svg?style=flat)](<URL_HERE>)
[![Dependencies](http://img.shields.io/david/radekstepan/burnchart.svg?style=flat)](https://david-dm.org/radekstepan/burnchart)
[![License](http://img.shields.io/badge/license-AGPL--3.0-red.svg?style=flat)](LICENSE)

##Notes

- *payment gateways* in Canada: [Shopify](http://www.shopify.com/payment-gateways/canada), [Chargify](http://chargify.com/payment-gateways/) list; I get free processing on the first $1000 with [Stripe](https://education.github.com/pack/offers)
- start people on a *Community* plan, showing them a comparison table to upgrade to a better offering
- community (open source, local storage), business (private repos, Firebase)
- keep the discussion going via [gitter](http://gitter.im) or have people comment from the app via [helpful](https://helpful.io/)
- [credit card form](http://designmodo.com/ux-credit-card-payment-form/) UX from Designmodo
- workers: using a free instance of IronWorker and assuming a 5s runtime each time gives us a poll every 6 minutes; Zapier would poll every 15 minutes but already integrates Stripe and FB
- worst-case scenario: provide even the Small Business plan for free and deliver a better experience
- $2.5 Node.js PaaS via Gandi with promo code `PAASLAUNCH-C50E-B077-A317`
- let people vote on the features they want to see fast: [tally.tl](http://tally.tl/)
- use [readme.io](https://readme.io/) for documentation
- send handwritten thank-you cards to the first customers
- use [DigitalOcean](https://www.digitalocean.com/) as a GitHub Student (@bath.edu email) to get $100 in platform credits, which translates to 20 months on the slowest (fast enough) dyno
- payments need to be automatic; why penalize loyal users with the burden of an admin task?
- using the app needs to be frictionless; jump straight into the action, fill in data behind the scenes etc.
- send reminders to people whose account is expiring
- [Waffle](https://waffle.io/) from Rally Software has a kanban board that will support a [burnup chart](https://waffle.io/waffleio/waffle.io/cards/53e5347682b317f7d9ad6eac); it will charge $7/month
- should we be part of https://www.zenhub.io/pricing? Pricing is per bundle of users, interesting.

##Plans

###Community Plan

- your repos are saved locally
- no auto-updates to milestones, everything is fetched on page load
- no private repos

###Business Plan

- you need to pay for a license to use the app for business purposes
- repos and milestones are saved remotely
- auto-update with new information
- private repos

###Free Forever Business Plan (= Community Shareholder/Partners Plan)

I can't sell people on free membership, that is only a small incentive. But I can sell them on an app that does what they want: early access to features etc. If someone sees that my app can help them, why not tell me about it so I can make it happen?

I could also give people Assembly coins for each feedback session I've had with them, thus letting them share in the profits. They are basically startup members with equity, by being Product Developers.

To qualify, these people need to be businesses actively using the software, acting as stand-in users for other such $ paying businesses.

Let me call you every 3 months to ask how you are doing, how you are using the software and what I can improve, and you will get 3 months' usage for free. The idea is to keep in touch with the most loyal customers, to hear them say how great/shabby the app is. If they don't want to talk they can always pay for the Business Plan.

If someone stops using the app, send them an email asking for a good time to call so I can make things right. They would get 3 months' usage as well.
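The IronWorker estimate in the notes can be sanity-checked with some arithmetic; a minimal sketch, assuming the free tier allows roughly 10 compute hours per month (the quota value is an assumption, not stated in the notes):

```javascript
// Assuming ~10 free compute hours/month and a 5 s runtime per poll,
// compute the tightest polling interval that stays within the free budget.
function minPollIntervalMinutes(freeHoursPerMonth, runSeconds) {
  const budgetSeconds = freeHoursPerMonth * 3600; // total compute we may burn
  const runsPerMonth = budgetSeconds / runSeconds; // how many 5 s runs fit
  const minutesPerMonth = 30 * 24 * 60; // ~43200 minutes in a month
  return minutesPerMonth / runsPerMonth; // minutes between polls
}

console.log(minPollIntervalMinutes(10, 5)); // 6 — one poll every 6 minutes
```

With a 10-hour budget the "poll every 6 minutes" figure from the notes falls out exactly; a smaller free tier would stretch the interval proportionally.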
259 docs/NOTES.md

@@ -1,237 +1,46 @@

#Firebase Notes
#Notes
##Write

- *payment gateways* in Canada: [Shopify](http://www.shopify.com/payment-gateways/canada), [Chargify](http://chargify.com/payment-gateways/) list; I get free processing on the first $1000 with [Stripe](https://education.github.com/pack/offers)
- start people on a *Community* plan, showing them a comparison table to upgrade to a better offering
- community (open source, local storage), business (private repos, Firebase)
- keep the discussion going via [gitter](http://gitter.im) or have people comment from the app via [helpful](https://helpful.io/)
- [credit card form](http://designmodo.com/ux-credit-card-payment-form/) UX from Designmodo
- workers: using a free instance of IronWorker and assuming a 5s runtime each time gives us a poll every 6 minutes; Zapier would poll every 15 minutes but already integrates Stripe and FB
- worst-case scenario: provide even the Small Business plan for free and deliver a better experience
- $2.5 Node.js PaaS via Gandi with promo code `PAASLAUNCH-C50E-B077-A317`
- let people vote on the features they want to see fast: [tally.tl](http://tally.tl/)
- use [readme.io](https://readme.io/) for documentation
- send handwritten thank-you cards to the first customers
- use [DigitalOcean](https://www.digitalocean.com/) as a GitHub Student (@bath.edu email) to get $100 in platform credits, which translates to 20 months on the slowest (fast enough) dyno
- payments need to be automatic; why penalize loyal users with the burden of an admin task?
- using the app needs to be frictionless; jump straight into the action, fill in data behind the scenes etc.
- send reminders to people whose account is expiring
- [Waffle](https://waffle.io/) from Rally Software has a kanban board that will support a [burnup chart](https://waffle.io/waffleio/waffle.io/cards/53e5347682b317f7d9ad6eac); it will charge $7/month
- should we be part of https://www.zenhub.io/pricing? Pricing is per bundle of users, interesting.

Access a child from a root (db) reference:

##Plans

    rootRef.child('users/mchen/name');

###Community Plan

Save a new user to the db:

- your repos are saved locally
- no auto-updates to milestones, everything is fetched on page load
- no private repos

    var usersRef = ref.child("users");
    usersRef.set({
      alanisawesome: {
        date_of_birth: "June 23, 1912",
        full_name: "Alan Turing"
      },
      gracehop: {
        date_of_birth: "December 9, 1906",
        full_name: "Grace Hopper"
      }
    });

###Business Plan

Flatten all data, otherwise we are retrieving all children.

- you need to pay for a license to use the app for business purposes
- repos and milestones are saved remotely
- auto-update with new information
- private repos

Check if we have a member of a group, which could be used to check whether we have a GitHub user stored in the db:

###Free Forever Business Plan (= Community Shareholder/Partners Plan)

    // see if Mary is in the 'alpha' group
    var ref = new Firebase("https://docs-examples.firebaseio.com/web/org/users/mchen/groups/alpha");
    ref.once('value', function(snap) {
      var result = snap.val() === null ? 'is not' : 'is';
      console.log('Mary ' + result + ' a member of alpha group');
    });

I can't sell people on free membership, that is only a small incentive. But I can sell them on an app that does what they want: early access to features etc. If someone sees that my app can help them, why not tell me about it so I can make it happen?
##Read

I could also give people Assembly coins for each feedback session I've had with them, thus letting them share in the profits. They are basically startup members with equity, by being Product Developers.

The following should get triggered every time we add a new repo to our list, updating our local (Ractive) ref. It gets called for every existing member too.

To qualify, these people need to be businesses actively using the software, acting as stand-in users for other such $ paying businesses.

    // Get a reference to our posts.
    var postsRef = new Firebase("https://docs-examples.firebaseio.com/web/saving-data/fireblog/posts");

Let me call you every 3 months to ask how you are doing, how you are using the software and what I can improve, and you will get 3 months' usage for free. The idea is to keep in touch with the most loyal customers, to hear them say how great/shabby the app is. If they don't want to talk they can always pay for the Business Plan.

    // Retrieve new posts as they are added to Firebase.
    postsRef.on('child_added', function (snapshot) {
      var newPost = snapshot.val();
      console.log("Author: " + newPost.author);
      console.log("Title: " + newPost.title);
    });

Changes can be monitored like the following. This should work even in offline mode, so we should not be changing our local state but the Firebase state, which calls us back with the changes.

    // Get a reference to our posts.
    var postsRef = new Firebase("https://docs-examples.firebaseio.com/web/saving-data/fireblog/posts");

    // Get the data on a post that has changed.
    postsRef.on('child_changed', function (snapshot) {
      var changedPost = snapshot.val();
      console.log('The updated post title is ' + changedPost.title);
    });

When we remove a repo:

    // Get a reference to our posts.
    var postsRef = new Firebase("https://docs-examples.firebaseio.com/web/saving-data/fireblog/posts");

    // Get the data on a post that has been removed.
    postsRef.on('child_removed', function (snapshot) {
      var deletedPost = snapshot.val();
      console.log('The blog post titled ' + deletedPost.title + ' has been deleted');
    });
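The "change Firebase state, not local state" idea above can be mimicked without Firebase at all. A minimal sketch with a hypothetical in-memory ref (the `FakeRef` class is illustrative, not part of any API): writes go to the "remote" side first, and local state only updates via the change callback.

```javascript
// Hypothetical stand-in for a Firebase ref: set() writes to the "remote"
// value and echoes the change back to every 'value' listener.
class FakeRef {
  constructor() { this.value = null; this.listeners = []; }
  on(event, cb) { if (event === 'value') this.listeners.push(cb); }
  set(value) {
    this.value = value;                        // remote write first...
    this.listeners.forEach(cb => cb(value));   // ...then listeners update local state
  }
}

const local = { title: null };   // local (Ractive-style) state
const ref = new FakeRef();
ref.on('value', v => { local.title = v.title; }); // local state follows remote

ref.set({ title: 'hello' });     // never assign local.title directly
console.log(local.title);        // "hello"
```

Because the same callback fires whether the write came from this client or another one, offline and multi-user updates take the same code path.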
##Security

Write new users, do not update them, but allow deletes.

    // we can write as long as old data or new data does not exist,
    // in other words, if this is a delete or a create, but not an update
    ".write": "!data.exists() || !newData.exists()"

Accessing dynamic paths in the rules can be done using a `$` prefix. This serves as a wild card and stores the value of that key for use inside the rules declarations:

    {
      "rules": {
        "rooms": {
          // this rule applies to any child of /rooms/; the key for each room id
          // is stored inside the $room_id variable for reference
          "$room_id": {
            "topic": {
              // the room's topic can be changed if the room id has "public" in it
              ".write": "$room_id.contains('public')"
            }
          }
        }
      }
    }

[User-based rules](https://www.firebase.com/docs/web/guide/user-security.html).

Use `uid` from Simple Login, which is a string ID guaranteed to be unique across all providers.

Grant write access for this user:

    {
      "rules": {
        "users": {
          "$user_id": {
            // grants write access to the owner of this user account
            // whose uid must exactly match the key ($user_id)
            ".write": "$user_id === auth.uid",

            "email": {
              // an email is only allowed in the profile if it matches
              // the auth token's email account (for Google or password auth)
              ".validate": "newData.val() === auth.email"
            }
          }
        }
      }
    }
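The create-or-delete rule above can be read as a plain predicate. A sketch in JavaScript, with `null` standing in for "does not exist" (the function name is illustrative):

```javascript
// Mirrors ".write": "!data.exists() || !newData.exists()".
// data    — the value currently stored (null if absent)
// newData — the value being written   (null if this is a delete)
function writeAllowed(data, newData) {
  return data === null || newData === null;
}

console.log(writeAllowed(null, { name: 'Alan' }));           // true  — create
console.log(writeAllowed({ name: 'Alan' }, null));           // true  — delete
console.log(writeAllowed({ name: 'Alan' }, { name: 'Al' })); // false — update
```

This is exactly the "write new users, do not update them, but allow deletes" policy: the only rejected case is when both sides exist.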
We want repos to have a 1-to-many mapping to users. This way changes in one get propagated to the others. The issue is that users may be kicked from a project, in which case they can't see the cached stats for a repo.

We can get [repositories](https://developer.github.com/v3/repos/) for a user, but we have to get orgs too and get the repos there again.

###Getting latest repo changes

Only users that have a `user` timestamp on repos < 30m (our config) can receive updates from repos. Otherwise we try to fetch the latest permissions from GitHub with an x minute/second retry.

We get the latest data from GitHub if our data is > 30s old (user configured). Then we broadcast the latest changes to all other users (including us), updating the `age` timestamp on the repo. Receiving updates resets the user-set timeout.

Since we do not have control over GitHub repos, we need to take care of all situations that can arise:

1. Repo gives us a 404 (does not exist or we don't have perms): remove the user from the `repo`.
1. Repo gives us success: add the user to the `repo`; trigger a poll if needed to fetch the latest data.
1. GitHub times out: set a system `status` message to all.
1. We run out of requests we can make: show a message to the user, similar to a GitHub timeout but only to that one specific user.

[GitHub shows a 404](https://developer.github.com/v3/troubleshooting/#why-am-i-getting-a-404-error-on-a-repository-that-exists) when we don't have access OR the repo does not exist.

Keep track of the last update to a repo so we can clear old projects (later, as needed).

Only use the repo name when we are adding the user to the repo; from there on use the repo `id`, which will be preserved even if the repo is renamed. But the [milestones API](https://developer.github.com/v3/issues/milestones/) does not use the `id` :(, in which case we would show a 404 and let the user delete this repo and add a new one. Alternatively, try to fetch the new repo name from GitHub by querying for the repo by its `id`:

    GET /repositories/:id

When fetching the issues, we can constrain on a `milestone` and `state`.

**Vulnerability**: if we share repos between users, one of them can write whatever change she wants and *spoil* the chart for others. Until we fix this, let us have a 1 repo to 1 user mapping.
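The staleness rules above (receive updates only while the timestamp is fresh, refetch once the data crosses a threshold) reduce to a small time comparison. A sketch with illustrative names and the 30 s example threshold from the notes:

```javascript
// Gate a GitHub fetch on the repo's `age` timestamp: only hit the API
// when the cached copy is older than the configured threshold.
function shouldFetch(ageMs, nowMs, thresholdMs) {
  return nowMs - ageMs > thresholdMs;
}

const now = Date.now();
console.log(shouldFetch(now - 60e3, now, 30e3)); // true  — a minute old, refetch
console.log(shouldFetch(now - 10e3, now, 30e3)); // false — still fresh, use cache
```

Whoever fetches then writes the new `age` back, which resets the clock for every subscriber at once.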
##Design

###Adding a new user

- [ ] we get a `user` object from GH
- [ ] get a list of repos from FB by asking for our `user` root
- [ ] *a*: the user is not there, so let us create this root object
- [ ] *b*: the user is there, so we get back a list of repos

###Adding a new repo

- [ ] make a request to GH fetching a repo by `user/repo`
- [ ] *a*: GH gives us a 404 - show a message to the user
- [ ] *b1*: we get back a repo object, so write it into our `user` root as a `set()` operation (overriding any existing entry if it exists)
- [ ] *b2*: in the client, register our repo to receive updates from FB; since it is new, this triggers a fetch from GH immediately

Have the following [script](http://www.google.com/url?q=http%3A%2F%2Fjsfiddle.net%2Fkatowulf%2F5ESSp%2F&sa=D&sntz=1&usg=AFQjCNGCBxXSIqExOhOjtjSExWsrwmN8cQ) check that `private` repos are allowed:

    "$user": {
      repos: {
        "$private": {
          ".validate": "($private == false) || subscribers.$user != null"
        }
      }
    }
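The add-repo checklist above boils down to one branch on the GitHub response. A sketch with a stubbed, synchronous lookup (the `gh` stub, `addRepo` name and error text are illustrative, not real API calls):

```javascript
// Branch on the stubbed GitHub response: 404-style error shows a message,
// success writes the repo into the user's store (the set() step).
function addRepo(gh, fullName, store, cb) {
  gh.getRepo(fullName, (err, repo) => {
    if (err) return cb('Repo not found or no access'); // a: show a message
    store[fullName] = repo;                            // b1: set() into user root
    cb(null, repo);                                    // b2: caller subscribes & fetches
  });
}

// Stub: only one known repo, everything else 404s.
const gh = { getRepo: (name, cb) =>
  name === 'radekstepan/burnchart' ? cb(null, { id: 123 }) : cb('404') };
const store = {};

addRepo(gh, 'radekstepan/burnchart', store, (err, repo) => {
  console.log(err, repo.id); // null 123
});
addRepo(gh, 'nope/nope', store, (err) => {
  console.log(err); // "Repo not found or no access"
});
```

Note the 404 branch deliberately writes nothing, matching the rule that only confirmed repos ever reach the `user` root.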
###Updating a repo

- [ ] listen for our `user`, `repo` changes from FB, which actually render the new data
- [ ] our local repo object has `age` information; if it reaches a threshold, trigger a fetch from GH
- [ ] *a*: GH gives us a 404 - show a message to the user with the last `state` of the repo, e.g. last success 5 minutes ago, and keep showing the *old* data if any
- [ ] *b*: GH gives us data; make an `update()` on FB saying `state` is `null` (OK), updating the `age` to the time now

###Deleting a repo

- [ ] remove our `repo` under the `user`, no questions asked; all subscribers are switched off and views disposed of

###Deleting a user

- [ ] execute a `remove()` in FB if our tokens match for a user; this will remove all repos too

###Upgrading an account to private repos

Private repos (extra `scope` in FB login) are part of a paid plan. We need to recognize that a user has an active paid account with us before using the extended scope.

GH repositories have a `private` flag.

Since we do not *trust* users, it is me who needs to be upgrading users; at the same time it needs to be automatic.

We should not kill a user if they are no longer paid up, maybe they just got behind on a payment; just disable the latest data from private repos.

Set the private scope on all auth and put the burden on me to prove who has paid for an account or not, since someone could send a request to FB saying that a repo is public when it is not.

I can run a script once in a while to see whose repo returns a 404 when it is set as `private = false`; the burden of proof is on me.

Using a free instance of [IronWorker](http://dev.iron.io/worker/reference/environment/#maximum_run_time_per_worker) and assuming a 5s runtime each time gives us a poll every 6 minutes.

[Zapier](https://zapier.com/zapbook/firebase/stripe/) would poll every 15 minutes but already integrates Stripe and FB.

Because security rules cannot override existing rules, we need to separate the table of subscribers from saving the info on the user herself.

People buy subscriptions that extend their expiry date. This expiry date is calculated and set by the worker, which adds together all subscriptions to come up with an *end date*.

One can use `Firebase.ServerValue.TIMESTAMP` for accurate timestamping.

- [ ] fetch updates for a `private` repo only if our user has a `plan` flag set to `business` or whatever
- [ ] use a JS library to allow Stripe payment processing; people submit their card details and we get a Stripe `token` back. Save this token on FB under the `payments/processing/user` collection (*dirty*).
- [ ] have a worker process `payments/processing/user` every 6 minutes or faster via IronWorker; once processed, move the payment into the `payments/processed/user` collection, which is writable only with our admin token and is read-only for the user
- [ ] run an extra worker to check for repos that return a 404 when the user is on an `open-source` plan; this is to find cheaters
- [ ] run an extra worker that checks the `business` plans and whether we have `payments` for these or not
- [ ] show the user a list of her subscription purchases that shows the state of the processing as workers go through them. She needs to see a due date, so assume all purchases went through and do a date calculation on the client.
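The worker's *end date* calculation described above (adding all subscriptions together) can be sketched as plain date arithmetic; the `endDate` function and day-based subscription shape are illustrative assumptions:

```javascript
// A worker derives the account's end date by summing subscription
// lengths on top of the start date; day-based math for the sketch.
const DAY = 24 * 60 * 60 * 1000;

function endDate(startMs, subscriptions) {
  // each subscription contributes its length in days to the total
  const totalDays = subscriptions.reduce((sum, s) => sum + s.days, 0);
  return startMs + totalDays * DAY;
}

const start = Date.UTC(2014, 0, 1);
const end = endDate(start, [ { days: 30 }, { days: 30 } ]);
console.log((end - start) / DAY); // 60
```

Doing the same calculation on the client (assuming all purchases went through) gives the user an immediate due date while the worker catches up.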
The following [approach](http://stackoverflow.com/a/21220875/105707) will allow write access to certain paths by a worker:

    var FirebaseTokenGenerator = require("firebase-token-generator");
    var tokenGenerator = new FirebaseTokenGenerator(YOUR_FIREBASE_SECRET);
    var token = tokenGenerator.createToken({ 'isWorker': true }, { 'expires': 0 });

    {
      "rules": {
        ".read": "auth.isWorker === true"
      }
    }

##Components Architecture

1. **Views** (components) orchestrate user input; this could be coming from browser events but also 3rd-party data sources like GitHub.
1. Ractive **Models** communicate among themselves via a Mediator and are observed by Views.
1. The **Persistence** layer has modules that communicate with `Firebase` and `localForage` to persist Model data in the browser or in a remote db.

If someone stops using the app, send them an email asking for a good time to call so I can make things right. They would get 3 months' usage as well.
@@ -1,5 +1,7 @@

#Tasks to do

- [ ] watch CSS too
- [ ] add some product screenshots
- [ ] create notes about how users of the original app can upgrade to burnchart
- [ ] clean up docs; track them on git or using the Assembly system?
- [ ] rename the repo to burnchart
@@ -13,7 +15,7 @@

##Next Release

- [ ] http://burnchart.io#rails I would expect it to list all the projects for that owner so I can select one of them (Ryan)
- [ ] http://burnchart.io#rails I would expect it to list all the projects for that owner so I can select one of them (Ryan); we could show a list of available project names with their `description`, `private` flag and `has_issues`, greying the project out if no issues are found

##Backlog
@@ -4,8 +4,6 @@ Ractive = require 'ractive'

    require 'ractive-transitions-fade'
    require 'ractive-ractive'

    # Lodash mixins.
    require './utils/mixins.coffee'
    # Will load projects from localStorage.
    require './models/projects.coffee'
@@ -8,9 +8,16 @@ module.exports =

    # Fetch issues for a milestone.
    fetchAll: (repo, cb) ->
      # Calculate size of either open or closed issues.
      # Modifies issues by ref.
      calcSize = (list, cb) ->
      # For each `open` and `closed` issues in parallel.
      async.parallel [
        _.partial async.waterfall, [ _.partial(oneStatus, repo, 'open'), calcSize ]
        _.partial async.waterfall, [ _.partial(oneStatus, repo, 'closed'), calcSize ]
      ], (err, [ open, closed ]) ->
        cb err, { open, closed }

    # Calculate size of either open or closed issues.
    # Modifies issues by ref.
    calcSize = (list, cb) ->
      switch config.data.chart.points
        when 'ONE_SIZE'
          size = list.length
@@ -42,8 +49,8 @@ module.exports =

    cb null, { list, size }

    # For each state...
    oneStatus = (state, cb) ->
    # For each state...
    oneStatus = (repo, state, cb) ->
      # Concat them here.
      results = []
@@ -60,10 +67,3 @@ module.exports =

    return cb null, results if data.length < 100
    # Fetch the next page then.
    fetchPage page + 1

    # For each `open` and `closed` issues in parallel.
    async.parallel [
      _.partial async.waterfall, [ _.partial(oneStatus, 'open'), calcSize ]
      _.partial async.waterfall, [ _.partial(oneStatus, 'closed'), calcSize ]
    ], (err, [ open, closed ]) ->
      cb err, { open, closed }
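The pagination logic in this hunk (a page shorter than 100 results ends the walk, a full page triggers the next fetch) can be sketched in JavaScript with a stubbed page source; the `fetchAllPages` name and stub are illustrative:

```javascript
// Page through results 100 at a time: a short page ends the walk,
// a full page means there may be more, so fetch the next one.
function fetchAllPages(fetchPage, cb) {
  const results = [];
  (function next(page) {
    fetchPage(page, (err, data) => {
      if (err) return cb(err);
      results.push(...data);
      if (data.length < 100) return cb(null, results); // last page reached
      next(page + 1);                                  // full page: keep going
    });
  })(1);
}

// Stub: 100 items on page 1, a single item on page 2.
const pages = { 1: Array.from({ length: 100 }, (_, i) => i), 2: [ 100 ] };
fetchAllPages((p, cb) => cb(null, pages[p] || []), (err, all) => {
  console.log(all.length); // 101
});
```

This mirrors the `return cb null, results if data.length < 100` / `fetchPage page + 1` pair above, and explains why exactly 100 results on a page costs one extra (empty) request.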
@@ -1,6 +1,9 @@

    _ = require 'lodash'
    superagent = require 'superagent'

    # Lodash mixins.
    require '../../utils/mixins.coffee'

    user = require '../../models/user.coffee'

    # Custom JSON parser.
@@ -79,7 +82,7 @@ request = ({ protocol, host, path, query, headers }, cb) ->

    q = if query then '?' + ( "#{k}=#{v}" for k, v of query ).join('&') else ''

    # The URI.
    req = superagent.get("#{protocol}://#{host}#{path}#{q}")
    req = superagent.get "#{protocol}://#{host}#{path}#{q}"
    # Add headers.
    ( req.set(k, v) for k, v of headers )
@@ -87,7 +90,7 @@ request = ({ protocol, host, path, query, headers }, cb) ->

    timeout = setTimeout ->
      exited = yes
      cb 'Request has timed out'
    , 1e4 # give us 10s
    , 5e3 # give us 5s

    # Send.
    req.end (err, data) ->
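The timeout change above races a timer against the request; the `exited` flag keeps the loser from calling back a second time. A sketch of the same pattern in JavaScript (the `withTimeout` helper is illustrative; 50 ms stands in for the 5 s production value):

```javascript
// Race a request against a timer: whichever finishes first wins, and
// the `exited` flag stops the loser from invoking the callback again.
function withTimeout(run, ms, cb) {
  let exited = false;
  const timer = setTimeout(() => {
    exited = true;
    cb('Request has timed out');
  }, ms);
  run((err, data) => {
    if (exited) return;    // too late, the timeout already fired
    clearTimeout(timer);
    cb(err, data);
  });
}

// A request that responds immediately beats the 50 ms timer.
let result;
withTimeout(done => done(null, 'ok'), 50, (err, data) => { result = data; });
console.log(result); // "ok"
```

Without the flag, a slow response arriving after the timeout would invoke the callback twice, which is the classic bug this guard exists to prevent.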
@@ -0,0 +1,233 @@

    proxy  = do require('proxyquire').noCallThru
    assert = require 'assert'
    path   = require 'path'

    request = {}

    issues = proxy path.resolve(__dirname, '../src/modules/github/issues.coffee'),
      './request.coffee': request

    config = require '../src/models/config.coffee'

    repo = { 'owner': 'radekstepan', 'name': 'burnchart', 'milestone': 1 }

    module.exports =

      'issues - all empty': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          cb null, []

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 2
          assert.equal open.size, 0
          assert.equal closed.size, 0
          do done

      'issues - open empty': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          cb null, if called is 1 then [] else [
            { number: 1 }
          ]

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 2
          assert.equal open.size, 0
          assert.equal open.list.length, 0
          assert.equal closed.size, 1
          assert.equal closed.list.length, 1
          do done

      'issues - closed empty': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          cb null, if called is 2 then [] else [
            { number: 1 }
          ]

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 2
          assert.equal open.size, 1
          assert.equal closed.size, 0
          do done

      'issues - both not empty': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          cb null, [ { number: 1 } ]

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 2
          assert.equal open.size, 1
          assert.equal closed.size, 1
          do done
      'issues - 99 results on a page': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          cb null, ( { number: i } for i in [ 0...99 ] )

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 2
          assert.equal open.size, 99
          assert.equal closed.size, 99
          do done

      'issues - 100 results on a page': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          assert opts.page in [ 1, 2 ]
          cb null, if opts.page is 1 then ( { number: i } for i in [ 0...100 ] ) else []

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 4
          assert.equal open.size, 100
          assert.equal closed.size, 100
          do done

      'issues - 101 total results': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          assert opts.page in [ 1, 2 ]
          cb null, if opts.page is 1
            ( { number: i } for i in [ 0...100 ] )
          else
            [ { number: 100 } ]

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 4
          assert.equal open.size, 101
          assert.equal closed.size, 101
          assert.deepEqual open.list[100], { number: 100, size: 1 }
          assert.deepEqual closed.list[100], { number: 100, size: 1 }
          do done

      'issues - 201 total results': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          assert opts.page in [ 1, 2, 3 ]
          cb null, if opts.page in [ 1, 2 ]
            ( { number: i } for i in [ (h = 100 * (opts.page - 1))...h + 100 ] )
          else
            [ { number: 200 } ]

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal called, 6
          assert.equal open.size, 201
          assert.equal closed.size, 201
          for { list } in [ open, closed ]
            for j in [ 100, 200 ]
              assert.deepEqual list[j], { number: j, size: 1 }
          do done
      'issues - get all when not found': (done) ->
        called = 0
        request.allIssues = (repo, opts, cb) ->
          called += 1
          cb 'Not Found'

        config.set 'chart.points', 'ONE_SIZE'

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.equal err, 'Not Found'
          assert.equal called, 1
          do done

      'issues - size based on a label': (done) ->
        config.set 'chart.points', 'LABELS'

        request.allIssues = (repo, opts, cb) ->
          cb null, [
            { labels: [ { name: 'size 2' } ] }
            { labels: [ { name: 'size 10' } ] }
            { labels: [ { name: 'size A' } ] }
          ]

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal open.size, 12
          assert.equal open.list[0].size, 2
          do done

      'issues - filter when no labels': (done) ->
        config.set 'chart.points', 'LABELS'

        request.allIssues = (repo, opts, cb) ->
          cb null, [ { } ]

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal open.size, 0
          do done

      'issues - filter when empty labels': (done) ->
        config.set 'chart.points', 'LABELS'

        request.allIssues = (repo, opts, cb) ->
          cb null, [ { labels: [] } ]

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal open.size, 0
          do done

      'issues - filter when not matching regex': (done) ->
        config.set 'chart.points', 'LABELS'

        request.allIssues = (repo, opts, cb) ->
          cb null, [ { labels: [ { name: 'size 1A' } ] } ]

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal open.size, 0
          do done

      'issues - filter when multiple match the regex': (done) ->
        config.set 'chart.points', 'LABELS'

        request.allIssues = (repo, opts, cb) ->
          cb null, [
            { labels: [ { name: 'size 1' }, { name: 'size 6' } ] }
            { labels: [ { name: 'size really big' }, { name: 'size 4' } ] }
          ]

        issues.fetchAll repo, (err, { open, closed }) ->
          assert.ifError err
          assert.equal open.size, 11
          [ a, b ] = open.list
          assert.equal a.size, 7
          assert.equal b.size, 4
          do done
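The `LABELS` behaviour these tests assert can be summarized as a small parser: an issue's size is the sum of its `size N` labels, and issues with no matching label are filtered out. A JavaScript sketch of that behaviour (the exact regex is an assumption inferred from the tests, which reject `size A` and `size 1A`):

```javascript
// An issue's size is the sum of its `size N` labels; non-numeric
// variants like "size A" or "size 1A" contribute nothing.
const SIZE = /^size (\d+)$/;

function labelSize(issue) {
  return (issue.labels || []).reduce((sum, l) => {
    const m = SIZE.exec(l.name);
    return m ? sum + parseInt(m[1], 10) : sum;
  }, 0);
}

const issues = [
  { labels: [ { name: 'size 2' } ] },
  { labels: [ { name: 'size 10' } ] },
  { labels: [ { name: 'size A' } ] },                     // no digits: filtered
  { labels: [ { name: 'size 1A' } ] },                    // trailing junk: filtered
  { labels: [ { name: 'size 1' }, { name: 'size 6' } ] }  // multiple labels: summed
];
const sized = issues.map(labelSize).filter(s => s > 0);
console.log(sized); // [ 2, 10, 7 ]
```

Summing the result gives the milestone size the tests check (2 + 10 = 12 in the first labels test, 7 + 4 = 11 in the multi-label one).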
@ -1,196 +0,0 @@
|
|||
#!/usr/bin/env coffee
|
||||
proxy = do require('proxyquire').noCallThru
|
||||
assert = require 'assert'
|
||||
path = require 'path'
|
||||
|
||||
req = {}
|
||||
|
||||
regex = require path.resolve(__dirname, '../src/modules/regex.coffee')
|
||||
|
||||
issues = proxy path.resolve(__dirname, '../src/modules/issues.coffee'),
|
||||
'./request': req
|
||||
'./require':
|
||||
'_': require 'lodash'
|
||||
'superagent': null
|
||||
'd3': null
|
||||
'async': require 'async'
|
||||
'marked': null
|
||||
|
||||
repo = { 'milestone': { 'number': no } }
|
||||
|
||||
module.exports =
|
||||
|
||||
'issues - all empty': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
cb null, []
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 2
|
||||
assert.equal open.length, 0
|
||||
assert.equal closed.length, 0
|
||||
do done
|
||||
|
||||
'issues - open empty': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
cb null, if called is 1 then [] else [
|
||||
{ number: 1 }
|
||||
]
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 2
|
||||
assert.equal open.length, 0
|
||||
assert.equal closed.length, 1
|
||||
do done
|
||||
|
||||
'issues - closed empty': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
cb null, if called is 2 then [] else [
|
||||
{ number: 1 }
|
||||
]
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 2
|
||||
assert.equal open.length, 1
|
||||
assert.equal closed.length, 0
|
||||
do done
|
||||
|
||||
'issues - both not empty': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
cb null, [ { number: 1 } ]
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 2
|
||||
assert.equal open.length, 1
|
||||
assert.equal closed.length, 1
|
||||
do done
|
||||
|
||||
'issues - 99 results on a page': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
cb null, ( { number: i } for i in [ 0...99 ] )
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 2
|
||||
assert.equal open.length, 99
|
||||
assert.equal closed.length, 99
|
||||
do done
|
||||
|
||||
'issues - 100 results on a page': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
assert opts.page in [ 1, 2 ]
|
||||
cb null, if opts.page is 1 then ( { number: i } for i in [ 0...100 ] ) else []
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 4
|
||||
assert.equal open.length, 100
|
||||
assert.equal closed.length, 100
|
||||
do done
|
||||
|
||||
'issues - 101 total results': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
assert opts.page in [ 1, 2 ]
|
||||
cb null, if opts.page is 1
|
||||
( { number: i } for i in [ 0...100 ] )
|
||||
else
|
||||
[ { number: 100 } ]
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 4
|
||||
assert.equal open.length, 101
|
||||
assert.equal closed.length, 101
|
||||
assert.deepEqual open[100], { number: 100 }
|
||||
assert.deepEqual closed[100], { number: 100 }
|
||||
do done
|
||||
|
||||
'issues - 201 total results': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
assert opts.page in [ 1, 2, 3 ]
|
||||
cb null, if opts.page in [ 1, 2 ]
|
||||
( { number: i } for i in [ (h = 100 * (opts.page - 1))...h + 100 ] )
|
||||
else
|
||||
[ { number: 200 } ]
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.ifError err
|
||||
assert.equal called, 6
|
||||
assert.equal open.length, 201
|
||||
assert.equal closed.length, 201
|
||||
for i in [ open, closed ]
|
||||
for j in [ 100, 200 ]
|
||||
assert.deepEqual i[j], { number: j }
|
||||
do done
|
||||
|
||||
'issues - get all when not found': (done) ->
|
||||
called = 0
|
||||
req.all_issues = (repo, opts, cb) ->
|
||||
called += 1
|
||||
cb 'Not Found'
|
||||
|
||||
issues.get_all repo, (err, [ open, closed ]) ->
|
||||
assert.equal err, 'Not Found'
|
||||
assert.equal called, 1
|
||||
do done
|
||||
|
||||
'issues - filter on existing label regex': (done) ->
|
||||
issues.filter [ { labels: [ { name: 'size 15' } ] } ]
|
||||
, regex.size_label, (err, data) ->
|
||||
assert.ifError err
|
||||
assert.equal data.length, 1
|
||||
assert.equal data[0].size, 15
|
||||
do done
|
||||
|
||||
'issues - filter when no labels': (done) ->
|
||||
issues.filter [ { } ]
|
||||
, regex.size_label, (err, data) ->
|
||||
assert.ifError err
|
||||
assert.equal data.length, 0
|
||||
do done
|
||||
|
||||
'issues - filter when empty labels': (done) ->
|
||||
issues.filter [ { labels: [] } ]
|
||||
, regex.size_label, (err, data) ->
|
||||
assert.ifError err
|
||||
assert.equal data.length, 0
|
||||
do done
|
||||
|
||||
'issues - filter when not matching regex': (done) ->
|
||||
issues.filter [ { labels: [ { name: 'size 1A' } ] } ]
|
||||
, regex.size_label, (err, data) ->
|
||||
assert.ifError err
|
||||
assert.equal data.length, 0
|
||||
do done
|
||||
|
||||
'issues - filter when multiple match the regex': (done) ->
|
||||
issues.filter [
|
||||
{ labels: [ { name: 'size 1' }, { name: 'size 6' } ] }
|
||||
{ labels: [ { name: 'size really big' }, { name: 'size 4' } ] }
|
||||
]
|
||||
, regex.size_label, (err, data) ->
|
||||
assert.ifError err
|
||||
assert.equal data.length, 2
|
||||
[ a, b ] = data
|
||||
assert.equal a.size, 7
|
||||
assert.equal b.size, 4
|
||||
do done
|
|
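The deleted tests above pin down a label-size convention: an issue is kept only if at least one of its labels matches a "size &lt;number&gt;" pattern (so `size 1A` and `size really big` are rejected), and when several labels match, their sizes are summed per issue (`size 1` + `size 6` → 7). A rough JavaScript sketch of that behavior, for illustration only; the project's real logic lives in `src/modules/issues.coffee` and `regex.size_label`, whose exact pattern is not shown in this diff:

```javascript
// Assumed pattern: a label is a size label iff it is exactly "size <digits>".
const sizeLabel = /^size (\d+)$/;

// Keep issues with at least one matching label; sum matched sizes per issue.
function filterBySize(issues) {
  const out = [];
  for (const issue of issues) {
    let size = 0;
    let matched = false;
    for (const label of issue.labels || []) {
      const m = (label.name || '').match(sizeLabel);
      if (m) {
        matched = true;
        size += parseInt(m[1], 10);
      }
    }
    if (matched) out.push({ ...issue, size });
  }
  return out;
}
```

Run against the fixtures from the tests above, this yields two issues with sizes 7 and 4, and drops the `size 1A` issue.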
@@ -1,4 +1,3 @@
#!/usr/bin/env coffee
proxy = do require('proxyquire').noCallThru
assert = require 'assert'
path = require 'path'
@@ -23,86 +23,115 @@ class Sa
       cb null, @response
     , @timeout
 
+superagent = new Sa()
+
 # Proxy the superagent lib.
 request = proxy path.resolve(__dirname, '../src/modules/github/request.coffee'),
   '../vendor.coffee':
-    'superagent': new Sa()
+    'superagent': superagent
 
 # User is ready, make the requests.
 user = require '../src/models/user.coffee'
 user.set 'ready', yes
 
 module.exports =
 
   'request - all milestones (ok)': (done) ->
-    sa.response =
+    superagent.response =
       'statusType': 2
       'error': no
       'body': [ null ]
 
-    request.allMilestones {}, (err, data) ->
+    owner = 'radekstepan'
+    name = 'burnchart'
+
+    request.allMilestones { owner, name }, (err, data) ->
       assert.ifError err
-      assert.deepEqual sa.params,
-        'uri': 'undefined://undefined/repos/undefined/milestones?state=open&sort=due_date&direction=asc'
+      assert.deepEqual superagent.params,
+        'uri': 'https://api.github.com/repos/radekstepan/burnchart/milestones?state=open&sort=due_date&direction=asc'
         'Content-Type': 'application/json',
         'Accept': 'application/vnd.github.v3'
       assert.deepEqual data, [ null ]
       do done
 
   'request - one milestone (ok)': (done) ->
-    sa.response =
+    superagent.response =
       'statusType': 2
       'error': no
       'body': [ null ]
 
-    request.oneMilestone {}, 1, (err, data) ->
+    owner = 'radekstepan'
+    name = 'burnchart'
+    milestone = 1
+
+    request.oneMilestone { owner, name, milestone }, (err, data) ->
       assert.ifError err
-      assert.deepEqual sa.params,
-        'uri': 'undefined://undefined/repos/undefined/milestones/1?state=open&sort=due_date&direction=asc'
+      assert.deepEqual superagent.params,
+        'uri': 'https://api.github.com/repos/radekstepan/burnchart/milestones/1?state=open&sort=due_date&direction=asc'
         'Content-Type': 'application/json',
         'Accept': 'application/vnd.github.v3'
       assert.deepEqual data, [ null ]
       do done
 
   'request - one milestone (404)': (done) ->
-    sa.response =
+    superagent.response =
       'statusType': 4
       'error': Error "cannot GET undefined (404)"
       'body':
         'documentation_url': "http://developer.github.com/v3"
         'message': "Not Found"
 
-    request.oneMilestone {}, 9, (err) ->
+    owner = 'radekstepan'
+    name = 'burnchart'
+    milestone = 0
+
+    request.oneMilestone { owner, name, milestone }, (err) ->
       assert.equal err, 'Not Found'
       do done
 
   'request - one milestone (500)': (done) ->
-    sa.response =
+    superagent.response =
       'statusType': 5
       'error': Error "Error"
       'body': null
 
-    request.oneMilestone {}, 9, (err) ->
+    owner = 'radekstepan'
+    name = 'burnchart'
+    milestone = 0
+
+    request.oneMilestone { owner, name, milestone }, (err) ->
       assert.equal err, 'Error'
       do done
 
   'request - all issues (ok)': (done) ->
-    sa.response =
+    superagent.response =
       'statusType': 2
       'error': no
       'body': [ null ]
 
-    request.allIssues {}, {}, (err, data) ->
+    owner = 'radekstepan'
+    name = 'burnchart'
+    milestone = 0
+
+    request.allIssues { owner, name, milestone }, {}, (err, data) ->
       assert.ifError err
-      assert.deepEqual sa.params,
-        'uri': 'undefined://undefined/repos/undefined/issues?per_page=100'
+      assert.deepEqual superagent.params,
+        'uri': 'https://api.github.com/repos/radekstepan/burnchart/issues?milestone=0&per_page=100'
         'Content-Type': 'application/json',
         'Accept': 'application/vnd.github.v3'
       assert.deepEqual data, [ null ]
       do done
 
   'request - timeout': (done) ->
-    sa.timeout = 10001
-    sa.response =
+    superagent.timeout = 5001
+    superagent.response =
       'statusType': 2
       'error': no
       'body': [ null ]
 
-    request.allIssues {}, {}, (err) ->
+    owner = 'radekstepan'
+    name = 'burnchart'
+    milestone = 0
+
+    request.allIssues { owner, name, milestone }, {}, (err) ->
       assert.equal err, 'Request has timed out'
       do done
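The pagination tests in the deleted issues file ('100 results on a page', '101 total results', '201 total results') assert one contract: the client keeps requesting pages of 100 until a page comes back with fewer than 100 items, so two fetches (open and closed) of 101 results make 4 requests in total. A minimal JavaScript sketch of that contract, not the project's actual CoffeeScript implementation; `fetchPage` is a hypothetical stand-in for a single `all_issues` call:

```javascript
// Accumulate pages of up to 100 results; a full page may mean more
// results exist, so request the next page until a short page arrives.
function fetchAllPages(fetchPage, page = 1, acc = []) {
  const results = fetchPage(page); // one page of up to 100 items
  const all = acc.concat(results);
  return results.length < 100 ? all : fetchAllPages(fetchPage, page + 1, all);
}
```

With 101 total results this makes exactly two requests (a full page, then a short one), matching the `called, 4` assertion across the open and closed fetches; with 99 results it stops after one request per state.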