Summary: When `console.log` or a similar method is called, this will also call
the original method, which can be quite useful.
For example, under iOS, this will log to the Safari console debugger,
which has an expandable UI for inspecting objects, etc., and is also
just useful if you are using that as a REPL.
I don't believe this incurs a meaningful performance penalty unless
the console is open, but it would be easy to stick behind a flag
if that is a problem.
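A minimal sketch of the idea, assuming the console polyfill keeps a reference to the original method (the `emitToReactNativeLogger` hook below is purely illustrative):
```
// Illustrative stand-in for whatever forwards logs to the RN log pipeline.
function emitToReactNativeLogger(level, args) { /* ... */ }

['log', 'info', 'warn', 'error'].forEach(function(level) {
  var original = console[level];
  console[level] = function() {
    emitToReactNativeLogger(level, Array.prototype.slice.call(arguments));
    if (original) {
      original.apply(console, arguments); // also call the original method
    }
  };
});
```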
Closes https://github.com/facebook/react-native/pull/2486
Reviewed By: @svcscm
Differential Revision: D2472470
Pulled By: @vjeux
Summary: @public
We swallow errors like it's nobody's business :(
Having an error handler that `reject`s after the promise has already been resolved is a no-op. In Node, if there is no `error` event handler, the error is thrown.
So once we start listening and are ready to resolve the promise, we remove the error listener to make sure errors actually throw.
Finally, I made the `uncaughtError` handler log `error.stack` so we can get an idea of what's going on.
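Roughly the pattern, sketched here with a plain `net` socket (the real module's names differ):
```
var net = require('net');

// Reject only while connecting; once resolved, drop the 'error'
// listener so later errors throw instead of being swallowed.
function connectToServer(socketPath) {
  return new Promise(function(resolve, reject) {
    var sock = net.connect(socketPath);
    sock.on('error', reject);
    sock.once('connect', function() {
      sock.removeListener('error', reject); // rejecting now would be a no-op anyway
      resolve(sock);
    });
  });
}

// Log the stack, not just the error, so we can see what's going on.
process.on('uncaughtException', function(error) {
  console.error(error.stack);
});
```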
Reviewed By: @martinbigio
Differential Revision: D2468472
Summary: @public
I've noticed that the logs can sometimes be misleading: when an Activity ends, it doesn't log immediately, so a sync `console.log` shows up before it even though the Activity actually finished first.
Turns out we buffer the logs before writing them out to the console, and I don't see any reason for this. Looking at the `Activity` module, it's over-engineered. This diff makes logging synchronous and simplifies the module.
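Roughly the resulting behaviour (an illustrative sketch, not the actual `Activity` API):
```
// Write the event to the console the moment it ends, instead of
// queueing it and flushing later, so ordering matches reality.
function endEvent(event) {
  var durationMs = Date.now() - event.startTimeMs;
  console.log('[' + event.name + '] took ' + durationMs + 'ms');
}
```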
Reviewed By: @martinbigio
Differential Revision: D2467922
Summary: I think the packager should generate the same output on different platforms if possible, so it should replace '\\' with '/' in module names on Windows.
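Something along these lines (a hedged sketch, assuming module names are derived from file paths):
```
var path = require('path');

// Normalize Windows path separators so module names (and therefore
// the generated bundle) are identical across platforms.
function normalizeModuleName(filePath) {
  return path.sep === '\\' ? filePath.replace(/\\/g, '/') : filePath;
}
```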
Closes https://github.com/facebook/react-native/pull/2813
Reviewed By: @svcscm
Differential Revision: D2458634
Pulled By: @martinbigio
Summary: @public
The server dies after 30 seconds if it has no jobs in its queue. The problem is that the jobs counter gets decreased before the bytes are returned to the client. As a consequence, the server may die while it's returning the bytes to the client, or just after it has finished returning them.
To avoid both issues, let's move the counter decrease a few lines down and bump the timer, to make sure we have time to fully write the bytes to the socket and let the client close the connection before the server dies.
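A sketch of the reordering (the counter and timer names here are illustrative):
```
var pendingJobs = 0;
var idleTimer = null;
var IDLE_TIMEOUT_MS = 30 * 1000; // illustrative; the diff bumps this

function resetIdleTimer(server) {
  clearTimeout(idleTimer);
  idleTimer = setTimeout(function() {
    if (pendingJobs === 0) server.close(); // only die when nothing is in flight
  }, IDLE_TIMEOUT_MS);
}

function handleJob(server, connection, buildBundle) {
  pendingJobs++;
  return buildBundle().then(function(bytes) {
    connection.write(bytes); // hand the bytes to the socket first...
    pendingJobs--;           // ...then mark the job as done
    resetIdleTimer(server);
  });
}
```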
Reviewed By: @vjeux
Differential Revision: D2445264
Summary: @public
Invoking an extra promise caused failures in the promise-based tests. This fixes them.
Reviewed By: @vjeux
Differential Revision: D2432431
Summary: how did this ever work?
All build jobs must pass in the platform argument.
This also turns the "platform" argument into a required one.
I added a task to infer the platform argument from the filename here: t8306875
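A minimal sketch of the check (only the option name comes from the summary; everything else is illustrative):
```
// Fail fast when the platform option is missing instead of silently
// building a bundle for the wrong (or no) platform.
function validateBuildOptions(options) {
  if (!options.platform) {
    throw new Error('Expected a "platform" option, e.g. "ios" or "android"');
  }
  return options;
}
```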
Reviewed By: @martinbigio
Differential Revision: D2425114
Summary: When we updated joi, its error message changed. I removed the exact message content from the check to prevent similar failures in the future.
Reviewed By: @amasad
Differential Revision: D2424048
Summary: @public
The profiler helper shouldn't live inside the packager itself; move
it to the packager.js file with the other middlewares.
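Roughly what that looks like next to the other middlewares (a sketch; the real handler and route differ):
```
var connect = require('connect');

// The profiler endpoint becomes just another middleware in the chain
// that packager.js already builds.
function profilerMiddleware(req, res, next) {
  if (req.url !== '/profile') {
    return next(); // not ours, pass the request along
  }
  // ...record or dump profiling data here (omitted)...
  res.end('OK');
}

var app = connect()
  .use(profilerMiddleware)
  .use(function(req, res) {
    res.statusCode = 404;
    res.end();
  });
```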
Reviewed By: @martinbigio
Differential Revision: D2424878
Summary:
Problem:
1. When the server starts up, it only gives itself 30 seconds to live before receiving any connections/jobs
2. There is a startup cost with starting the server and handshaking
3. The server dies before the client has a chance to connect to it
Solution:
1. While the server should die pretty fast once it's done its work, we should have a longer timeout for starting it up (sketched below)
2. I also added accompanying server logs with client connection errors
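A sketch of the two-phase timeout, with illustrative names and numbers:
```
var net = require('net');

var STARTUP_TIMEOUT_MS = 2 * 60 * 1000; // generous: covers spawn + handshake
var sawConnection = false;

var server = net.createServer(function(connection) {
  sawConnection = true;
  // ...handle jobs, then fall back to the usual short idle timeout...
});

setTimeout(function() {
  if (!sawConnection) {
    // Server-side log to pair with the client's connection errors.
    console.error('No client connected during startup, shutting down');
    server.close();
  }
}, STARTUP_TIMEOUT_MS);
```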
Summary:
We don't currently support platform extensions for asset modules.
This adds support for them:
```
require('./a.png');
```
Will require 'a.ios.png' if it exists and 'a.png' if it doesn't.
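The lookup, roughly (a sketch, not the resolver's actual code):
```
var fs = require('fs');
var path = require('path');

// Prefer a platform-specific variant of the asset when one exists,
// e.g. './a.png' -> './a.ios.png' for an iOS build.
function resolveAssetPath(assetPath, platform) {
  var ext = path.extname(assetPath); // '.png'
  var platformPath = assetPath.slice(0, -ext.length) + '.' + platform + ext;
  return fs.existsSync(platformPath) ? platformPath : assetPath;
}
```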
Summary:
Saw an issue with a build because of an ENOENT error: https://fb.facebook.com/groups/716936458354972/permalink/923628747685741/
My hypothesis:
1. We issue a ping to the socket (in SocketInterface/index.js) that decides if the available socket is alive
2. We see that it's alive, but by the time we actually connect to it, the server has already died
Solution:
1. The server shouldn't die as long as there are clients connected to it (currently it only stays alive as long as there are jobs); see the sketch after this list
2. The "ping" should only disconnect once the client is connected
3. Finally, have a better error message than ENOENT
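A sketch of point 1, keeping the server alive while clients are attached (names are illustrative):
```
var net = require('net');

var connectedClients = 0;
var IDLE_TIMEOUT_MS = 30 * 1000;

var server = net.createServer(function(socket) {
  connectedClients++;
  socket.on('close', function() {
    connectedClients--;
    scheduleShutdownCheck();
  });
});

function scheduleShutdownCheck() {
  setTimeout(function() {
    // A connected client may be about to send work, so only consider
    // dying when no client is attached, not merely when the queue is empty.
    if (connectedClients === 0) {
      server.close();
    }
  }, IDLE_TIMEOUT_MS);
}
```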
Summary:
Source map URLs were generated as just the pathname (no options), which meant they generated the source map for the wrong bundle.
Even worse, there is a race condition when multiple requests for the same bundle have different platform arguments (in this case one could be 'ios' and the other undefined). The fix for this will come later as it's more involved -- we'll need to refactor the dependency resolver to have per-request state.
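The first part amounts to roughly this (illustrative; the actual URL rewriting lives in the server):
```
// Derive the source map URL from the full bundle request, keeping its
// query string (platform, dev, minify, ...) instead of just the pathname.
function getSourceMapUrl(bundleUrl) {
  // e.g. '/index.ios.bundle?platform=ios&dev=true'
  //   -> '/index.ios.map?platform=ios&dev=true'
  return bundleUrl.replace('.bundle', '.map');
}
```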
Summary:
Fix the failing test that matched the exact error string so that it matches using `contains` instead.
I was under the impression that jest tests were running in CI -- turns out not yet.
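The change amounts to something like this (the test name, the `buildBundle` helper, and the quoted message are all illustrative):
```
it('requires a platform option', function() {
  return buildBundle({}).then(
    function() { throw new Error('expected validation to fail'); },
    function(error) {
      // Before: expect(error.message).toEqual('"platform" is required'); // brittle exact match
      // After: assert on a stable fragment that survives joi upgrades.
      expect(error.message).toContain('platform');
    }
  );
});
```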
Summary:
A few potential races to fix:
1. Multiple clients may be racing to delete a zombie socket
2. Servers that should die because other servers are already listening take the socket down with them (move the `process.on('exit', ...)` code to after the server is listening)
3. Servers that are redundant should die immediately (points 2 and 3 are sketched below)
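A sketch of points 2 and 3 (the socket path and structure are illustrative):
```
var fs = require('fs');
var net = require('net');

var SOCKET_PATH = '/tmp/react-packager.sock'; // illustrative

var server = net.createServer(function(socket) { /* ...handle jobs... */ });

server.on('error', function(error) {
  if (error.code === 'EADDRINUSE') {
    // Another server already owns the socket: this one is redundant,
    // so die immediately, and without the cleanup handler installed.
    process.exit(0);
  }
});

server.listen(SOCKET_PATH, function() {
  // Only the server that actually won the socket removes it on exit,
  // so a losing server can't delete the socket out from under the winner.
  process.on('exit', function() {
    try { fs.unlinkSync(SOCKET_PATH); } catch (e) {}
  });
});
```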