Previously, a condition checked whether the blockchain config was disabled and, if so, skipped generating any provider code, which is also where `__mainContext` was defined. This was changed to generate the `__mainContext` code first and then, if the blockchain is disabled, return the already generated code.
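In sketch form, the reordering looks something like this (the helper names are stand-ins, not Embark's actual internals):

```js
// Stand-in helpers; not Embark's actual function names.
function generateMainContext() {
  return 'var __mainContext = (typeof self === "undefined" ? this : self);\n';
}

function generateProviderCode(blockchainConfig) {
  return `/* web3 provider setup for ${blockchainConfig.rpcHost} */\n`;
}

function generateProvider(blockchainConfig) {
  // Always emit the __mainContext definition first...
  let code = generateMainContext();

  // ...then, if the blockchain is disabled, return the code generated so
  // far instead of returning nothing.
  if (!blockchainConfig.enabled) {
    return code;
  }

  return code + generateProviderCode(blockchainConfig);
}
```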
Changed the `online` event handler to be registered with `once`, and made it rebind every time the node goes offline.
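Roughly, the rebinding pattern (the event bus and handler below are assumed stand-ins for Embark's internals):

```js
const { EventEmitter } = require('events');

// Stand-ins for Embark's internal event bus and reconnect logic.
const events = new EventEmitter();
const onNodeOnline = () => { /* restart web3 provider, recompile, redeploy */ };

function bindOnlineHandler() {
  // `once` fires a single time, so duplicate handlers do not pile up
  // across repeated node restarts.
  events.once('online', onNodeOnline);
}

bindOnlineHandler();
// Rebind the one-shot handler whenever the node goes down, so the next
// reconnect is handled exactly once.
events.on('offline', bindOnlineHandler);
```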
The above changes handle the case where:
1) `embark run` runs and starts geth.
2) geth is killed manually.
3) `embark blockchain` is run in a separate process to restart geth.
4) The `embark run` process detects this change, restarts the web3 provider, and recompiles/deploys/builds.
Every time `embark blockchain` is restarted, another error is appended and all of them are re-emitted from `eth-block-tracker`. This is a bug, but I couldn't figure out where it originates. The downside is that if, for example, `embark blockchain` is restarted 4 times, 4 errors will be emitted from `eth-block-tracker`. Because of this, errors emitted from `eth-block-tracker` have been reduced to trace-level logging to avoid clogging the logs.
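The downgrade is along these lines (a sketch; `provider` and `logger` are stand-ins for the actual objects):

```js
const { EventEmitter } = require('events');

// Stand-ins: `provider` for the provider engine, `logger` for Embark's logger.
const provider = new EventEmitter();
const logger = { trace: (msg) => console.debug(msg) };

provider.on('error', (err) => {
  // eth-block-tracker re-emits one stale error per blockchain restart, so
  // these are logged at trace level instead of error level to avoid
  // clogging the dashboard logs.
  logger.trace(`eth-block-tracker: ${err.message}`);
});
```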
First case: run `embark run`, which starts a blockchain node, then manually kill the `geth` process. This would throw `{ [Error: connect ECONNREFUSED 127.0.0.1:8543] message: 'connect ECONNREFUSED 127.0.0.1:8543', code: -32603 }` and ruin the dashboard.
Second case: 1) run `embark blockchain`, 2) run `embark run`, 3) kill `embark blockchain`. This throws the same error, `{ [Error: connect ECONNREFUSED 127.0.0.1:8543] message: 'connect ECONNREFUSED 127.0.0.1:8543', code: -32603 }`, and ruins the dashboard.
The first case was solved by having the child blockchain process that spawns geth listen for geth's exit and then kill itself.
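A minimal sketch of that fix, assuming the child process spawns geth with `child_process.spawn` (the spawn arguments here are illustrative only):

```js
const { spawn } = require('child_process');

// Spawn arguments are illustrative, not Embark's actual geth flags.
const geth = spawn('geth', ['--rpc', '--rpcport', '8543']);

geth.on('exit', (code) => {
  // If geth dies (e.g. it was killed manually), kill this wrapper process
  // too, so the parent `embark run` process can see the node go down.
  process.exit(code === null ? 1 : code);
});
```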
The second case required updating `eth-block-tracker` to v4.0.1 inside `embark-web3-provider-engine`. v4.0.1 was a major version update and introduced breaking changes. Those changes were handled inside `embark-web3-provider-engine`, covered in the **blocker** PR https://github.com/jrainville/provider-engine/pull/1.
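For illustration, this is the kind of API change the upgrade involved, based on the eth-block-tracker v4 changelog (exact usage inside `embark-web3-provider-engine` may differ):

```js
const ProviderEngine = require('web3-provider-engine');
const BlockTracker = require('eth-block-tracker'); // v4.0.1

// Stand-in for the engine's own web3-style provider.
const provider = new ProviderEngine();
const blockTracker = new BlockTracker({ provider });

// Breaking change: in v4 the 'latest' event emits a hex block number
// string instead of the full block object v3 provided, so consumers must
// fetch block data themselves if they need it.
blockTracker.on('latest', (blockNumber) => {
  console.log('new block:', blockNumber);
});
```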