react-native/React/Base/RCTEventDispatcher.m


/**
* Copyright (c) 2015-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the BSD-style license found in the
* LICENSE file in the root directory of this source tree. An additional grant
* of patent rights can be found in the PATENTS file in the same directory.
*/
#import "RCTEventDispatcher.h"
#import "RCTAssert.h"
#import "RCTBridge.h"
#import "RCTBridge+Private.h"
#import "RCTUtils.h"
#import "RCTProfile.h"
const NSInteger RCTTextUpdateLagWarningThreshold = 3;
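
// Normalizes JS-style event names to the internal "top"-prefixed form used by
// RCTEventEmitter, e.g. "onChange" -> "topChange" and "change" -> "topChange";
// names that already start with "top" are returned unchanged.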
NSString *RCTNormalizeInputEventName(NSString *eventName)
{
if ([eventName hasPrefix:@"on"]) {
eventName = [eventName stringByReplacingCharactersInRange:(NSRange){0, 2} withString:@"top"];
} else if (![eventName hasPrefix:@"top"]) {
eventName = [[@"top" stringByAppendingString:[eventName substringToIndex:1].uppercaseString]
stringByAppendingString:[eventName substringFromIndex:1]];
}
return eventName;
}
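
// Builds the coalescing ID for an event by packing the view tag, 16 bits of the
// event name's hash and the event's coalescingKey into a single number, so that
// events which may coalesce with each other map to the same ID.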
static NSNumber *RCTGetEventID(id<RCTEvent> event)
{
return @(
event.viewTag.intValue |
(((uint64_t)event.eventName.hash & 0xFFFF) << 32) |
(((uint64_t)event.coalescingKey) << 48)
);
}
@implementation RCTEventDispatcher
{
// Protects access to _events, _eventQueue and _eventsDispatchScheduled, which are filled on the main thread and consumed on the JS thread.
NSLock *_eventQueueLock;
// We keep this id -> event mapping so that we can coalesce events effectively.
NSMutableDictionary<NSNumber *, id<RCTEvent>> *_events;
// This array contains the ids of events in the order they arrived, so we can emit them to JS in that same order.
NSMutableArray<NSNumber *> *_eventQueue;
BOOL _eventsDispatchScheduled;
NSMutableArray<id<RCTEventDispatcherObserver>> *_observers;
NSLock *_observersLock;
}
@synthesize bridge = _bridge;
RCT_EXPORT_MODULE()
- (void)setBridge:(RCTBridge *)bridge
{
_bridge = bridge;
_events = [NSMutableDictionary new];
_eventQueue = [NSMutableArray new];
_eventQueueLock = [NSLock new];
_eventsDispatchScheduled = NO;
_observers = [NSMutableArray new];
_observersLock = [NSLock new];
}
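
// Emits an application-level event to JS via RCTNativeAppEventEmitter.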
- (void)sendAppEventWithName:(NSString *)name body:(id)body
{
[_bridge enqueueJSCall:@"RCTNativeAppEventEmitter"
method:@"emit"
args:body ? @[name, body] : @[name]
completion:NULL];
}
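
// Emits a device-level event to JS via RCTDeviceEventEmitter.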
- (void)sendDeviceEventWithName:(NSString *)name body:(id)body
{
[_bridge enqueueJSCall:@"RCTDeviceEventEmitter"
method:@"emit"
args:body ? @[name, body] : @[name]
completion:NULL];
}
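
// Emits a component input event to JS via RCTEventEmitter.receiveEvent. The body
// must contain a "target" entry holding the React tag of the originating view.
// A hypothetical call from a native component might look like:
//
//   [self.bridge.eventDispatcher sendInputEventWithName:@"change"
//                                                  body:@{@"target": reactTag,
//                                                         @"text": newText}];
//
// where reactTag and newText are illustrative local variables.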
- (void)sendInputEventWithName:(NSString *)name body:(NSDictionary *)body
{
if (RCT_DEBUG) {
RCTAssert([body[@"target"] isKindOfClass:[NSNumber class]],
@"Event body dictionary must include a 'target' property containing a React tag");
}
name = RCTNormalizeInputEventName(name);
[_bridge enqueueJSCall:@"RCTEventEmitter"
method:@"receiveEvent"
args:body ? @[body[@"target"], name, body] : @[body[@"target"], name]
completion:NULL];
}
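
// Sends a text-input event. The events array below is indexed by RCTTextEventType,
// and eventCount lets JS detect and discard out-of-date text values.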
- (void)sendTextEventWithType:(RCTTextEventType)type
reactTag:(NSNumber *)reactTag
text:(NSString *)text
key:(NSString *)key
eventCount:(NSInteger)eventCount
{
static NSString *events[] = {
@"focus",
@"blur",
@"change",
@"submitEditing",
@"endEditing",
@"keyPress"
};
NSMutableDictionary *body = [[NSMutableDictionary alloc] initWithDictionary:@{
@"eventCount": @(eventCount),
@"target": reactTag
}];
if (text) {
body[@"text"] = text;
}
if (key) {
if (key.length == 0) {
key = @"Backspace"; // backspace
} else {
switch ([key characterAtIndex:0]) {
case '\t':
key = @"Tab";
break;
case '\n':
key = @"Enter";
default:
break;
}
}
body[@"key"] = key;
}
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
[self sendInputEventWithName:events[type] body:body];
#pragma clang diagnostic pop
}
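
// Entry point for coalescable component events. Safe to call from any thread:
// the event is coalesced into the queue under _eventQueueLock, and at most one
// flush per batch is scheduled on the JS thread.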
- (void)sendEvent:(id<RCTEvent>)event
{
[_observersLock lock];
for (id<RCTEventDispatcherObserver> observer in _observers) {
[observer eventDispatcherWillDispatchEvent:event];
}
[_observersLock unlock];
[_eventQueueLock lock];
NSNumber *eventID = RCTGetEventID(event);
id<RCTEvent> previousEvent = _events[eventID];
if (previousEvent) {
RCTAssert([event canCoalesce], @"Got event %@ which cannot be coalesced, but has the same eventID %@ as the previous event %@", event, eventID, previousEvent);
event = [previousEvent coalesceWithEvent:event];
} else {
[_eventQueue addObject:eventID];
}
_events[eventID] = event;
BOOL scheduleEventsDispatch = NO;
if (!_eventsDispatchScheduled) {
_eventsDispatchScheduled = YES;
scheduleEventsDispatch = YES;
}
// We have to release the lock before dispatching the events block,
// since dispatchBlock: may execute it synchronously on the calling thread.
// (This happens when Chrome debugging is turned on.)
[_eventQueueLock unlock];
if (scheduleEventsDispatch) {
[_bridge dispatchBlock:^{
[self flushEventsQueue];
} queue:RCTJSThread];
}
}
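
// Observers are notified synchronously from sendEvent: on the calling thread,
// before the event is coalesced into the queue.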
- (void)addDispatchObserver:(id<RCTEventDispatcherObserver>)observer
{
[_observersLock lock];
[_observers addObject:observer];
[_observersLock unlock];
}
- (void)removeDispatchObserver:(id<RCTEventDispatcherObserver>)observer
{
[_observersLock lock];
[_observers removeObject:observer];
[_observersLock unlock];
}
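
// Immediately enqueues the JS call for a single event, bypassing coalescing;
// flushEventsQueue calls this for each queued event.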
- (void)dispatchEvent:(id<RCTEvent>)event
{
[_bridge enqueueJSCall:[[event class] moduleDotMethod] args:[event arguments]];
}
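
// methodQueue determines where the bridge dispatches this module's methods;
// here that is the JS thread.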
- (dispatch_queue_t)methodQueue
{
return RCTJSThread;
}
// JS thread only (which, surprisingly, can be the main thread, depending on the JS executor in use)
- (void)flushEventsQueue
{
[_eventQueueLock lock];
NSDictionary *events = _events;
_events = [NSMutableDictionary new];
NSMutableArray *eventQueue = _eventQueue;
_eventQueue = [NSMutableArray new];
_eventsDispatchScheduled = NO;
[_eventQueueLock unlock];
for (NSNumber *eventId in eventQueue) {
[self dispatchEvent:events[eventId]];
}
}
@end
@implementation RCTBridge (RCTEventDispatcher)
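
// Convenience accessor so callers can use bridge.eventDispatcher instead of
// looking the module up with moduleForClass:.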
- (RCTEventDispatcher *)eventDispatcher
{
return [self moduleForClass:[RCTEventDispatcher class]];
}
@end