(android): Android tap-to-focus and improved (continuous) auto-focus (#575)

* Android tap-to-focus and improved (continuous) auto-focus

Tap-to-focus

- On tap, compute a focus area around the motion event's location and pass it to the camera
  parameters as the new focus area (see the sketch after this list).
- Adds RCTCameraUtils.java, which so far contains a single helper that computes the focus area
  from a motion event. The file can serve as a home for utility constants and functions
  extracted from the rest of the app.
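
For orientation, the overall flow can be condensed into a short sketch (illustration only;
the wrapper class and method names below are hypothetical, while the RCTCameraUtils helper
and the Camera calls follow the diff further down):

package com.lwansbrough.RCTCamera;

import android.hardware.Camera;
import android.view.MotionEvent;

import java.util.ArrayList;

// Illustration only: a condensed version of what RCTCameraViewFinder.handleFocus
// does in the diff below. Class and method names here are hypothetical.
class TapToFocusSketch {
    static void onTapToFocus(MotionEvent event, Camera camera, int previewWidth, int previewHeight) {
        Camera.Parameters params = camera.getParameters();
        if (params.getMaxNumFocusAreas() == 0) {
            return; // focus areas not supported on this device
        }
        camera.cancelAutoFocus();

        // Map the tap location into the (-1000, 1000) focus-area coordinate space.
        Camera.Area area = RCTCameraUtils.computeFocusAreaFromMotionEvent(event, previewWidth, previewHeight);

        params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
        ArrayList<Camera.Area> focusAreas = new ArrayList<Camera.Area>();
        focusAreas.add(area);
        params.setFocusAreas(focusAreas);
        camera.setParameters(params);

        camera.autoFocus(new Camera.AutoFocusCallback() {
            @Override
            public void onAutoFocus(boolean success, Camera camera) {
                if (success) {
                    camera.cancelAutoFocus();
                }
            }
        });
    }
}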

Improved (continuous) auto-focus

- Use FOCUS_MODE_CONTINUOUS_PICTURE/VIDEO when possible to enable continuous auto-focus; fall
  back to FOCUS_MODE_AUTO when supported (see the sketch below).
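
A condensed sketch of the fallback order, mirroring the startCamera() change in the diff
below; the helper name chooseFocusMode is hypothetical:

package com.lwansbrough.RCTCamera;

import android.hardware.Camera;

import java.util.List;

// Illustration only: focus-mode selection mirrored from the startCamera() change below.
class FocusModeSketch {
    static String chooseFocusMode(Camera.Parameters parameters, boolean isVideo) {
        List<String> modes = parameters.getSupportedFocusModes();
        if (!isVideo && modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
            return Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE;
        }
        if (isVideo && modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
            return Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO;
        }
        if (modes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
            return Camera.Parameters.FOCUS_MODE_AUTO;
        }
        return parameters.getFocusMode(); // no better option; keep the current mode
    }
}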

Other changes

- Update README to document the differences between iOS and Android focus and zoom functionality.
- Update AndroidManifest with a more thorough list of permissions and features.
- Update the Example app's react and react-native dependencies in package.json to match the root package.json.

* Example: default empty onFocusChanged callback

- Enables the default tap-to-focus behavior in the Example app, making it easier to test the
  focus features there.
Authored by Abe Botros on 2017-04-24 09:25:49 -07:00; committed by Zack Story
parent 20b0721850, commit ae9eab3533
5 changed files with 181 additions and 30 deletions

AndroidManifest.xml

@@ -10,6 +10,15 @@
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.RECORD_VIDEO"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />
<uses-sdk
android:minSdkVersion="16"
android:targetSdkVersion="22" />

Example/package.json

@@ -6,8 +6,8 @@
"start": "node node_modules/react-native/local-cli/cli.js start"
},
"dependencies": {
"react": "~15.3.0",
"react-native": "^0.34.0",
"react": ">=15.4.0",
"react-native": ">=0.40",
"react-native-camera": "file:../"
}
}

README.md

@@ -260,12 +260,16 @@ Values:
Use the `torchMode` property to specify the camera torch mode.
#### `onFocusChanged: Event { nativeEvent: { touchPoint: { x, y } } }`
#### `iOS` `onFocusChanged: Event { nativeEvent: { touchPoint: { x, y } } }`
Called when a touch focus gesture has been made.
iOS: Called when a touch focus gesture has been made.
By default, `onFocusChanged` is not defined and tap-to-focus is disabled.
#### `defaultOnFocusComponent`
Android: This callback is not yet implemented. However, Android will
automatically handle tap-to-focus if the device supports auto-focus; there is
currently no way to manage this from JavaScript.
#### `iOS` `defaultOnFocusComponent`
Values:
`true` (default)
@@ -273,11 +277,15 @@ Values:
If `defaultOnFocusComponent` is set to `false`, the default internal implementation of visual feedback for the tap-to-focus gesture will be disabled.
#### `onZoomChanged: Event { nativeEvent: { velocity, zoomFactor } }`
#### `iOS` `onZoomChanged: Event { nativeEvent: { velocity, zoomFactor } }`
Called when focus has changed.
iOS: Called when focus has changed.
By default, `onZoomChanged` is not defined and pinch-to-zoom is disabled.
Android: This callback is not yet implemented. However, Android will
automatically handle pinch-to-zoom; there is currently no way to manage this
from JavaScript.
#### `iOS` `keepAwake`
If set to `true`, the device will not sleep while the camera preview is visible. This mimics the behavior of the default camera app, which keeps the device awake while open.

RCTCameraUtils.java

@@ -0,0 +1,72 @@
package com.lwansbrough.RCTCamera;

import android.graphics.Rect;
import android.graphics.RectF;
import android.hardware.Camera;
import android.view.MotionEvent;

public class RCTCameraUtils {
    private static final int FOCUS_AREA_MOTION_EVENT_EDGE_LENGTH = 100;
    private static final int FOCUS_AREA_WEIGHT = 1000;

    /**
     * Computes a Camera.Area corresponding to the new focus area to focus the camera on. This is
     * done by deriving a square around the center of a MotionEvent pointer (with side length equal
     * to twice FOCUS_AREA_MOTION_EVENT_EDGE_LENGTH), then transforming this square's coordinates
     * into the (-1000, 1000) coordinate system used for camera focus areas.
     *
     * Also note that we operate on RectF instances for the most part, to avoid any integer
     * division rounding errors going forward. We only round at the very end, when placing the
     * result into the final focus areas list.
     *
     * @throws RuntimeException if unable to compute a valid intersection between the MotionEvent
     * region and the SurfaceTexture region.
     */
    protected static Camera.Area computeFocusAreaFromMotionEvent(final MotionEvent event, final int surfaceTextureWidth, final int surfaceTextureHeight) {
        // Get position of first touch pointer.
        final int pointerId = event.getPointerId(0);
        final int pointerIndex = event.findPointerIndex(pointerId);
        final float centerX = event.getX(pointerIndex);
        final float centerY = event.getY(pointerIndex);

        // Build event rect. Note that coordinates increase right and down, such that left <= right
        // and top <= bottom.
        final RectF eventRect = new RectF(
                centerX - FOCUS_AREA_MOTION_EVENT_EDGE_LENGTH, // left
                centerY - FOCUS_AREA_MOTION_EVENT_EDGE_LENGTH, // top
                centerX + FOCUS_AREA_MOTION_EVENT_EDGE_LENGTH, // right
                centerY + FOCUS_AREA_MOTION_EVENT_EDGE_LENGTH // bottom
        );

        // Intersect this rect with the rect corresponding to the full area of the parent surface
        // texture, making sure we are not placing any amount of the eventRect outside the parent
        // surface's area.
        final RectF surfaceTextureRect = new RectF(
                (float) 0, // left
                (float) 0, // top
                (float) surfaceTextureWidth, // right
                (float) surfaceTextureHeight // bottom
        );
        final boolean intersectSuccess = eventRect.intersect(surfaceTextureRect);
        if (!intersectSuccess) {
            throw new RuntimeException(
                    "MotionEvent rect does not intersect with SurfaceTexture rect; unable to " +
                    "compute focus area"
            );
        }

        // Transform into (-1000, 1000) focus area coordinate system. See
        // https://developer.android.com/reference/android/hardware/Camera.Area.html.
        // Note that if this is ever changed to a Rect instead of RectF, be cautious of integer
        // division rounding!
        final RectF focusAreaRect = new RectF(
                (eventRect.left / surfaceTextureWidth) * 2000 - 1000, // left
                (eventRect.top / surfaceTextureHeight) * 2000 - 1000, // top
                (eventRect.right / surfaceTextureWidth) * 2000 - 1000, // right
                (eventRect.bottom / surfaceTextureHeight) * 2000 - 1000 // bottom
        );
        Rect focusAreaRectRounded = new Rect();
        focusAreaRect.round(focusAreaRectRounded);
        return new Camera.Area(focusAreaRectRounded, FOCUS_AREA_WEIGHT);
    }
}
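
To make the coordinate transform concrete, here is a hypothetical worked example (not part of
this commit): a tap at the center of a 1080x1920 preview.

package com.lwansbrough.RCTCamera;

import android.hardware.Camera;
import android.os.SystemClock;
import android.view.MotionEvent;

// Hypothetical, illustration-only usage of computeFocusAreaFromMotionEvent.
class FocusAreaExample {
    static Camera.Area centerTapExample() {
        // Simulate a tap at (540, 960), the center of a 1080x1920 preview.
        long now = SystemClock.uptimeMillis();
        MotionEvent tap = MotionEvent.obtain(now, now, MotionEvent.ACTION_DOWN, 540f, 960f, 0);

        // eventRect becomes (440, 860, 640, 1060) in view coordinates; after the
        // (-1000, 1000) transform it rounds to approximately (-185, -104, 185, 104),
        // i.e. a rectangle centered on the middle of the focus-area coordinate space.
        Camera.Area area = RCTCameraUtils.computeFocusAreaFromMotionEvent(tap, 1080, 1920);
        tap.recycle();
        return area;
    }
}

A tap near an edge or corner would instead produce a rectangle clipped to the surface bounds
before the transform, because of the intersect step above.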

RCTCameraViewFinder.java

@@ -5,6 +5,7 @@
package com.lwansbrough.RCTCamera;
import android.content.Context;
import android.graphics.Rect;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.view.MotionEvent;
@@ -16,6 +17,7 @@ import com.facebook.react.bridge.ReactContext;
import com.facebook.react.bridge.WritableMap;
import com.facebook.react.modules.core.DeviceEventManagerModule;
import java.util.ArrayList;
import java.util.List;
import java.util.EnumMap;
import java.util.EnumSet;
@@ -32,6 +34,8 @@ class RCTCameraViewFinder extends TextureView implements TextureView.SurfaceText
private int _cameraType;
private int _captureMode;
private SurfaceTexture _surfaceTexture;
private int _surfaceTextureWidth;
private int _surfaceTextureHeight;
private boolean _isStarting;
private boolean _isStopping;
private Camera _camera;
@@ -53,16 +57,22 @@ class RCTCameraViewFinder extends TextureView implements TextureView.SurfaceText
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
_surfaceTexture = surface;
_surfaceTextureWidth = width;
_surfaceTextureHeight = height;
startCamera();
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
_surfaceTextureWidth = width;
_surfaceTextureHeight = height;
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
_surfaceTexture = null;
_surfaceTextureWidth = 0;
_surfaceTextureHeight = 0;
stopCamera();
return true;
}
@@ -126,17 +136,30 @@ class RCTCameraViewFinder extends TextureView implements TextureView.SurfaceText
try {
_camera = RCTCamera.getInstance().acquireCameraInstance(_cameraType);
Camera.Parameters parameters = _camera.getParameters();
// set autofocus
List<String> focusModes = parameters.getSupportedFocusModes();
if (focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
final boolean isCaptureModeStill = (_captureMode == RCTCameraModule.RCT_CAMERA_CAPTURE_MODE_STILL);
final boolean isCaptureModeVideo = (_captureMode == RCTCameraModule.RCT_CAMERA_CAPTURE_MODE_VIDEO);
if (!isCaptureModeStill && !isCaptureModeVideo) {
throw new RuntimeException("Unsupported capture mode:" + _captureMode);
}
// Set auto-focus. Try to set to continuous picture/video, and fall back to general
// auto if available.
List<String> focusModes = parameters.getSupportedFocusModes();
if (isCaptureModeStill && focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
} else if (isCaptureModeVideo && focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
} else if (focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
}
// set picture size
// defaults to max available size
List<Camera.Size> supportedSizes;
if (_captureMode == RCTCameraModule.RCT_CAMERA_CAPTURE_MODE_STILL) {
if (isCaptureModeStill) {
supportedSizes = parameters.getSupportedPictureSizes();
} else if (_captureMode == RCTCameraModule.RCT_CAMERA_CAPTURE_MODE_VIDEO) {
} else if (isCaptureModeVideo) {
supportedSizes = RCTCamera.getInstance().getSupportedVideoSizes(_camera);
} else {
throw new RuntimeException("Unsupported capture mode:" + _captureMode);
@@ -288,15 +311,15 @@ class RCTCameraViewFinder extends TextureView implements TextureView.SurfaceText
// rotate for zxing if orientation is portrait
if (RCTCamera.getInstance().getActualDeviceOrientation() == 0) {
byte[] rotated = new byte[imageData.length];
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
rotated[x * height + height - y - 1] = imageData[x + y * width];
byte[] rotated = new byte[imageData.length];
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
rotated[x * height + height - y - 1] = imageData[x + y * width];
}
}
}
width = size.height;
height = size.width;
imageData = rotated;
width = size.height;
height = size.width;
imageData = rotated;
}
try {
@@ -362,24 +385,63 @@ class RCTCameraViewFinder extends TextureView implements TextureView.SurfaceText
_camera.setParameters(params);
}
/**
* Handles setting focus to the location of the event.
*
* Note that this will override the focus mode on the camera to FOCUS_MODE_AUTO if available,
* even if this was previously something else (such as FOCUS_MODE_CONTINUOUS_*; see also
* {@link #startCamera()}). However, this makes sense: after the user has initiated a
* specific focus intent, we shouldn't keep refocusing and overriding their request.
*/
public void handleFocus(MotionEvent event, Camera.Parameters params) {
int pointerId = event.getPointerId(0);
int pointerIndex = event.findPointerIndex(pointerId);
// Get the pointer's current position
float x = event.getX(pointerIndex);
float y = event.getY(pointerIndex);
List<String> supportedFocusModes = params.getSupportedFocusModes();
if (supportedFocusModes != null && supportedFocusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
// Ensure focus areas are enabled. If max num focus areas is 0, then focus area is not
// supported, so we cannot do anything here.
if (params.getMaxNumFocusAreas() == 0) {
return;
}
// Cancel any previous focus actions.
_camera.cancelAutoFocus();
// Compute focus area rect.
Camera.Area focusAreaFromMotionEvent;
try {
focusAreaFromMotionEvent = RCTCameraUtils.computeFocusAreaFromMotionEvent(event, _surfaceTextureWidth, _surfaceTextureHeight);
} catch (final RuntimeException e) {
e.printStackTrace();
return;
}
// Set focus mode to auto.
params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
// Set focus area.
final ArrayList<Camera.Area> focusAreas = new ArrayList<Camera.Area>();
focusAreas.add(focusAreaFromMotionEvent);
params.setFocusAreas(focusAreas);
// Also set metering area if enabled. If max num metering areas is 0, then metering area
// is not supported. We can usually safely omit this anyway, though.
if (params.getMaxNumMeteringAreas() > 0) {
params.setMeteringAreas(focusAreas);
}
// Set parameters before starting auto-focus.
_camera.setParameters(params);
// Start auto-focus now that the focus area has been set; on success, the callback cancels
// it again. Wrap in try-catch to avoid crashing merely because autoFocus fails.
try {
_camera.autoFocus(new Camera.AutoFocusCallback() {
@Override
public void onAutoFocus(boolean b, Camera camera) {
// currently set to auto-focus on single touch
public void onAutoFocus(boolean success, Camera camera) {
if (success) {
camera.cancelAutoFocus();
}
}
});
} catch (Exception e) {
// Just print the stack trace; we don't want to crash merely because autoFocus failed.
e.printStackTrace();
}
}
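
The touch wiring that ultimately calls handleFocus is outside this excerpt; as a rough,
hypothetical sketch (names and tap detection here are assumptions, not taken from the diff),
a single tap could be routed to it like this:

package com.lwansbrough.RCTCamera;

import android.hardware.Camera;
import android.view.MotionEvent;

// Hypothetical wiring sketch, not part of this commit: route a single tap to handleFocus.
class TapDispatchSketch {
    private final RCTCameraViewFinder _viewFinder;
    private final Camera _camera;

    TapDispatchSketch(RCTCameraViewFinder viewFinder, Camera camera) {
        _viewFinder = viewFinder;
        _camera = camera;
    }

    boolean onTouchEvent(MotionEvent event) {
        // Treat a single-pointer ACTION_UP as a tap (a real implementation would use a
        // GestureDetector to distinguish taps from pinches and drags).
        if (event.getAction() == MotionEvent.ACTION_UP && event.getPointerCount() == 1) {
            _viewFinder.handleFocus(event, _camera.getParameters());
            return true;
        }
        return false;
    }
}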