Project: /_project.yaml Book: /_book.yaml
{% include "_versions.html" %}
Android {{ androidPVersionNumber }} introduces API support for multi-camera devices via a new logical camera device composed of two or more physical camera devices pointing in the same direction. The logical camera device is exposed as a single CameraDevice/CaptureSession to an application, allowing for interaction with HAL-integrated multi-camera features. Applications can optionally access and control underlying physical camera streams, metadata, and controls.
Figure 1. Multi-camera support
In this diagram, different camera IDs are color coded. The application can stream raw buffers from each physical camera at the same time. It is also possible to set separate controls and receive separate metadata from different physical cameras.
Multi-camera devices must be advertised via the logical multi-camera capability{: .external}.
Camera clients can query the camera IDs of the physical devices that a particular logical camera is made of by calling getPhysicalCameraIds(){: .external}. The IDs returned as part of the result are then used to control physical devices individually via setPhysicalCameraId(){: .external}. The results from such individual requests can be queried from the complete result by invoking getPhysicalCameraResults(){: .external}.
Individual physical camera requests may support only a limited subset of parameters. To receive a list of the supported parameters, developers can call getAvailablePhysicalCameraRequestKeys(){: .external}.
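To illustrate how a client might use that list, the sketch below is a plain-Java model of the filtering step, not the camera2 API itself: string keys stand in for CaptureRequest keys, and the PhysicalRequestKeys class and key names are hypothetical. It drops any requested setting that a physical camera does not advertise as individually controllable.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class PhysicalRequestKeys {
    /**
     * Keeps only the settings that a physical camera advertises as
     * individually controllable (the set an app would derive from
     * getAvailablePhysicalCameraRequestKeys()); everything else remains
     * governed by the logical request.
     */
    public static Map<String, Object> filterForPhysical(
            Map<String, Object> requested, Set<String> supportedPhysicalKeys) {
        Map<String, Object> filtered = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : requested.entrySet()) {
            if (supportedPhysicalKeys.contains(e.getKey())) {
                filtered.put(e.getKey(), e.getValue());
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        Map<String, Object> requested = new LinkedHashMap<>();
        requested.put("android.sensor.exposureTime", 10_000_000L);
        requested.put("android.control.aeMode", 1);
        // Only exposure time is individually controllable in this example.
        System.out.println(filterForPhysical(
                requested, Set.of("android.sensor.exposureTime")));
    }
}
```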
Physical camera streams are supported only for non-reprocessing requests and only for monochrome and Bayer sensors.
To add logical multi-camera devices on the HAL side:
- Add an ANDROID_REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA{: .external} capability for any logical camera device backed by two or more physical cameras that are also exposed to an application.
- Populate the static ANDROID_LOGICAL_MULTI_CAMERA_PHYSICAL_IDS{: .external} metadata field with a list of physical camera IDs.
- Populate the depth-related static metadata required to correlate pixels between physical camera streams: ANDROID_LENS_POSE_ROTATION{: .external}, ANDROID_LENS_POSE_TRANSLATION{: .external}, ANDROID_LENS_INTRINSIC_CALIBRATION{: .external}, ANDROID_LENS_RADIAL_DISTORTION{: .external}, and ANDROID_LENS_POSE_REFERENCE{: .external}.
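To make the role of the pose metadata concrete, the sketch below applies a pose quaternion and translation to map a point from the pose reference frame into a camera's frame. It is plain Java for illustration only: the LensPose class is hypothetical, the (x, y, z, w) component order follows the lens pose metadata convention, and the mapping p' = R·p + t is an assumption stated here rather than a normative definition.

```java
public class LensPose {
    /**
     * Rotates vector v by the unit quaternion q = (x, y, z, w) using
     * v' = v + 2w(q_xyz × v) + 2(q_xyz × (q_xyz × v)).
     */
    public static double[] rotate(double[] q, double[] v) {
        double[] qv = {q[0], q[1], q[2]};
        double[] t = cross(qv, v);
        t[0] *= 2; t[1] *= 2; t[2] *= 2;
        double[] u = cross(qv, t);
        return new double[] {
            v[0] + q[3] * t[0] + u[0],
            v[1] + q[3] * t[1] + u[1],
            v[2] + q[3] * t[2] + u[2],
        };
    }

    /** Maps a point from the pose reference frame into the camera frame. */
    public static double[] toCameraFrame(double[] q, double[] translation, double[] p) {
        double[] r = rotate(q, p);
        return new double[] {
            r[0] + translation[0], r[1] + translation[1], r[2] + translation[2],
        };
    }

    private static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0],
        };
    }
}
```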
- Set the static ANDROID_LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE{: .external} metadata field to one of:
  - ANDROID_LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE_APPROXIMATE{: .external}: For sensors in master-master mode, with no hardware shutter/exposure sync.
  - ANDROID_LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE_CALIBRATED{: .external}: For sensors in master-slave mode, with hardware shutter/exposure sync.
- Populate ANDROID_REQUEST_AVAILABLE_PHYSICAL_CAMERA_REQUEST_KEYS{: .external} with a list of supported parameters for individual physical cameras. The list can be empty if the logical device doesn't support individual requests.
- If individual requests are supported, process and apply the individual physicalCameraSettings{: .external} that can arrive as part of capture requests, and append the corresponding individual physicalCameraMetadata{: .external} to the capture result.
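A minimal model of that last step is sketched below. It assumes, purely for illustration, that settings and metadata can be represented as string-keyed maps rather than camera_metadata buffers; the PhysicalResultAssembly class is hypothetical. Each physical camera's individual settings override the logical request's values, and one metadata entry per physical camera is produced for the result.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PhysicalResultAssembly {
    /**
     * For every physical camera that received individual settings, merges
     * those settings over the logical request and emits one per-camera
     * metadata entry, keyed by physical camera ID, for the capture result.
     */
    public static Map<String, Map<String, Object>> buildPhysicalMetadata(
            Map<String, Object> logicalSettings,
            Map<String, Map<String, Object>> physicalCameraSettings) {
        Map<String, Map<String, Object>> metadata = new LinkedHashMap<>();
        for (Map.Entry<String, Map<String, Object>> e : physicalCameraSettings.entrySet()) {
            Map<String, Object> merged = new LinkedHashMap<>(logicalSettings);
            merged.putAll(e.getValue()); // individual settings take precedence
            metadata.put(e.getKey(), merged);
        }
        return metadata;
    }
}
```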
The camera device must support replacing one logical YUV/RAW stream with physical streams of the same size and format from two physical cameras (RAW streams are exempt from the matching-size requirement).
For a logical camera, the mandatory stream combinations for a camera device of a given hardware level are the same as those required in [CameraDevice.createCaptureSession](https://developer.android.com/reference/android/hardware/camera2/CameraDevice.html#createCaptureSession(java.util.List<android.view.Surface>, android.hardware.camera2.CameraCaptureSession.StateCallback, android.os.Handler)){: .external}. All the streams in the stream configuration map should be fused/logical frames.
If certain stream combinations cannot be fused, they should not be included in the logical camera's stream configuration map. Instead, the application can look up the stream configuration map of an individual physical camera and configure the stream using that physical camera's ID.
This means that the logical camera's hardware level may be lower than that of individual cameras. One such example is when the two physical cameras have different raw sizes. The logical camera does not have RAW capability, so it cannot be a LEVEL_3 device, but the individual physical cameras can be LEVEL_3 devices.
For both the logical camera and the underlying physical cameras, the directly configured processed streams, RAW streams, and stall streams must not exceed the limits in the predefined android.request.maxNumOutputStreams.
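As a sketch of that bookkeeping: android.request.maxNumOutputStreams is a triple of limits (RAW, processed non-stalling, processed stalling) that are checked independently. The StreamBudget class below is hypothetical, and its format classification is deliberately simplified for illustration.

```java
import java.util.List;

public class StreamBudget {
    /**
     * Checks a proposed stream list against the three counters in
     * android.request.maxNumOutputStreams. Simplified classification:
     * RAW16 counts as RAW, JPEG counts as a stalling stream, and
     * everything else counts as processed non-stalling.
     */
    public static boolean withinLimits(List<String> formats,
                                       int maxRaw, int maxProcessed, int maxStall) {
        int raw = 0, processed = 0, stall = 0;
        for (String f : formats) {
            if (f.equals("RAW16")) {
                raw++;
            } else if (f.equals("JPEG")) {
                stall++;
            } else {
                processed++;
            }
        }
        return raw <= maxRaw && processed <= maxProcessed && stall <= maxStall;
    }
}
```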
Both the logical camera and its underlying physical cameras must guarantee the mandatory stream combinations{: .external} required for their device levels.
A logical camera device should operate in the same way as a physical camera device, based on its hardware level and capabilities. It's recommended that its feature set be a superset of that of the individual physical cameras.
Additionally, for each guaranteed stream combination, the logical camera must support:
Replacing one logical YUV_420_888 or raw stream with two physical streams of the same size and format, each from a separate physical camera, given that the size and format are supported by the physical cameras.
Adding two raw streams, one from each physical camera, if the logical camera doesn't advertise RAW capability, but the underlying physical cameras do. This usually occurs when the physical cameras have different sensor sizes.
Using physical streams in place of a logical stream of the same size and format must not slow down the frame rate of the capture, as long as the minimum frame durations of the physical and logical streams are the same.
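The replacement rule above can be expressed as a small check. The sketch below is illustrative plain Java; the StreamInfo class is a hypothetical stand-in for a stream configuration, not a HAL or framework type. It accepts two physical streams in place of one logical stream only if they come from two different physical cameras and match the logical stream's format, and also its size except for RAW, where sensor sizes may differ.

```java
public class StreamReplacement {
    /** Hypothetical stand-in for one configured output stream. */
    public static class StreamInfo {
        final String physicalCameraId; // null for a logical stream
        final String format;
        final int width, height;

        public StreamInfo(String physicalCameraId, String format, int width, int height) {
            this.physicalCameraId = physicalCameraId;
            this.format = format;
            this.width = width;
            this.height = height;
        }
    }

    /**
     * One logical YUV_420_888 or RAW stream may be replaced by two physical
     * streams of the same format, one from each physical camera. Sizes must
     * match except for RAW streams, whose sizes may differ per sensor.
     */
    public static boolean isValidReplacement(StreamInfo logical, StreamInfo a, StreamInfo b) {
        boolean distinctCameras = a.physicalCameraId != null && b.physicalCameraId != null
                && !a.physicalCameraId.equals(b.physicalCameraId);
        boolean sameFormat = a.format.equals(logical.format)
                && b.format.equals(logical.format);
        boolean sizeOk = logical.format.startsWith("RAW")
                || (a.width == logical.width && a.height == logical.height
                    && b.width == logical.width && b.height == logical.height);
        return distinctCameras && sameFormat && sizeOk;
    }
}
```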
Performance:
Power:
You can customize your device implementation in the following ways.
Logical multi-camera devices must pass Camera CTS like any other regular camera. The test cases that target this type of device can be found in the LogicalCameraDeviceTest{: .external} module.
These three ITS tests target multi-camera systems to facilitate the proper fusing of images:

- scene1/test_multi_camera_match.py{: .external}
- scene4/test_multi_camera_alignment.py{: .external}
- sensor_fusion/test_multi_camera_frame_sync.py{: .external}

The scene1 and scene4 tests run with the ITS-in-a-box test rig. The test_multi_camera_match test asserts that the brightness of the centers of the images matches when the two cameras are both enabled. The test_multi_camera_alignment test asserts that camera spacings, orientations, and distortion parameters are properly loaded. If the multi-camera system includes a wide FoV camera (>90°), the rev2 version of the ITS box is required.
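To illustrate the kind of check test_multi_camera_match performs, here is a simplified, self-contained sketch. It is not the actual ITS code (which is Python); the BrightnessMatch class, the center-patch choice, and the tolerance are all arbitrary assumptions for illustration. It compares mean brightness over the central region of two grayscale frames.

```java
public class BrightnessMatch {
    /** Builds a uniform grayscale frame (a helper for demonstration). */
    public static double[][] uniform(int h, int w, double value) {
        double[][] img = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                img[y][x] = value;
            }
        }
        return img;
    }

    /** Mean brightness over the central half of the frame. */
    public static double centerMean(double[][] img) {
        int h = img.length, w = img[0].length;
        double sum = 0;
        int count = 0;
        for (int y = h / 4; y < 3 * h / 4; y++) {
            for (int x = w / 4; x < 3 * w / 4; x++) {
                sum += img[y][x];
                count++;
            }
        }
        return sum / count;
    }

    /** Passes if the relative difference of the center means is within tol. */
    public static boolean matches(double[][] a, double[][] b, double tol) {
        double ma = centerMean(a), mb = centerMean(b);
        return Math.abs(ma - mb) / Math.max(ma, mb) <= tol;
    }
}
```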
The sensor_fusion rig is a second test rig that enables repeated, prescribed phone motion and asserts that the gyroscope and image sensor timestamps match and that the multi-camera frames are in sync.
All boxes are available through AcuSpec, Inc. (www.acuspecinc.com{: .external}, [email protected]) and MYWAY Manufacturing (www.myway.tw{: .external}, [email protected]). Additionally, the rev1 ITS box can be purchased through West-Mark (www.west-mark.com{: .external}, [email protected]).