The dynamic range of a real-world scene often exceeds what a camera can capture, due to the photon response limits of the CMOS sensor. The most common solution is to capture multiple exposures that cover different parts of the scene's dynamic range and stitch them into a single image with a wider dynamic range.
In camera solutions, 4-exposure 24-bit High Dynamic Range (HDR) is theoretically more expensive and better than 3-exposure 20-bit HDR. However, the processed result of 4-exposure HDR can look worse than 3-exposure HDR in tone reproduction if the sensor configuration and ISP tuning are not handled correctly.
This article describes:
Figure 1 shows how 3-exposure HDR and 4-exposure HDR content are combined on the sensor side:
These exposures are normalized to 20 bits or 24 bits, depending on the bit depth supported by the 3-exposure or 4-exposure mode.
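The normalization step can be sketched in a few lines of NumPy. The 12-bit raw depth and the exposure ratios below are illustrative assumptions, not the configuration of any particular sensor; real ratios depend on the sensor architecture:

```python
import numpy as np

# Hypothetical exposure ratios relative to the Long exposure
# (Long, Medium, Short, Very Short); real ratios are sensor-specific.
RATIOS = {"L": 1, "M": 16, "S": 256, "VS": 4096}

def normalize_exposure(raw12, ratio):
    """Scale 12-bit raw data by its exposure ratio so all four
    exposures land on one linear scale: 12 bits of raw data plus
    log2(4096) = 12 bits of ratio spans the 24-bit combined range."""
    return raw12.astype(np.uint64) * ratio

# A highlight that saturates the Long exposure still reads 512 in the
# Very Short one; after normalization it maps to 512 * 4096, well above
# the Long exposure's 12-bit ceiling - that headroom is the extra range.
sat_long = normalize_exposure(np.array([4095]), RATIOS["L"])
highlight_vs = normalize_exposure(np.array([512]), RATIOS["VS"])
print(sat_long[0], highlight_vs[0])
```

With 3-exposure HDR the largest ratio is smaller, which is why the combined range tops out at 20 bits instead of 24.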
Figure 1: 4-exposure HDR data intensity response and linearization
Note: The most advanced sensors use different architectures to control the sensitivity of the four exposures: dual conversion gain and/or split pixel. In this generalized example, we use names such as Long and Short to describe the relative sensitivity (exposure ratio), but integration time is not the only factor.
Mali ISPs, such as Mali-C71 and Mali-C78, have a 24-bit data width and process the imaging data with MSB alignment:
As a result, given the same long exposure, the 3-exposure 20-bit HDR mode looks 16x brighter than the 4-exposure 24-bit HDR mode, because MSB alignment shifts the 20-bit data up by 4 bits (2^(24-20) = 16). Figure 2 and Figure 3 show this brightness difference between a 4-exposure 24-bit image and a 3-exposure 20-bit image.
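A minimal sketch of where the 16x factor comes from, assuming MSB alignment is a plain left shift into the 24-bit pipeline:

```python
import numpy as np

ISP_BITS = 24  # Mali-C71 / Mali-C78 internal data width

def msb_align(data, src_bits, dst_bits=ISP_BITS):
    """Left-shift src_bits HDR data so its MSB lines up with the
    dst_bits pipeline (20-bit input shifts up by 4 bits)."""
    return data.astype(np.uint32) << (dst_bits - src_bits)

pixel = np.array([1000], dtype=np.uint32)
v20 = msb_align(pixel, 20)  # same pixel arriving as 20-bit data
v24 = msb_align(pixel, 24)  # same pixel arriving as 24-bit data (no shift)
print(v20[0] / v24[0])      # 2**(24 - 20) = 16x brighter
```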
Figure 2: 4-exposure 24-bit image (no gamma encoding)
Figure 3: 3-exposure 20-bit image (no gamma encoding)
However, the Signal-to-Noise Ratio (SNR) of the dark pixels is the same for both 3-exposure 20-bit and 4-exposure 24-bit modes, because the long exposure is the same. Applying digital gain to the dark pixels of the 4-exposure image does not change the SNR.
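This can be verified numerically: a digital gain multiplies the signal and the noise by the same factor, so their ratio is untouched. The signal level and noise sigma below are arbitrary stand-ins for a dark pixel, not measured sensor values:

```python
import numpy as np

rng = np.random.default_rng(0)

# A dark pixel: small signal with zero-mean noise on top.
signal, noise_sigma = 100.0, 10.0
dark = signal + rng.normal(0.0, noise_sigma, 100_000)

def snr_db(x):
    """SNR in dB: mean (signal) over standard deviation (noise)."""
    return 20.0 * np.log10(x.mean() / x.std())

# A 16x digital gain scales mean and standard deviation identically,
# so both lines print the same SNR.
print(round(snr_db(dark), 3))
print(round(snr_db(dark * 16.0), 3))
```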
In the ISP, there are two kinds of gains:
Digital gains are applied globally to the whole image, so the auto-exposure strategy must control these gains carefully to avoid clipping the bright pixels. In LDR scenes, applying a large digital gain is usually not a problem, while in HDR scenes it would probably blow out the highlights.
Iridix gains are applied locally and adaptively, based on the image content. Generally, more gain for dark pixels and less gain for bright pixels is desired when reproducing an HDR image.
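The contrast between the two kinds of gain can be sketched as follows. The local-gain formula is a toy illustration of the "more gain in the dark, less in the bright" behavior, not the actual iridix algorithm, which derives its gain map from filtered image content:

```python
import numpy as np

MAX_24BIT = (1 << 24) - 1  # pipeline full scale

def digital_gain(img, gain):
    """Global gain: one multiplier for every pixel, clipped to the
    24-bit range, so near-saturated highlights can blow out."""
    return np.minimum(img * gain, MAX_24BIT)

def local_gain(img_norm, base_gain=8.0):
    """Toy local-tone-mapping gain: amplification falls off with pixel
    brightness, lifting dark pixels while sparing highlights."""
    return base_gain / (1.0 + (base_gain - 1.0) * img_norm)

img = np.array([10_000.0, 15_000_000.0])     # dark pixel, bright highlight
globally = digital_gain(img, 4.0)            # highlight clips at full scale
locally = img * local_gain(img / MAX_24BIT)  # highlight stays below full scale
print(globally, locally)
```

Running this shows the global gain driving the highlight into clipping while the local gain lifts the dark pixel and leaves the highlight nearly untouched.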
This section describes how to tune 24-bit HDR data to align the brightness with 20-bit HDR data without breaking the highlights.
Figure 4 shows typical main blocks to process HDR data in the Mali ISPs:
Figure 4: typical ISP main blocks to process HDR data
To tune iridix to compress the dynamic range from 24 bits to 14 bits, you can apply a much steeper asymmetry curve to amplify the dark regions, whereas a moderate asymmetry curve is usually enough for 20-bit HDR data. The dark_enh setting can also be increased to amplify the dark regions further.
The following table shows the different iridix effects under different combinations of asymmetry curve, dark_enh, and strength_inroi with 20-bit and 24-bit image data.
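The effect of a steeper curve can be illustrated with a power-law stand-in. The real iridix asymmetry curve is a tuned LUT, and the exponents below are arbitrary examples, not recommended tuning values:

```python
import numpy as np

BITS_IN, BITS_OUT = 24, 14

def asymmetry_curve(x, exponent):
    """Illustrative power-law stand-in for an asymmetry curve, mapping
    normalized input to normalized output. A smaller exponent is
    steeper near zero and lifts dark regions harder."""
    return x ** exponent

def compress(pixel24, exponent):
    """Map a 24-bit pixel through the curve into the 14-bit domain."""
    x = pixel24 / ((1 << BITS_IN) - 1)
    return round(asymmetry_curve(x, exponent) * ((1 << BITS_OUT) - 1))

dark_pixel = 16_000  # deep shadow in the 24-bit image
moderate = compress(dark_pixel, 1 / 2.2)  # moderate curve, as for 20-bit data
steep = compress(dark_pixel, 1 / 4.0)     # steeper curve for 24-bit data
print(moderate, steep)  # the steeper curve lifts the shadow much more
```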
Figure 5: 3-exposure 20-bit iridix processed result with default asymmetry curve
Figure 6: 4-exposure 24-bit iridix processed result with default asymmetry curve
Figure 7: 4-exposure 24-bit iridix processed result with steeper asymmetry curve
The Mali ISP can maximize 24-bit HDR content reproduction by using iridix local tone mapping. After the tuning, the 24-bit solution preserves the highlights better and matches the perceived dark intensities of the 20-bit solution.
Thank you for sharing this. You set an 8.75x digital gain on the 24-bit image. Does this mean digital gain or Iridix gain?
Digital gain (global).
Would the bright areas be overexposed when using digital gain? I am thinking of using the GTM function of Iridix. Do you have any recommended LUT for GTM that you could share? Thanks.
Not really; digital gain is controlled by AE. In this example, the 8.75x digital gain does not break the highlights.
The choice between GTM and LTM generally depends on the use case. For human vision, LTM is recommended; for computer vision, or some specific cases, GTM could be selected.
Thank you very much. Jiang.
I believe ADAS is one area of computer vision. Do you think we should use GTM in an ADAS solution?
And is the reason for using GTM rather than LTM that GTM can provide a more stable image for the algorithm? What do you think?