Deep Fusion Coming To 2019 iPhones Via A Future iOS 13 Beta Update

Deep Fusion, a feature exclusive to the 2019 iPhones that was shown at Apple's fall event last month, appears to be coming in a future iOS 13 beta, according to a report from The Verge. Apple is believed to be launching the new iOS 13 beta to developers shortly, though which beta version remains uncertain.

Deep Fusion is a new method of processing photos that employs the A13 chip's powerful machine learning capabilities to generate the output image. The result is a photo with remarkable detail, wide dynamic range, and low noise. The feature performs best in medium to low light.

The feature is designed to enhance photos taken indoors and in medium lighting, and the camera activates it automatically depending on the lens in use and the light level. By default, the wide-angle lens uses Smart HDR in bright scenes, while Night Mode takes over in darker ones.
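As a rough illustration of that automatic selection, here is a minimal Swift sketch for the wide-angle lens. It is an assumption-laden illustration: Apple exposes no such API, and the mode names and lux thresholds are invented.

```swift
// Hypothetical sketch of the automatic capture-mode selection described
// above, for the wide-angle lens. Apple exposes no such API; the lux
// thresholds here are invented purely for illustration.
enum CaptureMode { case smartHDR, deepFusion, nightMode }

func captureMode(forSceneLux lux: Double) -> CaptureMode {
    switch lux {
    case ..<10:      return .nightMode    // darker scenes: Night Mode
    case 10..<1000:  return .deepFusion   // indoor / medium light
    default:         return .smartHDR     // bright scenes: Smart HDR
    }
}
```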

Unlike Night Mode, which appears in the Camera app and can be toggled manually, Deep Fusion works silently in the background, with no settings screen or button. Apple says this is deliberate: it doesn't want users to worry about how to take a good picture, since all of the processing happens automatically.

Deep Fusion operates quite differently from Smart HDR. It works as follows (a rough code sketch follows the list):

  1. By the time you press the shutter button, the camera has already grabbed four frames at a fast shutter speed to freeze motion in the shot and four standard frames. When you press the shutter, it grabs one longer-exposure shot to capture detail.
  2. The standard frames and the long-exposure shot are merged into what Apple calls a “synthetic long.” This is a major difference from Smart HDR.
  3. Deep Fusion picks the short-exposure image with the most detail and merges it with the synthetic long exposure. Unlike Smart HDR, Deep Fusion merges these two frames, not more — although the synthetic long is already made of four previously-merged frames. All the component frames are also processed for noise differently than Smart HDR, in a way that’s better for Deep Fusion.
  4. The images are run through four detail processing steps, pixel by pixel, each tailored to increasing amounts of detail — the sky and walls are in the lowest band, while skin, hair, fabrics, and so on are in the highest band. This generates a series of weightings for how to blend the two images — taking detail from one and tone, color, and luminance from the other.
  5. The final image is generated.
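Tying the steps together, here is a minimal Swift sketch of the pipeline the list describes. It is purely illustrative: the `Frame` type, the simple averaging `merge`, and the single-band `detailWeights` are hypothetical stand-ins, since Apple's actual processing runs privately on the A13's neural engine.

```swift
// Illustrative sketch of the Deep Fusion pipeline described above.
// All types and helper functions are hypothetical stand-ins.
struct Frame {
    var pixels: [Float]   // simplified single-channel image buffer
    var sharpness: Float  // a detail metric, e.g. mean local contrast
}

// Average frames pixel by pixel (a stand-in for the real merge;
// assumes a non-empty array of equally sized frames).
func merge(_ frames: [Frame]) -> Frame {
    let n = Float(frames.count)
    var pixels = [Float](repeating: 0, count: frames[0].pixels.count)
    for frame in frames {
        for i in pixels.indices { pixels[i] += frame.pixels[i] / n }
    }
    return Frame(pixels: pixels,
                 sharpness: frames.map(\.sharpness).reduce(0, +) / n)
}

// Step 4 stand-in: weight each pixel by how much the short frame
// diverges from the synthetic long (the real system uses several
// detail bands, from sky/walls up to skin, hair, and fabric).
func detailWeights(short: Frame, long: Frame) -> [Float] {
    zip(short.pixels, long.pixels).map { s, l in min(1, abs(s - l)) }
}

func deepFusion(shortFrames: [Frame],     // fast-shutter frames (step 1)
                standardFrames: [Frame],  // standard frames (step 1)
                longExposure: Frame) -> Frame {
    // Step 2: merge the standard frames and the long exposure into a
    // single "synthetic long".
    let syntheticLong = merge(standardFrames + [longExposure])

    // Step 3: pick the short-exposure frame with the most detail;
    // only these two frames are blended from here on.
    let bestShort = shortFrames.max { $0.sharpness < $1.sharpness }!

    // Step 4: blend pixel by pixel, taking detail from the short frame
    // and tone, color, and luminance from the synthetic long.
    let w = detailWeights(short: bestShort, long: syntheticLong)
    var out = syntheticLong
    for i in out.pixels.indices {
        out.pixels[i] = w[i] * bestShort.pixels[i] + (1 - w[i]) * out.pixels[i]
    }
    return out   // Step 5: the final image
}
```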

Compared to the normal Smart HDR mode, Deep Fusion takes longer to process each shot (just over a second), which means it isn't suited to quick, spur-of-the-moment photos.

Image via SlashGear
