Update [02-10-19 @8pm IST]: Apple has finally rolled out Deep Fusion technology with its latest iOS 13.2, although earlier reports suggested otherwise.
News reports claimed that Apple’s Deep Fusion technology, teased at the Apple launch event, would arrive today; however, it seems excited users might have to wait a while longer. TechCrunch noted that the feature, which was part of the iOS 13.2 beta, is not arriving today but has been reserved for a later date, though the technology is worth waiting for.
According to Apple, its Deep Fusion photography technology captures several images before, during, and after the shutter is pressed and combines the best elements at the pixel level to create highly crisp, detailed photos. Available on the Apple iPhone 11 Pro and iPhone 11 Pro Max, the technology requires the Apple A13 Bionic chipset, which rules out any chance of it ever trickling down to earlier iPhone models. It uses not one sensor but both the wide-angle and telephoto sensors to create impressive 12MP photos with excellent clarity and detail, and everything works under the hood, which means most of the time you wouldn’t even know the difference it made to the image.
Apple has made aggressive progress in its camera technology, combining the sensors and lenses with machine-learning software and specialized hardware that enables the device to capture detailed images that can outperform standard HDR imaging in some cases, if not all. Before this technology came into existence, the iPhone 7 and later already combined images from two sensors to get the best possible shot; Deep Fusion still takes the output from both sensors (excluding the ultra-wide-angle) but applies aggressive machine-learning techniques to create the best possible image.
The changes people will most likely notice are in high-frequency subjects such as clothing or skin textures, which are hard for cameras to render accurately. That is exactly what this technology is designed to overcome, and the entire process takes just about a second to complete.
How does Apple Deep Fusion Technology work?
A word of caution: you are about to read some technical details here.
The device captures images via both the telephoto and wide-angle sensors. First, it captures an image at a negative EV, a darker frame similar to what you would get by manually lowering the exposure, and uses that image to map out the sharpness of objects in the scene. It then captures three frames at EV0 and, finally, a long-exposure frame at a positive EV, and combines all of these images at the pixel level, which lets it pick the best elements for the finished photo. Four neural network models map out where each element and pixel belongs in the image’s frequency spectrum in order to output a finished, highly detailed Deep Fusion image.
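Apple has not published how its neural networks score and merge pixels, but the core idea of a pixel-level merge across bracketed frames can be sketched in a toy form: estimate local sharpness in each frame and, for every pixel, keep the value from whichever frame is sharpest there. The NumPy code below is a minimal illustration under that assumption; the `local_sharpness` heuristic (local variance) and the function names are illustrative stand-ins, not Apple's actual algorithm.

```python
import numpy as np

def local_sharpness(img, k=3):
    """Approximate per-pixel sharpness as the variance of a small
    neighborhood -- a crude stand-in for Apple's learned frequency
    analysis, which is not public."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].var()
    return out

def fuse_frames(frames):
    """For every pixel, take the value from whichever frame is locally
    sharpest -- a toy analogue of Deep Fusion's pixel-level merge of
    the EV-, EV0, and EV+ captures."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    sharpness = np.stack([local_sharpness(f) for f in frames])  # (n, h, w)
    best = sharpness.argmax(axis=0)                             # (h, w)
    stacked = np.stack(frames)                                  # (n, h, w)
    h, w = best.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return stacked[best, rows, cols]
```

Fed a flat (blurry) frame and a textured (sharp) frame of the same scene, `fuse_frames` would return the textured frame's pixels wherever its local detail wins, which is the intuition behind merging the short, standard, and long exposures into one image.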
You can bypass the technology simply by using the ultra-wide-angle camera, which isn’t included in the process, or by capturing an image and immediately tapping its preview, which should keep the processing from running to completion.
Deep Fusion technology arrives on iOS 13.2!
Check out the 2019 iPhone 11 series on Amazon right here!