The iPhone 11 Pro now has three lenses on the back.

Apple describes the feature this way: “Deep Fusion, coming later this fall, is a new image processing system enabled by the Neural Engine of A13 Bionic. Deep Fusion uses advanced machine learning to do pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo.”

Deep Fusion will work with the dual-camera (Ultra Wide and Wide) system on the iPhone 11. It also works with the triple-camera system (Ultra Wide, Wide and Telephoto) in the iPhone 11 Pro range.

How Deep Fusion works

Deep Fusion fuses nine separate exposures together into a single image, Schiller explained. What that means is that when you capture an image in this mode, your iPhone’s camera captures four short images, one long exposure and four secondary images each time you take a photo.

Before you press the shutter button, the camera has already shot the four short images and the four secondary images; when you press the shutter button, it takes the one long exposure, and in just one second the Neural Engine analyses the combination and selects the best among them (a sketch of this pre-shutter buffering appears below).

In that time, Deep Fusion on the A13 chip goes through every pixel in the image (all 24 million of them) to select and optimize each one for detail and noise – all in a second. That’s why Schiller calls it “mad science.”

The result: copious amounts of image detail, impressive dynamic range and very low noise. You’ll really see this if you zoom in on detail, particularly with textiles. This is why Apple’s example image featured a man in a multi-colored woollen jumper.

“This kind of image would not have been possible before,” said Schiller. The company also claims this is the first time a neural engine is “responsible for generating the output image.”
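
Apple has not published Deep Fusion’s internals, but the capture flow Schiller describes (continuous pre-shutter buffering, plus one long exposure on demand) maps naturally onto a small rolling buffer. The Swift sketch below is purely illustrative; the `Frame` type, the `PreShutterBuffer` class and its methods are hypothetical names, not Apple API.

```swift
import Foundation

// Toy stand-in for a captured exposure. Structure and names are
// hypothetical: Apple has not published Deep Fusion's internals.
struct Frame {
    let timestamp: TimeInterval
    let exposure: String          // "short", "secondary", or "long"
}

// Rolling pre-shutter buffer: keeps the four most recent short and
// four most recent secondary frames, mirroring how Deep Fusion is
// described as having already shot them before the shutter press.
final class PreShutterBuffer {
    private var shortFrames: [Frame] = []
    private var secondaryFrames: [Frame] = []
    private let capacity = 4      // four short + four secondary, per Schiller

    func push(_ frame: Frame) {
        switch frame.exposure {
        case "short":
            shortFrames.append(frame)
            if shortFrames.count > capacity { shortFrames.removeFirst() }
        case "secondary":
            secondaryFrames.append(frame)
            if secondaryFrames.count > capacity { secondaryFrames.removeFirst() }
        default:
            break
        }
    }

    // On shutter press: pair the eight buffered frames with the one
    // long exposure, yielding the nine inputs the fusion step works from.
    func shutterPressed(longExposure: Frame) -> [Frame] {
        shortFrames + secondaryFrames + [longExposure]
    }
}
```

This is why the capture feels instantaneous: the expensive part of the bundle already exists in the buffer, and the shutter press only adds the long exposure before fusion begins.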
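The per-pixel “select and optimize” step itself is done by a learned model on the Neural Engine. As a crude hand-written stand-in, the sketch below blends candidate frames with per-pixel weights derived from local gradient magnitude, so detailed regions lean on the sharpest frame while flat regions average their noise away. The `fuse` function and its frame layout are assumptions for illustration, not Apple’s algorithm.

```swift
// Crude pixel-by-pixel fusion: a hand-written heuristic standing in
// for the Neural Engine's learned selection. Each frame is a 2-D
// array of luminance values in [0, 1], all the same size.
func fuse(frames: [[[Double]]]) -> [[Double]] {
    let h = frames[0].count
    let w = frames[0][0].count
    var out = Array(repeating: Array(repeating: 0.0, count: w), count: h)

    for y in 0..<h {
        for x in 0..<w {
            var weightedSum = 0.0
            var totalWeight = 0.0
            for frame in frames {
                // Local detail score: gradient magnitude toward the
                // right and lower neighbours (clamped at the edges).
                let right = x + 1 < w ? frame[y][x + 1] : frame[y][x]
                let down  = y + 1 < h ? frame[y + 1][x] : frame[y][x]
                let detail = abs(frame[y][x] - right) + abs(frame[y][x] - down)
                // A small floor keeps flat regions contributing, so
                // noise averages out where there is no detail to keep.
                let weight = detail + 1e-3
                weightedSum += weight * frame[y][x]
                totalWeight += weight
            }
            out[y][x] = weightedSum / totalWeight
        }
    }
    return out
}
```

A naive CPU loop like this over nine frames of 24 million pixels would be far too slow; part of the point of the A13’s Neural Engine is that the learned, hardware-accelerated version finishes inside the one-second window Schiller quotes.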