Let’s talk about pixels. Specifically, the iPhone 14 pixels. More specifically, the iPhone 14 Pro pixels. Because while the main news is that the latest Pro models offer a 48MP sensor instead of 12MP, this isn’t actually the most significant improvement Apple has made to the camera this year.
In fact, of the four biggest camera changes this year, the 48MP sensor is to me the least important. But bear with me, because there's a lot to unpack before I explain why I think the 48MP sensor matters much less than:
- Sensor size
- Pixel binning
- The Photonic Engine
One 48MP sensor, two 12MP sensors
Colloquially, we talk about the iPhone camera in the singular, then refer to three different lenses: main, wide, and telephoto. We do this because it's familiar (that's how DSLR and mirrorless cameras work: one sensor, multiple interchangeable lenses), and because that's the illusion Apple creates in the Camera app, for simplicity's sake.
The reality is, of course, different. The iPhone actually has three camera units. Each camera module is separate, and each has its own sensor. When you tap, say, the 3x button, you're not just choosing the telephoto lens; you're switching to a different sensor. As you move the zoom slider, the Camera app automatically and invisibly selects the appropriate camera unit, and then does any necessary cropping.
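As a rough sketch of what that switching involves, here's a toy model in Python. The module names, zoom factors, and selection rule are my illustrative assumptions, not Apple's actual implementation:

```python
# Toy model of zoom-to-camera-module selection. All names and
# thresholds here are illustrative assumptions, not Apple's real logic.
MODULES = {
    "wide": 0.5,       # ultra-wide module, 0.5x native zoom
    "main": 1.0,       # main module, 1x native zoom
    "telephoto": 3.0,  # telephoto module, 3x native zoom
}

def select_module(zoom: float) -> str:
    """Pick the module with the highest native zoom factor that does
    not exceed the requested zoom (assumes zoom >= 0.5)."""
    eligible = [(factor, name) for name, factor in MODULES.items() if factor <= zoom]
    return max(eligible)[1]

def crop_factor(zoom: float) -> float:
    """Any remaining zoom is achieved by cropping the chosen sensor."""
    return zoom / MODULES[select_module(zoom)]

print(select_module(2.0), crop_factor(2.0))  # main 2.0
print(select_module(3.0), crop_factor(3.0))  # telephoto 1.0
```

The point of the sketch: a "2x" photo is really the main sensor cropped by a factor of two, while "3x" hands off to the telephoto sensor with no crop at all.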
Only the main camera module has a 48MP sensor; the other two still use 12MP units.
Apple was pretty upfront about this when introducing the new models, but it's an important detail that some may have missed (emphasis ours):
For the first time ever, the Pro lineup features a new 48MP Main camera with a quad-pixel sensor that adapts to the photo being captured, and features second-generation sensor-shift optical image stabilization.
48MP sensor works part-time
Even when using the main camera, with its 48MP sensor, you'll still be taking 12MP photos by default. Again, Apple:
For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.
The only time you'll shoot 48MP photos is when all of the following are true:
- You are using the main camera (not telephoto or wide angle)
- You are shooting in ProRAW (which is off by default)
- You are shooting in decent light
If you want to do that, here’s how. But mostly, you won’t…
Apple’s approach makes sense
You might ask: why give us a 48MP sensor if we're mostly not going to use it?
Apple's approach makes sense, because there are in fact very few occasions when shooting at 48MP is better than shooting at 12MP. And since doing so creates much larger files, eating through your storage at an alarming rate, it doesn't make sense for it to be the default.
I can think of only two scenarios where taking a 48MP photo would be useful:
- You want to print the image at a large size
- You need to crop the image heavily
This second reason is a bit questionable, because if you need to crop heavily, you might be better off using the 3x camera.
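Some quick arithmetic shows why. Cropping to a zoom-times-tighter field of view keeps only 1/zoom² of the pixels; here's a simplified sketch that ignores lens quality and assumes a straight center crop:

```python
def megapixels_after_crop(megapixels: float, zoom: float) -> float:
    """Resolution left after cropping a frame to a `zoom`-times-tighter
    field of view: both width and height shrink by `zoom`."""
    return megapixels / zoom ** 2

# Simulating 3x zoom by cropping instead of switching modules:
print(megapixels_after_crop(48, 3))  # 48MP main cropped to 3x -> ~5.3MP
print(megapixels_after_crop(12, 3))  # 12MP frame cropped to 3x -> ~1.3MP
# The dedicated 3x telephoto keeps its full 12MP, which is why it
# usually beats heavy cropping.
```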
Now let’s talk about sensor size
When comparing any smartphone camera to a high-end DSLR or mirrorless camera, there are two big differences.
One of these is the quality of the lenses. Standalone cameras can have much better lenses, both because of physical size and because of cost. It is not unusual for a professional or amateur photographer to spend a four-figure sum on a single lens. Of course, smartphone cameras can’t compete with that.
The second is the size of the sensor. All other things being equal, the larger the sensor, the better the image quality. Smartphones, by the nature of their size, and all the other technology you need to fit in, have sensors that are much smaller than standalone cameras. (They also have limited depth, which puts another big limitation on sensor size, but we don’t need to get into that.)
A smartphone-sized sensor limits image quality and also makes it difficult to achieve a shallow depth of field — which is why the iPhone does it artificially, with portrait mode and cinematic video.
Apple’s large sensor + limited megapixel approach
While there are both obvious and less obvious limits to the size of sensor you can use in a smartphone, Apple has historically used larger sensors than other smartphone brands, which is part of the reason the iPhone has long been seen as the phone to choose for camera quality. (Samsung later switched to doing this as well.)
But there is a second reason. If you want the best possible image quality from a smartphone, you also want the pixels to be as large as possible.
This is why Apple religiously stuck to 12MP resolution while brands like Samsung crammed up to 108MP into sensors of a similar size. Cramming a lot of pixels into a small sensor significantly increases noise, which is especially noticeable in low-light photos.
Well, it took me a while to get here, but I can now, finally, say why I think the larger sensor, pixel binning, and the Photonic Engine are a much bigger deal than the 48MP resolution…
#1: iPhone 14 Pro/Max sensor 65% larger
This year, the main camera sensor on the iPhone 14 Pro/Max is 65% larger than last year's. Obviously, that's still tiny compared to a standalone camera's, but for a smartphone camera it's huge (pun intended)!
But, as mentioned above, if Apple crammed four times the number of pixels into a sensor that is only 65% larger, the result would actually be worse quality! And that's exactly why you'll still be taking 12MP photos most of the time.
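A back-of-the-envelope calculation makes the point. The 65% figure is Apple's; the rest is simple arithmetic:

```python
# Relative per-pixel area: 4x the pixels on a sensor only 1.65x the area.
area_ratio = 1.65       # new sensor area vs last year's (65% larger)
pixel_ratio = 48 / 12   # 4x the pixel count

# Each native 48MP pixel, relative to last year's 12MP pixels:
native_pixel_area = area_ratio / pixel_ratio
print(f"native 48MP pixel: {native_pixel_area:.0%} of last year's pixel area")
# ~41%: individually smaller pixels gather less light, so more noise

# With 2x2 binning, four pixels pool their light into one quad pixel:
binned_pixel_area = native_pixel_area * 4
print(f"binned quad pixel: {binned_pixel_area:.0%} of last year's pixel area")
# 165%: the full benefit of the larger sensor
```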
#2: Pixel binning
To shoot 12MP photos on the main camera, Apple uses pixel binning. This means the data from four pixels is combined into one output pixel (their values are averaged), so the 48MP sensor mostly behaves like a larger 12MP one.
This illustration is simplified but gives the basic idea:
What does this mean in practice? Pixel size is measured in microns (millionths of a meter). Most premium Android smartphones have pixels measuring between 1.1 and 1.8 microns. The iPhone 14 Pro/Max, when using the sensor in 12MP mode, has effective pixels measuring 2.44 microns. That's a genuinely big improvement.
Without pixel binning, a 48MP sensor would – most of the time – be a downgrade.
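The averaging step itself is conceptually simple. Here's a minimal sketch of 2x2 binning in plain Python; it's illustrative only, since the real pipeline works on raw Bayer data and is far more sophisticated than a plain average:

```python
def bin_2x2(sensor):
    """Average each 2x2 block of pixel values into one 'quad pixel'.

    Output is a quarter of the input resolution; averaging four
    readouts per output pixel reduces per-pixel noise.
    """
    h, w = len(sensor), len(sensor[0])
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return [
        [
            (sensor[i][j] + sensor[i][j + 1]
             + sensor[i + 1][j] + sensor[i + 1][j + 1]) / 4
            for j in range(0, w, 2)
        ]
        for i in range(0, h, 2)
    ]

# A toy 4x4 "48MP-style" readout becomes a 2x2 "12MP-style" image:
raw = [
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(raw))  # [[13.0, 23.0], [33.0, 43.0]]
```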
#3: The Photonic Engine
Smartphone cameras obviously can't compete with standalone cameras on optics and physics, but where they can compete is computational photography.
Computational photography has been used in SLR cameras for literally decades. When you switch metering modes, for example, you're instructing the computer inside the DSLR to interpret the raw data from the sensor in a different way. Likewise, in DSLRs and mirrorless cameras you can choose from a variety of picture modes, which again tell the processor how to adjust the data from the sensor to achieve the desired result.
So computational photography actually plays a much bigger role in standalone cameras than many realize. And Apple is very, very good at computational photography. (OK, it's not quite there yet with cinematic video, but give it a few years…)
The Photonic Engine is the custom chip that supports Apple's Deep Fusion approach to computational photography, and I'm already seeing a huge difference in dynamic range in my images. (Examples to follow in next week's iPhone 14 diary piece.) It's not just the range itself, but the smart decisions being made about which shadows to lift and which highlights to tame.
The result is noticeably better images, which have as much to do with software as hardware.
- A significantly larger sensor (by smartphone standards) makes a real difference to image quality.
- Pixel binning means Apple has effectively created a much larger 12MP sensor for most photos, realizing the benefits of that larger sensor.
- The Photonic Engine is a chip dedicated to image processing, and I'm already seeing its real-life benefits.
More to follow in the iPhone 14 diary piece, as I put the camera through a more thorough test over the next few days.