
The 48MP sensor wasn’t the camera’s biggest news this year

Let’s talk about pixels. Specifically, iPhone 14 pixels. More specifically, iPhone 14 Pro pixels. While the headlines suggest the big news is that the latest Pro models offer a 48MP sensor instead of 12MP, that’s not the most significant improvement Apple has made to the camera this year.

In fact, of the four biggest camera changes this year, the 48MP sensor is the least important to me. But bear with me; there’s a lot we need to unpack before I can explain why I think the 48MP sensor matters far less than these:

  • Sensor size
  • Pixel binning
  • Photonic Engine

One 48MP sensor, two 12MP sensors

Colloquially, we talk about the iPhone camera in the singular, and refer to three different lenses: main, ultra wide, and telephoto. It’s an understandable way to think about it – that’s how DSLR and mirrorless cameras work, one sensor, multiple (interchangeable) lenses – and it’s the illusion Apple creates in the Camera app.

The reality is, of course, different. The iPhone actually has three separate camera modules, and each has its own sensor. When you tap the 3x button, you aren’t choosing a telephoto lens; you’re switching to a different sensor. When you drag the zoom control, the Camera app automatically and invisibly selects the appropriate camera module and then crops to deliver the zoom level you asked for.
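For the developer-minded, here’s roughly what that switching looks like in AVFoundation terms. This is a minimal sketch, assuming an existing capture session and ignoring error handling; the exact switch-over zoom factors vary by model.

```swift
import AVFoundation

// Minimal sketch: the three rear modules can be addressed as one "virtual" device,
// and the system switches between the underlying sensors as the zoom factor changes.
func configureTripleCamera(in session: AVCaptureSession) throws {
    guard let device = AVCaptureDevice.default(.builtInTripleCamera,
                                               for: .video,
                                               position: .back) else { return }

    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    // The separate physical modules behind the virtual device, each with its own sensor...
    print(device.constituentDevices.map(\.localizedName))
    // ...and the zoom factors at which the system hands off from one module to the next
    // (factors are relative to the widest module, not to the Camera app's "1x" label).
    print(device.virtualDeviceSwitchOverVideoZoomFactors)

    // The app only ever sets a zoom factor; choosing the right sensor, and cropping
    // in between the modules, happens automatically and invisibly.
    try device.lockForConfiguration()
    device.videoZoomFactor = 3.0
    device.unlockForConfiguration()
}
```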

Only the main camera module has the 48MP sensor; the other two modules are still 12MP.

Apple was pretty clear about this when it introduced the new models, but here’s an important detail that some have missed (emphasis ours):

For the first time ever, the Pro lineup features a new 48MP main camera with a quad-pixel sensor and second-generation sensor-shift optical image stabilization.

The 48MP sensor works part-time

Even when you use the main camera, with its 48MP sensor, you take 12MP photos by default. Again, here’s Apple:

For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.

You only shoot at 48MP when all of the following are true:

  • You’re using the main camera (not the telephoto or ultra wide)
  • You’re shooting in ProRAW (which is off by default)
  • You’re shooting in good light

If you want to do that, here’s how. But most of the time, you won’t want to…
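(For the curious, here’s roughly what those conditions translate to in code: a minimal AVFoundation sketch for iOS 16, assuming a capture session already configured with the rear main camera and a photo output. The good-light condition reflects how the Camera app behaves; the system can still fall back to a binned capture when the light is poor.)

```swift
import AVFoundation

// Minimal sketch (iOS 16+): requesting a full-resolution Apple ProRAW capture.
// Assumes `device` is the rear main (wide) camera and `photoOutput` is already
// attached to a running AVCaptureSession; a real app also implements the delegate.
func captureFullResolutionProRAW(device: AVCaptureDevice,
                                 photoOutput: AVCapturePhotoOutput,
                                 delegate: AVCapturePhotoCaptureDelegate) {
    // 1. ProRAW is opt-in on the photo output.
    guard photoOutput.isAppleProRAWSupported else { return }
    photoOutput.isAppleProRAWEnabled = true

    // 2. Ask for the largest photo dimensions the active format offers
    //    (48MP exists only on the main camera of the 14 Pro / Pro Max).
    guard let largest = device.activeFormat.supportedMaxPhotoDimensions
        .max(by: { $0.width * $0.height < $1.width * $1.height }) else { return }
    photoOutput.maxPhotoDimensions = largest

    // 3. Build capture settings around an Apple ProRAW pixel format.
    guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes
        .first(where: { AVCapturePhotoOutput.isAppleProRAWPixelFormat($0) }) else { return }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    settings.maxPhotoDimensions = largest

    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```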

Apple’s approach makes sense

You may be wondering: why would Apple give us a 48MP sensor and then mostly not use it?

Apple’s approach makes sense because there are, in reality, very few cases in which shooting at 48MP is better than shooting at 12MP. There’s no point in making it the default, since doing so creates very large files and quickly eats into storage space.

I can think of only two scenarios where it would be useful to take a 48MP photo:

  1. You want to print the image in a large format
  2. You need to crop the image heavily

Even that second reason is debatable, because if you know you’ll need to crop significantly, you’re usually better off switching to the 3x camera.

Now let’s talk about sensor size

When comparing a smartphone camera to a high-quality DSLR or mirrorless camera, there are two big differences.

One is the quality of the lenses. Standalone cameras can have very good lenses because their physical size and cost allow it. It’s not uncommon for a professional or keen amateur photographer to spend four-figure sums on a single lens. Smartphone cameras simply can’t compete with that.

The second is the size of the sensor. All other things being equal, the larger the sensor, the better the image quality. Smartphones, by the nature of their size, have far smaller sensors than standalone cameras, in order to leave room for all the other technology. (They’re also limited in depth, which imposes another important constraint on sensor size, but we won’t go into that here.)

A smartphone-sized sensor limits image quality and makes it difficult to achieve a shallow depth of field, which is why the iPhone fakes it artificially with Portrait mode and Cinematic video.

Apple’s large sensor + limited megapixel approach

While there are clear limits to the sensor size you can fit into a smartphone, Apple has historically used larger sensors than other smartphone brands, which is part of why the iPhone has long been considered a quality camera phone. (Samsung has since done the same.)

But there is a second reason. If you want better-quality photos from a smartphone, you also want the individual pixels to be as large as possible.

This is why Apple religiously stuck with 12MP while brands like Samsung crammed as many as 108MP into sensors of a similar size. Squeezing that many pixels onto a small sensor greatly increases noise, which is especially noticeable in low-light photos.

Well, it took a while to get here, but now I can finally explain why I think the larger sensor, pixel binning, and the Photonic Engine matter much more than a 48MP sensor…

No. 1: The iPhone 14 Pro/Max sensor is 65% larger

This year, the iPhone 14 Pro/Max’s main camera sensor is 65% larger than last year’s. Obviously, that’s nothing compared to a standalone camera, but for a smartphone camera it’s huge!
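To put rough numbers on that (treating Apple’s published pixel sizes as assumptions: about 1.9 microns for last year’s main camera, and 1.22 microns for this year’s native 48MP pixels), quadrupling the pixel count on a sensor that is only 65% larger leaves each individual pixel with well under half the light-gathering area of last year’s:

```swift
// Back-of-the-envelope check: light-gathering area scales with the square of pixel size.
let lastYearPixel = 1.9   // microns, iPhone 13 Pro main camera (12MP)
let nativePixel   = 1.22  // microns, iPhone 14 Pro main camera in 48MP mode

let areaRatio = (nativePixel * nativePixel) / (lastYearPixel * lastYearPixel)
print(areaRatio)  // ≈ 0.41, i.e. each native 48MP pixel gathers ~41% of the light of last year's pixel
```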

But, as mentioned earlier, if Apple simply crammed four times the number of pixels into a sensor that’s only 65% larger, each pixel would be smaller than before and quality would actually suffer! That’s why you’ll most likely be shooting 12MP photos, and that’s thanks to…

No. 2: Pixel binning

Apple uses pixel-binning technology when taking 12MP photos on the main camera. This means the data from each group of four pixels is combined into a single pixel (by averaging the values), so the 48MP sensor effectively behaves like a much larger 12MP one.

This explanation is simplified but gives the basic idea:

[Image: iPhone 14 pixel binning explained]

Why does this matter? Pixel size is measured in microns (millionths of a meter). Most premium Android smartphones have pixels in the 1.1–1.8 micron range. The effective pixel size of the iPhone 14 Pro/Max is 2.44 microns when the sensor is used in its 12MP mode, and that really is notable progress.
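If you think of the sensor read-out as a grid of brightness values, binning is conceptually just the following. This is a deliberately over-simplified sketch; the real process happens on the sensor and in the image signal processor, and is considerably smarter than a plain average.

```swift
// Toy model of 2x2 pixel binning: each block of four native (~1.22 micron) pixels
// is averaged into one effective (~2.44 micron) pixel, turning a noisy 48MP
// read-out into a cleaner 12MP image.
func binTwoByTwo(_ pixels: [[Double]]) -> [[Double]] {
    let rows = pixels.count / 2
    let cols = (pixels.first?.count ?? 0) / 2
    var binned = [[Double]](repeating: [Double](repeating: 0, count: cols), count: rows)
    for r in 0..<rows {
        for c in 0..<cols {
            binned[r][c] = (pixels[2*r][2*c]     + pixels[2*r][2*c + 1]
                          + pixels[2*r + 1][2*c] + pixels[2*r + 1][2*c + 1]) / 4
        }
    }
    return binned
}
```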

Without pixel binning, the 48MP sensor would – for the most part – be a drawback rather than a benefit.

No. 3: The Photonic Engine

We know that smartphone cameras cannot compete with standalone cameras in terms of optics and physics.

Computational photography has actually been used in standalone cameras for decades. When you change metering modes, for example, you’re instructing the computer inside the DSLR to interpret the raw data from the sensor in a different way. Similarly, with DSLRs and all mirrorless cameras you can choose from a variety of picture modes, which again tell the processor how to adjust the data from the sensor to achieve the desired result.

So computational photography actually plays a much bigger role in standalone cameras than many people realize. And Apple is very good at computational photography. (OK, it’s not that good at Cinematic video yet, but give it a few years…)

The Photonic Engine is the dedicated hardware-and-software image pipeline behind Apple’s Deep Fusion approach to computational photography, and I can already see a huge difference in the dynamic range of photos. (Examples to follow in the iPhone 14 Diary next week.) Not just in the range captured, but also in smart decisions about which shadows to lift and which to leave dark.

The result is noticeably better images, which owe as much to software as they do to hardware.

Summing up

A significantly larger sensor (from a smartphone perspective) is important when it comes to image quality.

Pixel binning means that, for most photos, Apple has effectively created a much larger 12MP sensor, letting you reap the benefits of the bigger sensor.

The Photonic Engine means dedicated image processing, and I can already see the real-life benefits of this.

More to follow in the iPhone 14 diary as we put the camera through more extensive testing over the next few days.


