
Debunking the Myth of Full Frame Superiority (Again)



Equivalence does not prove the superiority of full frame (FF) over crop sensor (APS-C, M43) cameras. In fact, equivalence shows that equivalent photos are equivalent in terms of angle of view, depth of field, motion blurring, brightness, etc…including QUALITY. Yes, equivalent photos are equivalent in quality.

Refer to the illustration above where we have a full frame lens (BLUE) on a full frame sensor (GREEN). Some full frame sensors are now capable of shooting in crop mode (APS-C) where only the area in RED is used. When a crop sensor LENS is used on a full frame sensor, only the area in RED is illuminated and the rest of the areas in GREEN are in complete darkness and therefore do not contribute to light gathering. This is also true when the full frame is forced to shoot in crop mode with a full frame lens; the camera automatically crops the image to the area in RED and the rest of the areas in GREEN are thrown away.

As per the illustration above, we can see that the central half of the full frame sensor is really just an APS-C sensor. If a crop sensor were indeed inferior in terms of light gathering, logic would tell us that the center of every full frame shot should be noisier than the rest of the frame. We know this is not true. The light coming in from the lens spreads evenly throughout the entire frame. Total light is spread over total area. As a matter of fact, the central half is the cleanest part because lenses are not perfect and become worse as you move away from the center.

Now suppose we have a full frame 50mm lens in front of a full frame sensor. Notice that the crop mode area (RED) does not capture the entire image that is projected by the 50mm lens. The angle of view is narrower than full frame (GREEN). There are several ways we can capture the entire 50mm view while using crop mode:

  1. move backward
  2. use a wider lens (approx 35mm)

Both methods allow the RED area to capture more of the scene. A wider scene means more light is gathered. It means that if we force the RED area (APS-C) to capture exactly the same image as the GREEN area (FF) we will be forced to capture more light! More light means less noise! In equivalent images, APS-C is actually cleaner than full frame!!!

For example, if we go with option #2 using a wider lens, equivalent photos would be something like this:

RED (APS-C): 35mm, 1/125, f/5.6, ISO 100
GREEN (FF): 50mm, 1/125, f/8, ISO 200

This is exactly what the equivalence theory proposes. The difference in f-stop is to ensure that they have the same depth of field given the same distance to subject. The faster f-stop for APS-C (f/5.6) guarantees that twice as much light is gathered. Notice that the full frame is now forced to shoot at a higher ISO to compensate for the reduced light coming in through the narrower f/8 aperture. So if we are to use the same camera for both shots, say, a Nikon D810 shooting in normal mode with a 50mm lens and in crop mode with a 35mm lens, the crop mode image will be noticeably better. In equivalent photos, crop mode comes out one stop better. In equivalent photos, the smaller sensor results in BETTER quality!!!
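As a quick sanity check on these numbers, here is a short Python sketch of my own (using the standard f-stop definition N = focal length / entrance pupil diameter, with light per unit area taken as proportional to 1/N²):

```python
# f-stop: N = focal_length / pupil_diameter, so pupil D = f / N.
def pupil_diameter(focal_mm, f_stop):
    return focal_mm / f_stop

# Light per unit sensor area scales as 1 / N^2.
def relative_illuminance(f_stop):
    return 1.0 / f_stop ** 2

apsc_pupil = pupil_diameter(35, 5.6)  # RED (APS-C): 35mm f/5.6
ff_pupil = pupil_diameter(50, 8.0)    # GREEN (FF): 50mm f/8

# Equivalent settings have the same entrance pupil diameter...
print(round(apsc_pupil, 2), round(ff_pupil, 2))  # 6.25 6.25

# ...but f/5.6 puts about twice the light per unit area on the
# sensor compared to f/8 (a one-stop difference).
ratio = relative_illuminance(5.6) / relative_illuminance(8.0)
print(round(ratio, 2))  # 2.04
```

The equal entrance pupils (6.25mm) are what equivalence means by "same total light"; the roughly 2× illuminance ratio is the one-stop exposure difference discussed above.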

The story does not end here though. The full frame shot has twice the area of the crop mode shot. If both images are printed at the same size, the crop mode shot will need to be enlarged more than the full frame shot. Enlargement results in loss of quality and the full frame image will have an advantage over the crop mode image. Whatever the crop mode shot gained by the increase in gathered light is lost by a proportional amount during enlargement. In the end, both full frame and crop mode shots result in exactly THE SAME print quality!!!

Bottom line: full frame will not give you cleaner images than crop sensors, assuming they share the same sensor technology (e.g. D800, D7000, K5). They will result in equivalent print quality if forced to shoot equivalent images.

Full frame superiority busted!


Debunking Equivalence Part 2

In my previous post Debunking Equivalence I covered in detail the major flaws of this concept called “equivalence”. Mind you, not everything in equivalence is wrong. Equivalence in field of view and depth of field makes total sense. What does not make sense is the equivalence in exposure. This “exposure equivalence” is what full frame fanbois sell to unsuspecting gear heads. It is supposed to prove that full frame is superior to APS-C, m43 and smaller sensor cameras.

In this post, I will use basic math to debunk the myth. Just enough math that I learned when I was in first grade — seriously.

Recall the equivalence comparison between m43 and full frame:

m43: 25mm, f/5.6, 1/125s, ISO 100

FF: 50mm, f/11, 1/125s, ISO 400

Ignore the ISO settings for now. Let us concentrate on the f-stop and shutter speed settings. The reason, they say, that f/5.6 @ 1/125 is equivalent to f/11 @ 1/125 is that both gather the same amount of light by virtue of the difference in focal length. The longer 50mm lens will have roughly the same entrance pupil diameter as the 25mm lens. The difference in focal length, of course, is proportional to the sensor size.

Now let us use arbitrary units of measure and consider the ratio X/Y, where X is the total amount of light and Y is the sensor size. Supposing that for m43 we have the ratio 4/8, a full frame sensor (4x area), according to equivalence, would have a ratio 4/32. Again:

m43 = 4/8 vs FF = 4/32

So total light is constant at 4 units and the denominators are the respective sensor size units 8 and 32 for m43 and full frame respectively. Still with me? Obviously, they are not the same. Not in any known universe. This is why for the same amount of light, the full frame will come out two stops underexposed. And this is why equivalence fanbois will insist that an increase in ISO is necessary; the full frame shot is very dark! Now we know that bumping the ISO does not increase the amount of light but will make the image brighter. I’m not sure how to represent that in numbers because nothing has changed really in terms of light. ISO bump is fake. It’s a post-capture operation that does not change the exposure or captured light. Furthermore, an ISO bump introduces noise and that is why equivalence forces the comparison to be performed at the same print size. This method of cheating does miracles for the fanbois. Let’s see how it works:

If we agree to compare at the same print size of 16 units, we now have 

m43: 4/8 upsampled to 4/16

FF: 4/32 downsampled to 4/16

Magic!!! They are now the same! They are equivalent! This is true for any print size therefore equivalence is correct!!! Therefore full frame is superior because at the same f-stop it will have gathered more light! 

Well, not so fast! The amount of light did not change. It was constant at 4 units. The apparent change in signal performance was not due to light but due to resampling. Do not equate resampling to total light. They are not the same and are completely independent of each other. Resampling is like changing your viewing distance. I can make my crappy shot look cleaner simply by viewing it from farther away. Did I change total light by moving away from the image? Stupid, isn’t it?

That is the very naive reasoning behind equivalence. Not only is the conclusion stupid but the assumption here is that there is absolutely NO NOISE! Noise is ever present; it is proportional to the incoming light and is also a property of the sensor itself. Let’s see what happens when we introduce noise.

Supposing that noise is 1 unit. We now have:

m43: signal = 4/8, noise = 1/8

FF: signal = 4/32, noise = 4/32

Therefore the signal to noise ratio (SNR) are as follows:

m43 = 4:1

FF = 4:4 or 1:1

The full frame is obviously inferior! It makes sense because it was underexposed by two stops (4x)!!! If you boost the signal by increasing the ISO you are boosting noise as well. In low light situations where noise is more pronounced, a 4:2 SNR for m43 will be 4:8 for full frame. There is more noise than signal in the full frame image! At 4:4 SNR for m43, full frame is at 4:16. There is nothing but noise in the full frame. You just can’t underexpose indefinitely and bump the ISO! That doesn’t work in all situations. This is why images at higher ISOs look bad. There is more noise than signal in low light situations. Yet, equivalence fanbois will try to convince you that ISO 6400 on m43 is the same as two stops underexposure plus ISO 25600 in full frame. It’s not.
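The unit arithmetic above is easy to reproduce; here is a minimal Python sketch of it (these are the post's illustrative units, not physical measurements, with per-area noise held constant so that total noise scales with sensor area):

```python
from fractions import Fraction

# Illustrative units from the text: total signal over sensor area.
m43_area, ff_area = 8, 32

m43_signal, m43_noise = Fraction(4), Fraction(1)

# Total signal is held at 4 units (equivalence's "same total light"),
# while noise scales with the 4x larger full frame area.
ff_signal = Fraction(4)
ff_noise = m43_noise * (ff_area // m43_area)

print(m43_signal / m43_noise)  # 4 -> SNR of 4:1 for m43
print(ff_signal / ff_noise)    # 1 -> SNR of 1:1 for full frame
```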

So again, the equivalence fanbois could not accept this fact. At the sensor level, equivalence has made the full frame look really bad. What can they do? Cheat again! Force the comparison to use the same print size. At a print size of 16 units, noise will be increased or decreased proportional to how much you upsample or downsample. We have:

m43: signal = 4/16, noise = 2/16

FF: signal = 4/16, noise = 2/16

So now the SNR for both are equal at 4:2! Can you see how they manipulate the numbers? They are using image size (number of pixels) to circumvent noise and stupidly equate this to light gathering. The total amount of light has not changed. How could anyone possibly attribute the changes in SNR due to resampling to light? It does not make any sense at all! Look closely though, because this SNR is for the entire image. Most of the signal will be concentrated in the brighter areas. In the darker areas noise will show its teeth. In instances where full frame is severely underexposed there is no saving it. It would look like crap. M43, on the other hand, will happily chug along with 1:1 SNR or better.
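The print-size step works out the same way in code (again a sketch of the post's model: signal totals are kept, and noise is scaled by the resampling factor):

```python
# Resample to a common print size: the signal total is unchanged,
# noise scales in proportion to the up/downsampling factor.
PRINT_SIZE = 16

def resample(signal, noise, area, print_size):
    factor = print_size / area
    return signal, noise * factor

m43 = resample(4, 1, 8, PRINT_SIZE)   # upsampled 2x   -> (4, 2.0)
ff = resample(4, 4, 32, PRINT_SIZE)   # downsampled 2x -> (4, 2.0)
print(m43, ff)  # both land on the same 4:2 SNR at the shared print size
```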

This is why when you start comparing two full frame cameras with different resolutions you will notice variations in SNR results at different print sizes (Megapixel Hallucinations). If the SNR changes even when the sensor size is held constant then obviously sensor size does not matter. Therefore total light, being proportional to sensor size, does not by itself tell the whole story. What matters is the RATIO of total light to sensor size, otherwise known as EXPOSURE. For SNR to be the same, exposure must be the same for sensors with the same properties (i.e. sensel pitch, efficiency, etc…). Size does not matter.

Equivalence debunked…again!

Blinded By Light

The full frame protagonists are at it again. It’s the same stupid argument. Larger sensor means more TOTAL light gathered, therefore less noise. Stop the bullshit. Please!!!

In fairness, it’s quite easy to be misled by this kind of misinformation. If noise is inversely proportional to the amount of gathered light then it makes sense that a larger sensor would result in cleaner photos. Unfortunately, just looking at the total amount of gathered light is being very short-sighted. It does not give us the whole picture (no pun intended).

Allow me to explain it again for the nth time. But before that, please read the following articles because they explain this concept in greater detail.

1. https://dtmateojr.wordpress.com/2014/02/28/understand-your-lens-part-3/ — Concentrate on understanding the effect of focal length on light intensity because a lot of people tend to ignore this bit. They are too preoccupied with just the aperture opening maybe because they are more familiar with “fast” lenses without even understanding what “fast” really means.

2. https://dtmateojr.wordpress.com/2014/03/08/debunking-the-myth-of-full-frame-superiority/ — If there is one thing that you’d want to fully understand here, make it the “thought experiment” on dividing a full frame sensor which also explains how shutter curtains work.

3. https://dtmateojr.wordpress.com/2014/06/10/debunking-the-myth-of-full-frame-superiority-part-2/ — This is a good counter-argument to the fact that no two digital sensors are exactly the same even if they are of the same type. A D7000 sensor for example is almost every bit the same as the D800 sensor but because of the improved processing the latter may produce better photos. And so I used film as an example because the same film emulsion will always behave the same way regardless of format (size).

4. https://dtmateojr.wordpress.com/2014/05/19/megapixel-hallucinations/ — Some full frame protagonists insist on comparing ENLARGED APS-C images to full frame equivalents in terms of noise. Of course an enlarged APS-C photo will, for the lack of a better word, enlarge everything including noise. This article debunks that by showing the MATH behind resampling as well as showing samples of real SNR measurements of APS-C and full frame sensors.

5. https://dtmateojr.wordpress.com/2014/04/21/rain-can-teach-us-photography/ — explains what happens in a sensor and why PIXEL size and NOT sensor size matters in greater detail.

6. https://dtmateojr.wordpress.com/2014/05/09/understanding-exposure/ — this one is for the equivalence clowns who think they can get away with the bullshit by manipulating ISO. In short, you can’t.

Now if after reading those articles you still need a bigger cluebat then read on…

The biggest mistake that full frame protagonists make is that they equate a sensor to a solar panel. In a solar panel, total light gathered is everything. In a solar panel, every “sensor” contributes to the total energy produced. Of course the bigger the better. Photography though is far from being like a solar panel. Camera sensor pixels are independent of each other. That’s why within an image you will encounter darker parts that are noisy and blown-out parts that have been saturated by light. Each individual pixel receives its own independent number of photons. Pixels can’t share their photons with other pixels. Well, sometimes adjacent pixels do “share” photons but this is an undesirable effect called “sensor bloom”. You can see why looking at noise as a result of the total light gathered is wrong. Noise should be examined at the pixel level because this ultimately defines the efficiency of your sensor.

While it is true that a larger sensor gathers more light than an APS-C or M43 sensor for the same exposure by virtue of the larger area, this argument is not photographically sound. Photographic exposure is all about LIGHT PER UNIT AREA and not just total light. Saturating a pixel only requires a fixed number of photons. Anything more than that is just wasted light because as soon as a pixel clips, “no data” is presented for processing into an image. An APS-C sensor, for example, requires half the total amount of light required by a full frame sensor. If you force the same amount of light onto both a full frame and an APS-C sensor then the latter will oversaturate, i.e. overexpose. It’s like pouring two liters of water into a one liter container. It does not make sense. It is photographically disastrous and plainly stupid. Therefore you get the same noise-free image on an APS-C for half the total amount of light hitting the sensor. Again, you get the same noise-FREE image for HALF the TOTAL amount of light. Again, it’s all about LIGHT PER UNIT AREA and NEVER just the total amount of light gathered. It’s all about light intensity.
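In code form, the same point (arbitrary units of my own, with full frame taken as twice the APS-C area, as the paragraph does):

```python
# Photographic exposure = total light / sensor area.
FF_AREA, APSC_AREA = 2.0, 1.0  # arbitrary units, FF ~ 2x APS-C

def total_light_needed(sensor_area, exposure):
    # For a given exposure (light per unit area), the total light
    # required simply scales with the sensor area.
    return exposure * sensor_area

exposure = 1.0  # same f-stop, same shutter speed, same scene
ff_total = total_light_needed(FF_AREA, exposure)
apsc_total = total_light_needed(APSC_AREA, exposure)

print(apsc_total / ff_total)  # 0.5 -- half the total light, same exposure
```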

A smaller area requires less incident light. A smaller sensor requires a smaller lens-projected image circle. A smaller image circle is what defines a “crop” sensor or crop lens (e.g. Canon EF vs EF-S lenses or Nikon FX vs DX lenses). You crop a full frame image circle just enough to illuminate a smaller sensor. Makes sense?

Unfortunately, there are those that remain blind and they resort to other stupid arguments such as printing at the same size or enlarging a cropped image. Of course a larger sensor is capable of larger prints but this has got nothing to do with light. But let’s be stupid for a minute and assume that a cleaner print is the result of more light gathered during exposure. What happens then if you print at a smaller size? Did you just throw away the light? If not, then where did the light go? If print size has got anything remotely related to light then projecting the same amount of light into a smaller print is like pouring two liters of water into a one liter container, so we expect the smaller print to be overexposed, right? But it isn’t. Print size has got nothing to do with light and therefore has got nothing to do with noise. The apparent increase in noise when you enlarge a print is NOT the effect of light but the effect of resampling (refer to the megapixel hallucination article), i.e. resolution.

In conclusion, total amount of light is just half the truth. The other half is sensor area. Combining both, we get light per unit area otherwise known as photographic exposure. Exposure is what ultimately dictates noise. Smaller area requires lesser light therefore the same exposure results in the same noise profile for different sensor sizes of the same type.

I hope this is the last time I will ever write about this topic. It’s getting long in the tooth and very boring really.

I promise to write a happier article next time. Really. I promise that. 

Debunking the Myth of Full Frame Superiority Part 2

If you are here to understand (why) equivalence (is wrong) then read this: https://dtmateojr.wordpress.com/2014/09/28/debunking-equivalence/

This article is a continuation of my previous post that stirred multiple different forums. I suggest that you read it first before going through this article. Here’s the controversial post: https://dtmateojr.wordpress.com/2014/03/08/debunking-the-myth-of-full-frame-superiority/

Anyway, in an unrelated forum post, a dpreview article was quoted about the benefits of a certain Sigma lens. The quote went like this:

“Sigma’s choice of F1.8 as maximum aperture isn’t a coincidence; it means that the lens will offer the same control over depth of field as an F2.8 zoom does on full frame. What’s more, it will also offer effectively the same light-gathering capability as an F2.8 lens on full frame. By this we mean that it will be able to project an image that’s just over twice as bright onto a sensor that’s slightly less than half the area, meaning the same total amount of light is used to capture the image. This is important as it’s a major determinant of image quality. Essentially it means that APS-C shooters will be able to use lower ISOs when shooting wide open in low light and get similar levels of image noise, substantially negating one of the key advantages of switching to full frame.”

Dpreview may be experts in reviewing photographic gear but it looks like they know nothing about photography itself. That article is completely WRONG. A f-stop is a f-stop. Period. Full stop! A f/2.8 lens will always let through less light compared to f/1.8 REGARDLESS of format, be it full frame or APS-C or m43.

I found another very very simple proof: FILM!!!

Yes, you read that right. FILM.

Back when the word “photographer” actually meant something, people shot on film. What’s interesting to note is that a particular film emulsion is often made available in different formats. The famous Kodak Ektar 100, for example, is available in (the measly) 35mm “full frame”, in the much larger 120 medium format and even in ginormous 8×10 sheets! The sizes may be different but the emulsion remains constant.

Those who are interested can check out the data sheet for Ektar here: http://www.kodak.com/global/en/professional/support/techPubs/e4046/e4046.pdf

Here’s the datasheet for Fuji Velvia: http://www.visionimagelab.com.au/_literature_85837/FUJICHROME_Velvia_100F_Professional_%5BRVP100F%5D

Same emulsion, same response, same everything except size!

If indeed a larger sensor has more light gathering capability than smaller sensors, then the same film in different sizes would need different emulsions, right? If the myth were true then larger films would have to be less sensitive or they would overexpose. Those shooting with 8×10 view cameras would be overexposing their shots if they followed the same data sheet as 35mm film! We know that’s not true. The same sunny f/16 rule applies to 35mm, 120 or 8×10. The same emulsion behaves exactly the same whether it’s 35mm or 120 or 8×10.

There are some panoramic cameras that allow you to shoot in square format as well just by inserting a mask that blocks the sides of the film. You get to use the same film and lens. Now the shooting instructions don’t change. You still expose the film as if you were shooting a panoramic format. If equivalence was even remotely valid then you would have to change your f-stop and/or shutter speed but you don’t. Same lens, same film, same f-stop and shutter speed even if the film size has changed. Film size does not matter!

Plain and simple! Myth has been truly busted the second time!


Megapixel Hallucinations

If you are here to understand (why) equivalence (is wrong) then read this: https://dtmateojr.wordpress.com/2014/09/28/debunking-equivalence/

This post is practically a continuation of one of my controversial posts on debunking the myth of full frame superiority. In that previous post I discussed why full frame is actually no better than its crop sensor counterpart (Nikon D7000 vs D800) in terms of light gathering capability. Now I will try to discuss another aspect of full frame superiority and explain why it leads people to believe that it is superior to smaller sensor cameras when in fact it is not.

A common source of sensor performance data is DXOMark. This is where cameras are ranked in terms of SNR, DR, Colour depth, and other relevant aspects of the equipment. It is important to note that data from this website should be properly interpreted instead of just being swallowed whole. This is what I will try to cover in this post.

One of the most hotly debated pieces of information from DXOMark is low light performance, which is measured in terms of Signal to Noise Ratio (SNR). SNR is greatly affected by the light gathering capacity of a camera’s sensor and this is why it is commonly used to compare the low light performance of full frame and crop sensors. This is also the data most misinterpreted by full frame owners. They use this information to justify spending three times as much for practically the same camera. Let’s see why this is wrong…

Consider the following SNR measurements between the Nikon D7000 and D800:


Isn’t it quite clear that the Nikon D800 is superior to the D7000? Did I just make a fool of myself with that “myth debunking” post? Fortunately, I did not 🙂 I’m still right. That graph above is a normalised graph. DXOMark is in the business of ranking cameras and that is why they are forced to normalise their data. Let’s have a look at the non-normalised graph to see the actual SNR measurements:


Didn’t I say I was right? 🙂

The Nikon D7000 and D800 have the same low light performance! That is because they have the same type of sensor. The D800 is basically just the D7000 enlarged to full frame proportions. Simple physics does not lie. A lot of “photographers” have called me a fool for that “myth debunking” post. Well, I’m not in the business of educating those who are very narrow-minded so I will let them keep believing what they believe is true. But some of us know better, right? 🙂

Let’s not stop here. Allow me to explain why the normalised graphs are like that.

Let me tell you right now that DXOMark is unintentionally favouring more megapixels. That’s just the inevitable consequence of normalisation. Unfortunately, those who do not understand normalisation use this flaw to spread nonsense. The normalised graphs are not the real measured SNR values but are computed values based on an 8Mp print size of approximately 8×10. The formula is as follows:

nSNR = SNR + 20 x log10(sqrt(N1/N2))

where nSNR is the normalised SNR, N1 is the original image size and N2 is the chosen print size for normalisation. In the case of the Nikon D800, N1 = 36Mp and for the D7000, N1 = 16Mp. They are both normalised to a print size of N2 = 8Mp. Based on that formula, the D800’s SNR improves to 44.93 from a measured SNR of 38.4. The D7000, though, only improves a tiny bit to 41 from 38. As you can see, although both cameras started out equal, the normalised values now favour the D800.
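That arithmetic is easy to verify; here is a short Python sketch of the formula above (SNR values in dB, as quoted):

```python
import math

def normalised_snr(snr_db, n1_mp, n2_mp):
    # nSNR = SNR + 20*log10(sqrt(N1/N2))
    return snr_db + 20 * math.log10(math.sqrt(n1_mp / n2_mp))

# Measured SNR values quoted above, normalised to an 8Mp print:
d800 = normalised_snr(38.4, 36, 8)   # D800: 36Mp sensor
d7000 = normalised_snr(38.0, 16, 8)  # D7000: 16Mp sensor

print(round(d800, 2), round(d7000, 2))  # 44.93 41.01
```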

This increase in SNR is not because the D800 has better light gathering capability. This apparent increase in SNR is due to downsampling. It’s due to the larger image size and not because of better light gathering capability. Unfortunately, this computed SNR is what the full frame fanbois are trying to sell to uninformed crop sensor users. It is the REAL measured SNR that matters and we will learn later on how important this is compared to just more megapickles.

Go back to that normalisation formula and note the term inside the square root (N1 / N2). Note that if N1 is greater than N2 then the log10 becomes a positive number and the whole term adds to the measured SNR. The term drops to zero for N1 = N2 and that’s why when a D800 image is printed at 36Mp, the SNR is the measured SNR. Same goes for the D7000 when printed at 16Mp. That is why when I blogged about noise performance comparisons I kept repeating that images should be printed at their intended sizes. That’s the ONLY fair comparison. Downsampling is cheating. You do not want to buy a 36Mp camera so you could print it at 8×10. That is an absolute WASTE of money.

The idiots will of course justify this by saying “well the good thing with having a larger image is that you can downsample and it will outperform a smaller image“. Well not so fast, young grasshopper. That is not true. We know that a larger SENSEL generally results in better light gathering capacity (Rain Can Teach Us Photography) although this means a smaller image size. Let’s consider the D800 vs D4:


So the real SNR shows the D4 (42.3) being superior compared to the D800 (38.4). Again, when normalised to a 8Mp print, the D800 somehow “catches up”:


Unfair isn’t it? Well, only for smaller prints. Using the same formula to compute the SNR in a 16Mp print, the D4 drops to its real measured SNR of 42.3 while the D800 SNR drops to 41.92. So now the D800 is inferior to the D4! How about for a 36Mp print? The D4 drops to 38.77 and the D800 drops to its real measured SNR of 38.4. The 16Mp D4 upsized to a whopping 36Mp print BEATS the D800 at its own game!!!
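Running the same formula across several print sizes reproduces these figures (any tiny differences from the numbers above are rounding):

```python
import math

def normalised_snr(snr_db, n1_mp, n2_mp):
    # DXOMark-style print normalisation of a measured SNR.
    return snr_db + 20 * math.log10(math.sqrt(n1_mp / n2_mp))

D4_SNR, D4_MP = 42.3, 16      # measured SNR and sensor resolution
D800_SNR, D800_MP = 38.4, 36

for print_mp in (8, 16, 36):
    d4 = normalised_snr(D4_SNR, D4_MP, print_mp)
    d800 = normalised_snr(D800_SNR, D800_MP, print_mp)
    print(print_mp, round(d4, 2), round(d800, 2))
# 8Mp  -> D4 45.31, D800 44.93 (close)
# 16Mp -> D4 42.30, D800 41.92 (D4 ahead)
# 36Mp -> D4 38.78, D800 38.40 (D4 still ahead at the D800's native size)
```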

In the comparison above between two full frame cameras we see that even if the total amount of light, which is proportional to the sensor size, does not change, variations in SNR can occur if resampling is added into the equation. Clearly, total light and resampling are unrelated. Just because one sensor has better noise performance at a given print size does not imply that it has better light gathering capacity. If 8Mp was the only print size we could make, one would think that the D800 is every bit as good as the D4. This is clearly not the case at larger print sizes where the D4 outshines the D800. The same argument can be said for comparisons between sensors of different sizes. Sensor performance should not be judged based on arbitrary print sizes. Sensor performance must be taken at the sensor level. 

Think about it: every time you print smaller than 36Mp, you are WASTING your D800. Who consistently prints larger than 16Mp or even 12Mp? As you can see, the superior 16Mp sensor makes a lot more sense. The D800 is a waste of space, time, and money.

In essence, a 16Mp sensor, be it full frame or crop can beat the 36Mp D800 if it has high enough SNR. The crop sensor need not match the superior D4 sensor. A 16Mp crop sensor with the same SNR performance as the 7-year old Nikon D700 will beat the D800 at print sizes of 16Mp and higher.

Let’s summarise what we have covered so far:

0. DXOMark data needs to be analysed. Better SNR performance in normalised data does NOT imply better light gathering capacity of full frame sensors but is merely a consequence of larger image size in terms of megapixels.

1. DXOMark normalises their data because they are in the business of ranking cameras.

2. Normalisation to a small print size unintentionally favours sensors with more megapixels.

3. More megapixels do not necessarily lead to superior SNR when downsampled.

4. At larger prints (16Mp and higher), the weakness of the 36Mp D800 sensor begins to show.

5. A good quality crop sensor camera with fewer megapixels can beat a full frame camera with insane megapixels.

Do you believe me now?

Rain Can Teach Us Photography

Before I start the discussion, I would like to refer you to my previous posts because I have already covered this concept extensively:



If you have read and understood those posts above then you can save yourself time by not reading this one…although I don’t mind if you do read this because it looks at sensors from a different perspective. I will try to cover some technical aspects in the form of analogies.

I noticed that not everyone who carries a camera actually understands photography so I’m hoping that my explanations here would help them. I will use rain as an analogy. I hope everyone has experienced rain and understands rain.


Do you know how they measure the amount of rainfall? They use a device called a rain gauge. It’s a very simple device and anyone can make their own. The simplest rain gauge is that of a basic straight container that looks something like this:


All you have to do now is wait for rain over a period of time and then measure the height of the collected rainwater. Really simple. Now you might wonder why I did not specify any measurements. How big should the container be? Surely, a larger container will gather more rain! That is correct. A larger container will collect more rain but the rainwater level will remain the same. Why is that? Because a larger opening also needs to fill a larger volume. That’s why rain is measured in mm (height) and not ml (volume). Here’s the very simple math that explains this:

Volume = area of opening x height of rainwater


height of rainwater = Volume / area of opening

Notice that if you increase the opening you also increase the volume and the height remains the same. They are proportional.

What does this teach us about photography? The concept of a rain gauge is analogous to that of photographic exposure. First you have your container opening, which is your aperture. Then you wait for rain to fall over a period of time, which is your shutter speed. The proportion of the container opening to its volume is your f-stop. A rain gauge is like photography where everybody has agreed to shoot at the same f-stop. The measured rain level is your exposure and determines how bright the image will turn out. As we have mentioned before, the size of the opening is proportional to the volume and therefore the measured rain levels are the same irrespective of container size. Recall that a f-stop is the ratio of the lens focal length (container height) to the aperture diameter (container opening). That’s why a f-stop is a f-stop. A f-stop is the same no matter how big your lens is. A 35mm lens at f/5.6 will have the same exposure as a 100mm lens at f/5.6 although the latter has a much, much larger opening.
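The rain-gauge arithmetic and its f-stop analogue can be sketched in a few lines of Python (illustrative numbers of my own):

```python
import math

# Rain gauge: level = volume / opening_area, so a bigger opening
# collects more water but reads exactly the same level.
def rain_level(rainfall_mm, opening_area):
    volume = rainfall_mm * opening_area  # bigger opening, more volume...
    return volume / opening_area         # ...but an identical level

print(rain_level(10, 1), rain_level(10, 4))  # 10.0 10.0

# f-stop analogue: a 100mm lens at f/5.6 has a far larger physical
# opening than a 35mm lens at f/5.6...
def pupil_area(focal_mm, f_stop):
    d = focal_mm / f_stop
    return math.pi * (d / 2) ** 2

print(pupil_area(100, 5.6) > pupil_area(35, 5.6))  # True
# ...yet light per unit area (proportional to 1/N^2) is identical
# at the same f-stop, so both give the same exposure.
```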

It follows that if we use the same lens on different sensor sizes we get something that looks like this:


The red circle is your lens’s image circle (area of rain). The rectangles in the middle are your different sensor sizes (rain gauges). From our rain gauge analogy above, you will realize that both sensors will have the same exposure (measured rain levels). A f-stop is a f-stop irrespective of sensor size.

Before you continue with the rest of the discussion, make sure that you understood the very basic concepts covered above because I will start explaining something that is often hotly debated in forums: sensor noise!

The full frame proponents will tell you that because of sensor size advantage over APS-C and 4/3rds and other “crop” sensors that it will have lesser noise and therefore cleaner output. The basis for this conclusion is the fact that larger sensors gather more light. Let’s discuss this in detail…

If you go back to our rain gauge analogy, it's quite obvious that a larger container will gather more rain although the measured rain levels remain the same. Likewise, a larger sensor will gather more light although the exposure remains the same. And since noise is affected by the amount of light, a larger sensor has less noise. Therefore, full frame is superior.
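The size of that "bigger container" is easy to quantify. Assuming nominal sensor dimensions (36×24mm for full frame, roughly 23.6×15.6mm for Nikon-style APS-C; real sensors vary by fractions of a millimetre), the total-area difference works out to a bit over one stop:

```python
import math

# Nominal sensor dimensions in mm -- an assumption; exact sizes vary by model.
FULL_FRAME = (36.0, 24.0)
APS_C = (23.6, 15.6)

def area_mm2(dims):
    width, height = dims
    return width * height

ratio = area_mm2(FULL_FRAME) / area_mm2(APS_C)  # ~2.35x the area
stops = math.log2(ratio)                        # ~1.2 stops more total light
print(f"area ratio {ratio:.2f}x, i.e. {stops:.2f} stops")
```

So at the same exposure, a full frame sensor as a whole collects roughly 2.35 times the light of an APS-C sensor. What this means for noise is exactly what the next paragraphs dig into.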

Well, not so fast, Pedro. A camera's sensor isn't really like a rain gauge. A sensor is actually composed of smaller components called sensels, which are like smaller containers within a bigger container. It looks like this:


So now we will have to narrow our analogy down to those smaller containers (sensels) instead of the whole sensor. You can probably see where this is going: light is gathered by the sensels, and therefore noise is NOT affected by sensor size but by SENSEL size.

Again, sensor size does not affect exposure (rain level analogy). An f-stop is an f-stop (rain gauge analogy) and is not affected by lens focal length or sensor size. Therefore a smaller sensel will have the same exposure as a larger sensel. However, a larger sensel gathers more light and therefore has less noise. This is why a 12Mp full frame has better noise performance than a 12Mp APS-C sensor. This is also why the 12Mp full frame Nikon D700 has far better noise performance than the 36Mp full frame D800, by virtue of its larger sensels. And this is why the 16Mp APS-C D7000 has the SAME noise profile as the full frame 36Mp D800.
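A back-of-the-envelope sensel-pitch calculation makes the comparison concrete. The sensor dimensions and megapixel counts below are nominal figures, and the calculation ignores microlenses, inter-pixel gaps and generation-to-generation readout differences:

```python
# Approximate sensel (pixel) pitch for the cameras discussed above.
CAMERAS = {
    # name: (width_mm, height_mm, megapixels) -- nominal values
    "D700 (FF, 12Mp)": (36.0, 24.0, 12.1),
    "D800 (FF, 36Mp)": (35.9, 24.0, 36.3),
    "D7000 (APS-C, 16Mp)": (23.6, 15.6, 16.2),
}

def pitch_um(width_mm, height_mm, megapixels):
    """Approximate sensel pitch in micrometres: sqrt of area per pixel."""
    area_um2 = (width_mm * 1000) * (height_mm * 1000) / (megapixels * 1e6)
    return area_um2 ** 0.5

for name, spec in CAMERAS.items():
    print(f"{name}: ~{pitch_um(*spec):.1f} um pitch")
```

The D700 comes out at roughly 8.4µm per sensel, while the D800 and D7000 both land near 4.8µm, which is the arithmetic behind the pairings in the paragraph above.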

And thus, we arrive at the following conclusions:

1. SENSOR size has no effect on exposure.

2. SENSOR size has no effect on noise.

3. SENSEL size ultimately affects noise.

Again, it follows from the above that sensors with the same sensel size will have the same noise profiles (e.g. the Nikon D7000, Pentax K5/K5II and Nikon D800) even if the sensor sizes are different, as long as they are exposed the same way: same f-stop, same shutter speed. You will find shot comparisons between those sensors at http://dpreview.com, and they are in agreement with the conclusions above.

Now you might have read from others about something called equivalence. They say that unless different sensor sizes are exposed equivalently, they will have different noise profiles. For example, an APS-C sensor with a lens set to 35mm/f5.6 is equivalent to a full frame sensor with a lens set to 50mm/f8. Although they have different focal lengths and f-stops, they are equivalent in terms of angle of view, aperture diameter and depth of field, all because of the crop factor of approximately 1.5. While it's true that the AoV and DoF are equivalent, they will certainly have different noise profiles. Firstly, it's quite obvious that at the same shutter speed, the 35mm/f5.6 shot will be overexposed by a stop relative to the 50mm/f8 shot. So if you insist on the same shutter speed, you will have to stop the 35mm down to f8, which decreases the aperture size, and somehow this is supposed to affect noise?! We know from the rain analogy and the sensor design discussion above that this is simply UNTRUE! I don't know why the equivalence proponents keep pushing this concept when, photographically, it does not make sense. This equivalence-fu is like using rain gauges that do not adhere to the same standards, and they end up looking like this:


Notice that they have the same aperture (entrance-pupil) diameter of 6.25mm:

35mm / 5.6 = 6.25mm

50mm / 8 = 6.25mm
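You can verify this pairing for yourself; the equivalence crowd matches settings precisely so that the implied pupil diameters come out equal (the helper function here is mine, just restating the arithmetic above):

```python
# Both "equivalent" settings imply the same entrance-pupil diameter,
# and therefore the same pupil area.
def pupil_mm(focal_mm: float, f_number: float) -> float:
    return focal_mm / f_number

aps_c = pupil_mm(35, 5.6)  # ~6.25mm on the APS-C lens
ff = pupil_mm(50, 8)       # ~6.25mm on the full frame lens
print(round(aps_c, 2), round(ff, 2))
```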

But because of this equivalence brouhaha we now have skewed rain gauges. Notice that you have to gather rain over a longer period of time for the 50mm/f8 container to reach the same rain level as the 35mm/f5.6 container. Full frame proponents think that they are the standard, so the illustration above would probably look like this from their perspective:


So now the crop sensors have to expose for a shorter period of time just so they can abide by the standard set by the elite full frame shooters.

For me, this is just silly. I feel that all these comparisons between full frame and smaller sensors are nothing but silly justifications for the perceived superiority of a particular sensor. Discussing equivalence is fine as long as it's still about photography. It's ok to explain crop factor in terms of AoV or DoF, but when you start using it as a tool to push the perceived superiority of your more expensive equipment then it's really just bullshit. Bullshit, and downright wrong and misleading. Stop it.


Full Frame Is Not An Option

Not too long ago, getting a full frame camera actually mattered. It wasn't just that real photographers had a collection of lenses meant for 35mm film; full frame sensors were actually different from their crop counterparts. Different and better.

Back then, a 12Mp APS-C camera was fairly common. Canon, Nikon, Sony and Fuji each had their own version of the 12Mp DSLR. Those with deeper pockets could go full frame. Although the resolution remained the same at 12Mp, the full frame sensors guaranteed better image quality, especially in low light. The “affordable” and ancient Canon 5D and Nikon D700 can still hold their own even against the latest full frame cameras of today.

It is quite frustrating that sensor development hasn't really improved that much since then. Some full frame sensors are really just bigger crop sensors. Take the Nikon D800's 36Mp full frame sensor, for example, which is essentially an enlarged version of the D7000's 16Mp APS-C sensor, so we expect their per-pixel performance to be the same (https://dtmateojr.wordpress.com/2014/03/08/debunking-the-myth-of-full-frame-superiority/):



(Image taken from dpreview.com)

Yes, there are other sensors, such as the 24Mp of the D600 and the 16Mp of the D4, but when comparing their performance against the 7-year-old 12Mp D700 sensor I would expect the huge technology gap to deliver at least something visibly improved. Have a look at the comparison below. Aside from the image size differences, the newer sensors have nothing on the D700. In fact, the D800 is visibly inferior:




(Image taken from dpreview.com)

So in terms of performance, my opinion is that full frame is NOT really any better than APS-C (D800 vs D7000) and newer is NOT necessarily better (D600 or D4 vs D700).

What then has full frame got to offer compared to smaller sensors?

In some cases, full frame is really just a waste of space (https://dtmateojr.wordpress.com/2013/12/11/resolution-and-sharpness/), so unless you make really huge prints, if you print at all, then there is no point in going for more megapixels.

Another issue you have to deal with on full frame cameras is depth of field. Shallow DoF, in the real world, is a problem, NOT a feature. You will be forced to stop down to gain enough DoF, and then you start dealing with diffraction issues.
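A rough calculation shows the DoF penalty. This uses the common thin-lens approximation DoF ≈ 2·N·c·u²/f², valid well inside the hyperfocal distance; the circle-of-confusion values (0.030mm for full frame, 0.020mm for APS-C) and the 50mm-vs-35mm framing match are conventional assumptions, not exact figures:

```python
# Approximate depth of field for the same framing on two sensor sizes.
def dof_mm(f_number: float, coc_mm: float, subject_mm: float, focal_mm: float) -> float:
    """Thin-lens DoF approximation: 2*N*c*u^2 / f^2 (near-field only)."""
    return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

subject = 3000  # subject 3m away
ff = dof_mm(4, 0.030, subject, 50)     # 50mm f/4 on full frame
aps_c = dof_mm(4, 0.020, subject, 35)  # ~same framing on APS-C
print(f"full frame: {ff:.0f}mm, APS-C: {aps_c:.0f}mm of DoF")
```

At the same f-number and framing, the APS-C shot comes out with noticeably deeper focus (roughly 1.2m versus 0.86m here), which is exactly why full frame shooters end up stopping down.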

Full frame cameras require bigger, more expensive lenses and bigger tripods. Bigger and heavier means less usage. Unless you have pre-planned trips, you will tend to leave your full frame in the closet. Everyone now has a camera. More people are bringing tiny m43 cameras that produce full-frame-quality images, so you can't afford to leave yours behind if you wish to stay competitive. Of course, there is now a trend towards smaller cameras with full frame sensors. The problem is that they require a new set of lenses, unless you go with wonky adapters, which essentially negate the whole point of having a smaller camera.

On a slightly different topic, I find it funny that Pentaxians are practically switching to a different brand just to satisfy their full frame cravings. Even if it means selling all their Pentax gear to fund an entirely new set of equipment. Even if it means losing a lot of money. Well, if you really like something, then buy it. There is no need for excuses. Thinking that switching to full frame will help you make better photos is a thing of the past. It doesn't apply anymore. Honestly, unless you already have a stash of full frame lenses, going full frame is no longer a good option.