
Understanding the Effects of Diffraction (Part 2)

This article is a continuation of my previous post on understanding the effects of diffraction. That article sparked a long-winded discussion because some people decided to go deeper into diffraction without fully understanding some fundamental concepts. Add some bogus resolution graphs to that and the discussion went from bad to shite.

In the interest of further learning, let’s go back to the very basic principles behind lenses and light.

LENS ABERRATION

The main purpose of a photographic lens is to focus light onto the camera’s sensor. Ideally, an incoming point light source is projected onto the sensor as a point. The reality is not quite that simple. Light rays near the center of the lens pass straight through the glass without any problems. However, light rays that do not pass through the center have to bend so that they meet the other rays at the same focal point. The farther a ray is from the center, the more sharply it has to bend. The problem is that lenses are not perfect. These imperfections, or aberrations, result in imprecise bending of light. Rays near the edges of the glass don’t quite hit the focal point: some of them converge just before the sensor and some just after it. The point light source is then projected onto the sensor not as a point but as something much larger. Refer to the simple illustration below: the red ray hits the focal point, the blue ray almost hits it, but the green ray, which is very near the edge, misses it entirely.

[Illustration: ray diagram of an imperfect lens; the red, blue and green rays do not all converge at the focal point]

There are ways to work around lens aberrations. The most common method is to close down the pupil to eliminate light rays that are near the edges of the lens. In photography, this is what happens when you close down or “stop down” the aperture. In the illustration below, the narrow pupil has eliminated the out-of-focus green ray, leaving only the red and blue rays that are better focused.

[Illustration: ray diagram with a narrower pupil blocking the stray edge ray]

The result is a smaller projected point that is truer to the original point source. The overall image projected onto the sensor will look sharper. The lens’s performance has therefore improved because closing down the pupil uses only the center of the glass. The downside is that since the pupil has eliminated some of the light rays, the resulting image will also look darker. The bottom line is that you trade brightness for sharpness.

DIFFRACTION

As discussed above, closing down the pupil improves the performance of the lens. In theory, you could make the pupil as narrow as you want and the lens performance would keep improving.

There is a problem though that is not quite the fault of the lens itself. This problem is attributed to a property of light: light changes direction when it hits edges or passes through holes. This change of direction is called diffraction. Diffraction is ever present as long as something is blocking light. So although a narrower pupil improves lens performance, light spreads out of control when it passes through a narrow opening. The narrower the pupil, the more the light changes direction uncontrollably. It’s like squeezing a hose with running water: the tighter you squeeze, the wider the water spreads. In the end, light rays will still miss the focal point and we are back to the same dilemma where our point light source is projected at a much bigger size on the sensor.

DIFFRACTION-LIMITED LENS

We are now ready to understand what a diffraction-limited lens means.

Recall that depending on the size of the pupil, light rays that are farther away from the center of the lens will miss the focal point, causing a point light source to be projected much larger on the sensor. Let’s say that, because of these aberrations, the point source is projected with a larger diameter, X, on the sensor.

Now pretend for a moment that the lens has no problems at all and is perfect, with no aberrations whatsoever. Recall that at the same pupil size, light diffracts (spreads) in a way that causes some of the rays to miss the focal point, again resulting in a larger projected point, this time of diameter Y.

So now we have two different sizes of the projected point: size X caused by lens aberrations and size Y caused by diffraction (assuming that the lens was perfect).

If X is smaller than Y then the lens is said to be diffraction-limited at that pupil size or aperture. This means that the main contributor to image softness is diffraction instead of lens imperfections. The optimum performance of the lens is the widest aperture in which X remains smaller than Y. Simple.

If X is larger than Y, the problem becomes a bit more complicated. It means that lens imperfections dominate over diffraction, so you can choose to make the aperture narrower to improve lens performance. Stopping down will of course decrease X but will increase Y. It becomes a delicate balancing act between lens imperfection and diffraction. This is a common problem with cheap kit lenses. At larger apertures, kit lenses have aberrations so bad that the image they produce looks soft. So you stop down to f/8 or f/11, and by then diffraction kicks in, causing the image to soften again. It’s a lose-lose situation. That is why premium lenses are expensive: they are sharp wide open, where diffraction is negligible.

A lens that is diffraction-limited at f/5.6 is considered very good. A lens that is diffraction-limited at f/4 is rare. A lens that is diffraction-limited at f/2.8 is probably impossible.
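
To make the X-versus-Y comparison concrete, here is a minimal sketch in Python. The diffraction blur uses the standard Airy disk diameter, roughly 2.44 x wavelength x f-number for green light; the aberration blur figures are invented purely for illustration, since the real values depend on the particular lens.

# Compare aberration blur (X) against diffraction blur (Y) at each aperture.
# Diffraction: Airy disk diameter ~ 2.44 * wavelength * f-number.
# The aberration values below are made-up illustrative numbers, not measurements.
WAVELENGTH_UM = 0.55  # green light, in micrometres

# Hypothetical aberration blur diameter (micrometres), shrinking as we stop down.
aberration_blur = {2.8: 25.0, 4: 15.0, 5.6: 9.0, 8: 5.0, 11: 3.0, 16: 2.0}

for f_number, x in sorted(aberration_blur.items()):
    y = 2.44 * WAVELENGTH_UM * f_number  # diffraction blur diameter
    verdict = "diffraction-limited" if x < y else "aberration-limited"
    print(f"f/{f_number}: X = {x:.1f} um, Y = {y:.1f} um -> {verdict}")

With these made-up numbers the lens would be aberration-limited up to f/5.6 and diffraction-limited from f/8 onward, so its optimum aperture would sit around there.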

Let’s summarise the discussion:

1. Lenses are not perfect. Aberrations will cause the light rays to miss the focal point thus resulting in loss of sharpness.
2. Lens performance improves as you stop down the aperture.
3. Diffraction is a property of light that forces it to change direction when passing through holes. This causes light rays to miss the focal point thus resulting in loss of sharpness.
4. Diffraction is always present and worsens as you stop down the aperture.
5. A lens is diffraction-limited at a given aperture if the effects of aberrations are less pronounced compared to the effects of diffraction at that aperture.

That’s it for now. In the next article, we will discuss the effects of lens aberrations and diffraction on sensors.


Understanding the Effects of Diffraction

This post is a continuation of the previous article that I wrote about resolution and diffraction. I highly suggest that you read that one first so that you will gain a basic understanding of these concepts.

One thing that a lot of people still fail to understand is the absolute effect of diffraction on image resolution. A common argument for buying a higher megapixel camera is that it will “always” resolve more detail than a lower megapixel camera. That is true, but only until you hit the diffraction limit. For example, a full frame camera shot at f/16 will not resolve detail beyond roughly 8Mp. That is, a 36Mp D800 will not give more detail than a 12Mp D700 when both are shot at f/16; both will have an effective resolution of only about 8Mp.

To explain this, let us consider a very simple analogy. When you are driving at night in complete darkness, it is very difficult to tell whether an oncoming vehicle is a small car or a big truck if you judge only by its headlights. This is because the apparent separation between the left and right headlights depends on the distance of the vehicle: the farther away it is, the closer together the headlights appear. If the vehicle is far enough, both headlights seem to merge as if there were just one light, and you would think it’s a bike instead of a car. The reason is simple: light spreads. The left and right headlights spread until they seem to merge, and by then they are indistinguishable from each other. Diffraction is the same: it spreads light and you lose the details. It doesn’t matter if you have two eyes or eight eyes like a spider, you still won’t be able to distinguish two separate headlights if the oncoming vehicle is very far away. In this case, eight eyes are no better than two; both sets of eyes see only one headlight, not two. Think of the “number of eyes” as your sensor resolution. It does not matter if you have 8Mp or 2Mp, both cameras will detect only one headlight. Did the 8Mp lose resolution? No, it remains an 8Mp sensor. Did it manage to detect two headlights? No. Therefore, in our example, an 8Mp sensor is no better than a 2Mp one at resolving the number of headlights.

The point is that diffraction destroys details. When there is nothing to resolve, sensor resolution does not matter. Supposing that you have two lines that are very close together, diffraction will spread both lines such that they will appear to merge as if they are just one big line. If you only have one line to resolve it does not matter if you have a 2Mp camera or a 100Mp camera, both will detect only one line. The 100Mp camera will of course have more samples of that single line but it is still just one line. Diffraction does not affect sensor resolving power but it affects how the subject is presented to the sensor. Diffraction blurs the subject in such a way that it limits what the sensor can fully detect.

With that in mind, let us look at practical examples. For a full frame sensor, diffraction at f/8 is enough to blur the subject such that anything higher than approximately 30Mp will not resolve any more details. For each stop, the effective resolution drops by half so at f/11 the limit is 15Mp and at f/16 it’s 8Mp and at f/22 a measly 4Mp. These numbers are just approximations and assume that you have a perfect lens. The reality is much lower than those values.

How about smaller sensors like APS-C or m43? The effective resolution scales down roughly with the sensor area, that is, with the square of the crop factor. So an APS-C sensor shot at f/8 will have a maximum effective resolution of only around 15Mp, while m43 will have about 8Mp, and so on.
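
As a rough sanity check, here is a small Python sketch applying the rule of thumb above: roughly 30Mp for full frame at f/8, halving with each stop, and scaling down with sensor area for smaller formats. These are coarse approximations for a perfect lens, not measurements.

# Approximate diffraction-limited resolution (megapixels) using the rule of thumb
# in the text: ~30Mp for full frame at f/8, halved for each further stop,
# and scaled by sensor area (1 / crop_factor**2) for smaller sensors.
FULL_FRAME_MP_AT_F8 = 30.0
STOPS_FROM_F8 = {8: 0, 11: 1, 16: 2, 22: 3}
CROP_FACTORS = {"full frame": 1.0, "APS-C": 1.5, "m43": 2.0}

for sensor, crop in CROP_FACTORS.items():
    for f_number, stops in STOPS_FROM_F8.items():
        mp = FULL_FRAME_MP_AT_F8 / (2 ** stops) / (crop ** 2)
        print(f"{sensor} at f/{f_number}: ~{mp:.0f}Mp effective")

The output lines up with the figures quoted here to within rounding; the exact cut-offs depend on the wavelength and on the resolution criterion you choose.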

Here are MTF graphs for a Nikon 50/1.4 lens comparing a 16Mp D7000 (crop sensor) with a 36Mp D800 (full frame) at f/5.6 and f/16 respectively. Notice that the resolution at those settings is very similar.


So what are the implications? If you are a landscape photographer with a 36Mp Nikon D800 and you shoot at f/8, f/11 or maybe f/16 to gain enough depth of field, you are basically wasting disk space. At f/8, your 36Mp sensor is no better than a 30Mp sensor. At f/11 it’s no better than a 16Mp D4. At f/16 it is no better than a very old 12Mp D700. A 36Mp sensor shot at small f-stops cannot capture any more detail, yet the image dimensions remain the same and the files still take up the disk space of a 36Mp image. If you shoot at f/16, for example, you are better off shooting with a 12Mp D700. If you want to print as big as a 36Mp camera would allow, then upsize your 12Mp image in Photoshop to the equivalent of a 36Mp image. Of course the upsized image will not gain any detail, but it doesn’t matter because the 36Mp sensor hasn’t resolved any more detail anyway.

A related analogy is scanning photos. Good prints are usually done at 300dpi, so when scanning a print it does not make sense to scan much higher than that because you won’t gain anything. Scanners are capable of 4800dpi or even 7200dpi and maybe higher. If you scan a print at 7200dpi you will get a really huge image but with no more detail than if you had scanned it at 4800dpi or lower. You could have scanned it at 600dpi and you wouldn’t notice any difference. The 7200dpi scan is a waste of time and disk space.

Another common argument is that a sensor with lots of megapixels allows more cropping possibilities. Again, that is true only if you are not diffraction limited. Otherwise you could just shoot with a lower Mp camera, upsize the image and then crop and it will make no difference in terms of details.

This is why I have absolutely no interest in the D800, or in insanely high-Mp APS-C cameras like the D7100, K-3 and A6000. I shoot mostly landscape. I stop down to f/11 and sometimes even to f/22. At those f-stops these cameras are just a waste of space, time and processing power. Again, a 36Mp full frame camera does not make sense unless you shoot mostly at f/5.6 and wider. A 24Mp APS-C is stupid unless you mostly shoot at f/5.6 and wider. Manufacturers keep increasing sensor resolution instead of improving noise performance because most photographers are gullible. Megapixels sell.

Having said that, do not be afraid to shoot at smaller f-stops if the shot calls for it. Even 4Mp effective resolution is a lot if you print at reasonable sizes. And since most people never print at all, 4Mp for web viewing is GIGANTIC!

For a more comprehensive explanation of the effects of diffraction refer to this article: http://www.luminous-landscape.com/tutorials/resolution.shtml

Shoot and shop wisely. 🙂

The First 100


Welcome to my 100th post!

To be honest, I didn’t expect to get this far with blogging. I have attempted to start writing on several other sites before but I never really got motivated to move forward with them. I am a computer geek by profession and I spend most of my day in front of a computer managing Linux servers scattered all over the world. In my spare time I customise Linux distributions for my own workstation needs. I’m not really sure what kept me writing this time around. It’s probably because I decided to cover photography instead of the usual computer-related topics.

I’m relatively new to photography. I formally started back in April of 2009. I still remember my first photoshoot session. A friend and I started driving at 4AM to get to our destination before sunrise. It was then that I learned that to capture a good shot you have to make some sacrifices, like sleep for example. I remember shooting every weekend for several months from 5-9AM and going home with a thousand frames with no keepers. I didn’t really understand photography back then. I mean, I still don’t understand most of it now but back then I was practically clueless. The most important thing that happened was getting hooked on this hobby and I have been shooting ever since.

At this point I would like to thank the beginners in photography for asking those (sometimes silly) questions in forums. They were my sources of ideas for articles. A special thanks as well to those who keep on spreading nonsense — you inspire me to write some more.

Some of you might notice that a few of my articles are quite controversial. The most popular ones were those that attempted to debunk the myth of full frame superiority namely:

1. https://dtmateojr.wordpress.com/2014/05/19/megapixel-hallucinations/
2. https://dtmateojr.wordpress.com/2014/03/08/debunking-the-myth-of-full-frame-superiority/
3. https://dtmateojr.wordpress.com/2014/06/10/debunking-the-myth-of-full-frame-superiority-part-2/

I feel that it is my duty to educate those who are new to photography. The biggest problem at the moment is that photography has become a contest of who has got the largest camera and fastest lens. Beginners feel inadequate just because they don’t own a full frame or a prime lens. It’s not just gear: beginners are also made to feel incapable just because they shoot in auto mode or in JPEG. Armchair photographers have set up artificial walls that prevent beginners from enjoying and moving forward with photography: Your small camera isn’t good enough; Learn to shoot in manual mode; You will not get far with only a kit lens. No wonder so few of them continue with the hobby. This kind of bullshit has to stop.

I have chosen NOT to write about topics that everyone will just agree with. If everyone will just agree with me then what’s the point in writing? You might as well go to any forum and drink the kool aid. Instead, I write about the advantages of smaller cameras, your cheap kit lens, why you might want to shoot in JPEG or why you should learn to shoot instead of dealing with a lot of nonsense.

There are times when I feel like writing something highly technical but in a simplified way. My background is in Physics and I understand that not everyone is comfortable with numbers. The topics I covered were not the usual stuff that everyone knows; instead, I discussed the most commonly misunderstood concepts that most people think they already know by heart, such as resolution and exposure.

I would also like to apologise to those who felt uncomfortable with the tone of my articles. Rest assured that they were not aimed at you unless you were one of the idiots in forums who called me stupid for using physics and math to prove that you are a moron for believing and spreading that bullshit. You know who you are and it feels good to be vindicated. Thanks for the free publicity.

If you got this far, thanks for reading. I can’t wait to write some more. I actually have a list of topics in the queue already. I’ll talk about the camera that I recently won in the Olympus Asia-Oceania Grand Prix photo contest in my next post so stay tuned.

🙂

Megapixel Hallucinations

If you are here to understand (why) equivalence (is wrong) then read this: https://dtmateojr.wordpress.com/2014/09/28/debunking-equivalence/

This post is practically a continuation of one of my controversial posts on debunking the myth of full frame superiority. In that previous post I discussed why full frame is actually no better than its crop sensor counterpart (Nikon D7000 vs D800) in terms of light gathering capability. Now I will discuss another aspect of the full frame argument and explain why it leads people to believe that full frame is superior to smaller sensor cameras when in fact it is not.

A common source of sensor performance data is DXOMark. This is where cameras are ranked in terms of SNR, DR, colour depth, and other relevant aspects of the equipment. It is important to note that data from this website should be properly interpreted instead of just being swallowed whole. This is what I will try to cover in this post.

One of the most hotly debated pieces of information from DXOMark is low light performance, which is measured in terms of Signal to Noise Ratio (SNR). SNR is greatly affected by the light gathering capacity of a camera’s sensor, which is why it is commonly used to compare the low light performance of full frame and crop sensors. It is also the data most misinterpreted by full frame owners, who use it to justify spending three times as much for practically the same camera. Let’s see why this is wrong…

Consider the following SNR measurements between the Nikon D7000 and D800:

[Graph: DXOMark normalised SNR, Nikon D7000 vs D800]

Isn’t it quite clear that the Nikon D800 is superior to the D7000? Did I just make a fool of myself with that “myth debunking” post? Fortunately, I did not 🙂 I’m still right. That graph above is a normalised graph. DXOMark is in the business of ranking cameras and that is why they are forced to normalise their data. Let’s have a look at the non-normalised graph to see the actual SNR measurements:

[Graph: DXOMark measured SNR, Nikon D7000 vs D800]

Didn’t I say I was right? 🙂

The Nikon D7000 and D800 have the same low light performance! That is because they have the same type of sensor. The D800 is basically just the D7000 enlarged to full frame proportions. Simple physics does not lie. A lot of “photographers” have called me a fool for that “myth debunking” post. Well, I’m not in the business of educating those who are very narrow-minded so I will let them keep believing what they believe is true. But some of us know better, right? 🙂

Let’s not stop here. Allow me to explain why the normalised graphs are like that.

Let me tell you right now that DXOMark is unintentionally favouring more megapixels. That’s just the inevitable consequence of normalisation. Unfortunately, those who do not understand normalisation use this flaw to spread nonsense. The normalised graphs are not the real measured SNR values but computed values based on an 8Mp print size of approximately 8×10. The formula is as follows:

nSNR = SNR + 20 x log10(sqrt(N1/N2))

where nSNR is the normalised SNR, N1 is the original image size and N2 is the chosen print size for normalisation. In the case of the Nikon D800, N1 = 36Mp, and for the D7000, N1 = 16Mp. Both are normalised to a print size of N2 = 8Mp. Based on that formula, the D800 gets an SNR boost to 44.93, up from a measured SNR of 38.4. The D7000, though, only improves a little to 41, up from 38. As you can see, although both cameras started out roughly equal, the normalised values now favour the D800.
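
You can check these numbers yourself. Here is a quick Python sketch of the normalisation formula using the measured SNR values quoted above:

import math

def normalised_snr(measured_snr_db, native_mp, print_mp):
    # DXOMark-style normalisation: nSNR = SNR + 20*log10(sqrt(N1/N2))
    return measured_snr_db + 20 * math.log10(math.sqrt(native_mp / print_mp))

# Measured (screen) SNR values quoted in the text, normalised to an 8Mp print.
print(round(normalised_snr(38.4, 36, 8), 2))  # D800: 44.93
print(round(normalised_snr(38.0, 16, 8), 2))  # D7000: 41.01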

This increase in SNR is not because the D800 has better light gathering capability. The apparent increase is due to downsampling: it comes from the larger image size, not from better light gathering. Unfortunately, this computed SNR is what the full frame fanbois are trying to sell to uninformed crop sensor users. It is the REAL measured SNR that matters, and we will see later on how important this is compared to just having more megapickles.

Go back to that normalisation formula and note the term inside the square root (N1/N2). If N1 is greater than N2, the log10 term becomes a positive number and adds to the measured SNR. The term drops to zero when N1 = N2, which is why when a D800 image is printed at 36Mp, the normalised SNR equals the measured SNR. The same goes for the D7000 when printed at 16Mp. That is why, when I blogged about noise performance comparisons, I kept repeating that images should be printed at their intended sizes. That is the ONLY fair comparison. Downsampling is cheating. You do not buy a 36Mp camera so you can print at 8×10. That is an absolute WASTE of money.

The idiots will of course justify it by saying “well the good thing with having a larger image is that you can downsample and it will outperform a smaller image”. Well, not so fast, young grasshopper. That is not true. We know that a larger SENSEL (pixel) size generally results in better light gathering capacity (Rain Can Teach Us Photography), although it also means a smaller image size. Let’s consider the D800 vs the D4:

[Graph: DXOMark measured SNR, Nikon D800 vs D4]

So the real SNR shows the D4 (42.3) being superior to the D800 (38.4). Again, when normalised to an 8Mp print, the D800 somehow “catches up”:

[Graph: DXOMark normalised SNR (8Mp print), Nikon D800 vs D4]

Unfair, isn’t it? Well, only for smaller prints. Using the same formula to compute the SNR for a 16Mp print, the D4 drops back to its real measured SNR of 42.3 while the D800’s SNR drops to 41.92. So at that size the D800 is inferior to the D4! How about a 36Mp print? The D4 drops to 38.77 and the D800 drops to its real measured SNR of 38.4. The 16Mp D4 upsized to a whopping 36Mp print BEATS the D800 at its own game!!!
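
The same sketch extends to arbitrary print sizes. Using the measured SNR values quoted above, this reproduces the D4-versus-D800 comparison at 8Mp, 16Mp and 36Mp prints:

import math

def normalised_snr(measured_snr_db, native_mp, print_mp):
    # DXOMark-style normalisation: nSNR = SNR + 20*log10(sqrt(N1/N2))
    return measured_snr_db + 20 * math.log10(math.sqrt(native_mp / print_mp))

cameras = {"D4 (16Mp)": (42.3, 16), "D800 (36Mp)": (38.4, 36)}

for print_mp in (8, 16, 36):
    for name, (snr, native_mp) in cameras.items():
        print(f"{name} on a {print_mp}Mp print: {normalised_snr(snr, native_mp, print_mp):.2f} dB")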

In the comparison above between two full frame cameras we see that even if the total amount of light, which is proportional to the sensor size, does not change, variations in SNR can occur if resampling is added into the equation. Clearly, total light and resampling are unrelated. Just because one sensor has better noise performance at a given print size does not imply that it has better light gathering capacity. If 8Mp was the only print size we could make, one would think that the D800 is every bit as good as the D4. This is clearly not the case at larger print sizes where the D4 outshines the D800. The same argument can be said for comparisons between sensors of different sizes. Sensor performance should not be judged based on arbitrary print sizes. Sensor performance must be taken at the sensor level. 

Think about it: every time you print smaller than 36Mp, you are WASTING your D800. Who consistently prints larger than 16Mp or even 12Mp? As you can see, the superior 16Mp sensor makes a lot more sense. The D800 is a waste of space, time, and money.

In essence, a 16Mp sensor, be it full frame or crop can beat the 36Mp D800 if it has high enough SNR. The crop sensor need not match the superior D4 sensor. A 16Mp crop sensor with the same SNR performance as the 7-year old Nikon D700 will beat the D800 at print sizes of 16Mp and higher.

Let’s summarise what we have covered so far:

0. DXOMark data needs to be interpreted carefully. Better SNR performance in normalised data does NOT imply better light gathering capacity of full frame sensors; it is merely a consequence of a larger image size in terms of megapixels.

1. DXOMark normalises their data because they are in the business of ranking cameras.

2. Normalisation to a small print size unintentionally favours sensors with more megapixels.

3. More megapixels do not necessarily lead to superior SNR when downsampled.

4. At larger prints (16Mp and higher), the weakness of the 36Mp D800 sensor begins to show.

5. A good quality crop sensor camera with fewer megapixels can beat a full frame camera with insane megapixels.

Do you believe me now?

Resolution and Sharpness

Once in a while I choose to write something technical about photography. In this post, I will discuss the science behind resolution and how it relates to sharpness and details. I will also try to explain the very common misconceptions. My background is in Physics where I concentrated on optics and digital signal processing. I won’t try to go deep with the maths but instead, explain it in such a way that even those that fear numbers will understand. 

First things first. When photographers say resolution, they usually mean the number of megapixels in a camera. For example, a 36Mp camera has a higher resolution compared to a 24Mp camera. This is a very simplistic way of looking at things. The assumption is that a 36Mp camera is capable of capturing more details given the same subject. Let’s see why this is not always true:

Suppose you are photographing very fine parallel lines with a gap of 1mm between them. If the pixel size of your sensor is 2mm wide (yeah, that’s really HUGE), it won’t be able to tell two adjacent lines apart because it is too large: it will “step” over two lines at a time and detect them as if they were just one big line. The number of lines your sensor detects will be less than the total number of lines in the actual subject; in fact, it will detect less than half of them. If your pixel is 1mm wide, it will definitely detect more lines but still not all of them (a pixel that falls between the gaps will not detect any line at all). This process of capturing the subject with sensor pixels of a given width is called sampling. According to physics, in order to detect all the lines we must sample at least twice as finely as the smallest detail we need to detect in our subject, meaning our sensor pixel width must be 1/2mm (0.5mm) or smaller. The capacity to detect individual lines is called resolving power. When we undersample (bigger sensor pixels), we lose details, which results in a blurrier image. We call this effect aliasing.
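
Here is a tiny Python sketch of that idea: a strip of alternating 1mm lines and 1mm gaps, sampled by averaging over pixels of different widths and starting offsets. The box-averaging is a simplification of how a real sensel integrates light.

# A strip of alternating 1mm lines (value 1) and 1mm gaps (value 0),
# drawn on a fine grid of 0.05mm steps, then sampled with box-shaped pixels.
STEPS_PER_MM = 20
scene = [1 if (i // STEPS_PER_MM) % 2 == 0 else 0 for i in range(40 * STEPS_PER_MM)]

def contrast(pixel_mm, offset_steps):
    # Average the scene over pixels of the given width, starting at an offset,
    # and return max - min of the samples (0 means lines and gaps have merged).
    n = int(pixel_mm * STEPS_PER_MM)
    samples = [sum(scene[i:i + n]) / n for i in range(offset_steps, len(scene) - n, n)]
    return max(samples) - min(samples)

for pixel_mm in (2.0, 1.0, 0.5):
    n = int(pixel_mm * STEPS_PER_MM)
    results = [contrast(pixel_mm, off) for off in range(n)]
    print(f"{pixel_mm}mm pixels: contrast ranges from {min(results):.2f} to {max(results):.2f}")

The 2mm pixels never separate line from gap, the 1mm pixels only do so when they happen to line up with the pattern, and the 0.5mm pixels always do, which is the sample-twice-as-finely rule in action.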

From the example above, it is quite obvious that given a fixed sensor area, we need smaller, more tightly packed pixels to get a better sampling rate. To cram 36 million pixels into a full frame sensor, the pixels have to be smaller than when cramming (only) 24 million pixels into the same full frame area. The 36Mp sensor, by virtue of its smaller pixels, samples the subject more finely and thus captures more detail. We therefore say that the 36Mp sensor has more resolving power than the 24Mp sensor. The 36Mp sensor has a higher pixel density (number of pixels per unit area) than the 24Mp sensor, and higher pixel density means more resolving power.

Now here’s why this isn’t always true in photography. In the case of the 36Mp and 24Mp sensors, we are comparing two full frame sensors. Not all sensors are full frame though. APS-C and m4/3 sensors are smaller. A 16Mp APS-C sensor (as in the Nikon D7000 or Pentax K5) contains pixels that are practically the same size (width) as those of a 36Mp full frame sensor (Nikon D800). They have the same pixel density. Therefore the 16Mp sensor has the same capacity to resolve details as its 36Mp big brother! They have the same resolving power! It also follows that the 16Mp APS-C has better resolving power than the 24Mp full frame sensor!

A bit of explanation is required when comparing the resolving power of sensors with different sizes. As we have mentioned, a 16Mp APS-C has more resolving power than a 24Mp full frame. The assumption is that everything else is constant. That is, they are using the same lens (i.e. same focal length). The same focal length results in the same optical magnification (another factor that affects resolution) although this would result in a narrower angle of view for smaller sensors. If you keep the angle of view constant, a 24Mp full frame will of course resolve more detail compared to a 16Mp APS-C.

Now before you decide to replace your 24Mp Canon 5D3 with an 18Mp Canon 7D, read further because the story does not end here. Sensor resolution is not everything.

Let’s properly define resolution first:

1. Resolution or resolving power is the capacity to detect details.

2. Image size is NOT necessarily equivalent to resolution. We have seen from the example above that a 16Mp APS-C sensor has better resolving power than a 24Mp full frame sensor even if the latter has a bigger image size.

Make sure you understand the concepts above. Photographers mistakenly equate megapixels with resolution. They are not the same, and as our 16Mp vs 24Mp example shows, that assumption can be downright wrong! What a higher megapixel count always guarantees is a bigger image, in terms of pixel dimensions and disk space, nothing more.

Let’s continue. Why should you not replace your 24Mp full frame 5D3 with an 18Mp 7D? Because pixel density is not everything. There is another factor involved in resolution and that’s your lens. I am not talking about glass quality per se; I am talking about lens aperture.

Assume for a moment that all lenses are created equal. That they are perfectly “sharp”. Here’s how aperture comes into play. You should know by now that an aperture of f8 has a smaller opening than f4. Here’s the problem: light “bends”. It bends when it passes through small openings. We call this diffraction. The tighter the opening, the more light diffracts. It is like a hose with running water. If you squeeze the hose, the water will spread (bend). Squeeze it tighter and the water spreads even more.

How does diffraction affect resolution? Well, it “spreads” the details. In our line example above, instead of the lines being 1mm apart, because of diffraction the lines become thicker and thus the gaps become narrower. It will come to a point where individual lines appear to merge and two lines become one. As diffraction worsens, four lines become one and so on.

In photography, as your aperture becomes narrower, diffraction worsens. For a full frame sensor, for example, diffraction at f11 is already bad enough that the maximum resolving power is equivalent to about 16Mp. It means that even if your sensor is 36Mp, the lines have already spread so much that the sensor is no longer capable of detecting every single one of them. Your 36Mp sensor has effectively dropped to 16Mp of resolution. If the aperture drops further to f22, diffraction becomes so bad that most of the lines will have merged and your 36Mp full frame sensor drops to the equivalent of a 4Mp sensor!!! To fully utilise a 36Mp full frame sensor, you have to shoot at f5.6 or wider. At f5.6, a perfect lens is capable of fully utilising a 60Mp full frame sensor; at f8, around 30Mp. We therefore say that the Nikon D800’s 36Mp sensor is diffraction limited at f8. Compare this with the Nikon D700, which is diffraction limited at f11 because it only has a 12Mp sensor.
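
One common way to put numbers on this is to compare the Airy disk diameter (about 2.44 x wavelength x f-number for green light) with the sensor’s pixel pitch, and ask at which f-number the disk grows to roughly two pixels wide. This is only one of several possible criteria, and the pixel pitches below are approximate, but it lands close to the f8 and f11 figures quoted above:

# Estimate the f-number at which the Airy disk spans about 2 pixels,
# one rough criterion for "diffraction limited". Pixel pitches are approximate.
WAVELENGTH_UM = 0.55  # green light, in micrometres

pixel_pitch_um = {"Nikon D800 (36Mp FF)": 4.9, "Nikon D700 (12Mp FF)": 8.5}

for camera, pitch in pixel_pitch_um.items():
    # Airy diameter = 2.44 * wavelength * N  ->  solve for N at 2 pixel widths
    limiting_f_number = 2 * pitch / (2.44 * WAVELENGTH_UM)
    print(f"{camera}: diffraction limited near f/{limiting_f_number:.1f}")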

So what now? Well, it means that if you are shooting landscapes with a D800, which typically means apertures of f11-f22 for maximum depth of field, you are largely wasting disk space. Your image size remains the same but you are not really resolving any more detail. Again, remember that image size is a direct result of the number of pixels; it does not change. Resolution, however, is affected by aperture. If you are shooting at f22 with a D800, you are no better off than shooting with an older D700 at the same aperture. Both will have an equivalent resolving power of about 4Mp. In that case, you can use Photoshop to resize the D700’s 12Mp image to 36Mp without losing or gaining anything compared with the D800’s image. From f11 onward, the D800 has little to no advantage over the D700 in terms of resolution, and it has the added disadvantage of wasting disk space.

Ok, let’s go back to why you should not replace your 5D3 with a 7D (yet). Let’s use our water hose analogy again. Think of the APS-C sensor as a bucket and think of the full frame sensor as a much bigger bucket. Point the hose with running water at them. Now squeeze the hose. Notice that the smaller bucket will soon not be able to catch all of the spreading water. Some of the water coming from the hose will now miss the smaller bucket. The bigger bucket will still be able to catch all of the water at the same “squeeze strength”. We can say that the smaller bucket has lost some water (resolution). This is also true with photography. This is why landscape photographers who shoot with large format film cameras can shoot at f64 with very sharp images. You can’t do this with your measly full frame. This is also why pinhole photography with 135 film sucks. So how about that APS-C sensor? At f22, an APS-C sensor is only capable of 2Mp maximum resolution vs 4Mp with a full frame sensor. Does this mean that APS-C is inferior to full frame? Not necessarily. If you shoot with APS-C you only need to shoot at f16 to get the same depth of field as a full frame sensor at f22. They just balance out really. 

Conclusion:

Aperture affects resolution in ways that most photographers are not even aware of. In the case of the Nikon D800, the 36Mp sensor is just a waste of disk space starting at f11. Whether this really affects the final output depends on how close you want to pixel peep or how big you print. Fact is, at tighter apertures a D700 at 12Mp, with a bit of Photoshop magic, is as capable as a D800. It may even surpass the D800 by virtue of its better light gathering properties due to its larger pixels.  

I hope this post has made you think. Until next time.

Update:

Thanks to marceldezor for the link (http://www.talkemount.com/showthread.php?t=387).
The shots in there are very good examples of the effects of diffraction. Notice that the 4Mp at f5.6 is showing more details (resolution) than the 16Mp at f22. The latter has been reduced to effectively 2Mp. In this case, the 4Mp f5.6 shot can be upsized to 16Mp in Photoshop and it will produce a better print than the 16Mp f22 image. The 16Mp shot is practically no better than a 4Mp camera at very small apertures.