Understanding the Effects of Diffraction (Part 2)

This article is a continuation of my previous post on understanding the effects of diffraction. That article sparked a long-winded discussion because some people decided to go deeper into diffraction without fully understanding some fundamental concepts. Add to that some bogus resolution graphs and the discussion went from bad to shite.

In the interest of further learning, let’s go back to the very basic principles behind lenses and light.

LENS ABERRATION

The main purpose of a photographic lens is to focus light onto the camera’s sensor. Ideally, an incoming point light source is projected onto the sensor as a point. The reality is not quite that simple. Light rays near the center of the lens pass straight through the glass without any problems. However, light rays that do not pass through the center have to bend so as to meet the other light rays at the same focal point. The farther a light ray is from the center, the more sharply it has to bend. The problem is that lenses are not perfect. These imperfections, or aberrations, result in imprecise bending of light. Light rays near the edges of the glass don’t quite hit the focal point: some of them fall just before the sensor and some fall after it. The point light source is then projected onto the sensor no longer as a point but as something much larger. Refer to the simple illustration below. The red ray hits the focal point and the blue ray almost hits it, but the green ray, which is very near the edge, misses it entirely.

[Illustration: light rays from a point source converging toward the focal point; the red ray hits it, the blue ray nearly hits it, and the green ray near the edge misses it.]

There are ways to work around lens aberrations. The most common is to close down the pupil to eliminate light rays near the edges of the lens. In photography, this is what happens when you close down or “stop down” the aperture. In the illustration below, the narrow pupil has eliminated the out-of-focus green ray, leaving only the red and blue rays, which are better focused.

[Illustration: a narrower pupil blocks the stray green ray, leaving only the red and blue rays.]

The result is a smaller projected point that is truer to the original point source, so the overall image projected onto the sensor looks sharper. Closing down the pupil has therefore improved the lens’s performance by utilising only the center of the glass. The downside is that since the pupil has eliminated some light rays, the resulting image will also look darker. The bottom line is that you trade brightness for sharpness.

DIFFRACTION

As discussed above, closing down the pupil improves the performance of the lens. It would seem that you can make the pupil as narrow as you want and lens performance will keep improving.

There is a problem, though, that is not quite the fault of the lens itself. It is due to a property of light: light changes direction when it hits edges or passes through holes. This change of direction is called diffraction. Diffraction is ever-present as long as something is blocking light. So although a narrower pupil improves lens performance, light spreads out of control when it passes through a narrow opening. The narrower the pupil, the more the light changes direction uncontrollably. It’s like squeezing a hose with running water: the tighter you squeeze, the wider the water sprays. In the end, light rays will still miss the focal point and we are back to the same dilemma where our point light source is projected at a much bigger size on the sensor.
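To put a rough number on this spreading: for a circular aperture, the diffraction blur of a point source (the Airy disk) has a first-minimum diameter of about 2.44 × λ × N, where λ is the wavelength of light and N is the f-number. A quick sketch, assuming green light at 550 nm (the helper name is my own, not from the article):

```python
# Rough size of the diffraction blur (Airy disk) versus f-number.
# Uses the standard first-minimum formula d = 2.44 * wavelength * N
# with green light at 550 nm; real-world blur also depends on the lens.

def airy_diameter_um(f_number, wavelength_um=0.55):
    """First-minimum diameter of the Airy disk, in micrometres."""
    return 2.44 * wavelength_um * f_number

for n in (2.8, 5.6, 8, 16, 22):
    print(f"f/{n}: Airy disk ~{airy_diameter_um(n):.1f} um")
```

The diameter grows linearly with the f-number, which is why the projected point keeps getting bigger as you stop down.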

DIFFRACTION-LIMITED LENS

We are now ready to understand what a diffraction-limited lens means.

Recall that depending on the size of the pupil, light rays that are farther away from the center of the lens will miss the focal point thus causing a point light source to be projected much larger on the sensor. Let’s assume for now that this point source is projected with a much larger diameter, X, on the sensor.

Now forget for a moment that the given lens has problems and assume it is perfect, with no aberrations whatsoever. Recall that at the same pupil size, light diffracts (spreads) in such a way that some of the light rays miss the focal point, again resulting in a larger projected point, of diameter Y.

So now we have two different sizes of the projected point: size X caused by lens aberrations and size Y caused by diffraction (assuming that the lens was perfect).

If X is smaller than Y then the lens is said to be diffraction-limited at that pupil size or aperture. This means that the main contributor to image softness is diffraction instead of lens imperfections. The optimum performance of the lens is the widest aperture in which X remains smaller than Y. Simple.

If X is larger than Y, the problem becomes a bit more complicated. It means that lens imperfections dominate over diffraction, and therefore you can choose to make the aperture narrower to improve lens performance. Stopping down will of course decrease X but will increase Y. It becomes a delicate balancing act between lens imperfection and diffraction. This is a common problem with cheap kit lenses. At larger apertures, kit lenses have aberrations so bad that the images they produce look soft. So you stop down to f/8 or f/11, and by then diffraction kicks in, causing the image to soften. It’s a lose-lose situation. That is why premium lenses are expensive: they are sharp wide open, where diffraction is negligible.
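The balancing act can be sketched numerically. This is only a toy model, not a real lens measurement: it assumes aberration blur shrinks roughly as 1/N as you stop down (the constant 42 is an arbitrary illustrative value) and combines the two blur sources in quadrature, a common approximation:

```python
import math

def aberration_blur_um(n, k=42.0):
    # Toy model: aberration blur (X) falls as the aperture narrows.
    # k = 42 is an arbitrary constant chosen purely for illustration.
    return k / n

def diffraction_blur_um(n, wavelength_um=0.55):
    # Diffraction blur (Y): Airy disk first-minimum diameter 2.44 * lambda * N.
    return 2.44 * wavelength_um * n

def total_blur_um(n):
    # Combine the two blur sources in quadrature.
    return math.hypot(aberration_blur_um(n), diffraction_blur_um(n))

stops = [2.8, 4, 5.6, 8, 11, 16, 22]
best = min(stops, key=total_blur_um)
print(f"sharpest f-stop in this toy model: f/{best}")
```

With these made-up numbers the combined blur bottoms out at f/5.6: wider than that, aberrations dominate; narrower, diffraction takes over.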

A lens that is diffraction-limited at f/5.6 is considered very good. A lens that is diffraction-limited at f/4 is rare. A lens that is diffraction-limited at f/2.8 is probably impossible.

Let’s summarise the discussion:

1. Lenses are not perfect. Aberrations will cause the light rays to miss the focal point thus resulting in loss of sharpness.
2. Lens performance improves as you stop down the aperture.
3. Diffraction is a property of light that forces it to change direction when passing through holes. This causes light rays to miss the focal point thus resulting in loss of sharpness.
4. Diffraction is always present and worsens as you stop down the aperture.
5. A lens is diffraction-limited at a given aperture if the effects of aberrations are less pronounced compared to the effects of diffraction at that aperture.

That’s it for now. In the next article, we will discuss the effects of lens aberrations and diffraction on sensors.


40 thoughts on “Understanding the Effects of Diffraction (Part 2)”

  1. This was one of your better works as it is not entirely rubbish 🙂

    “A lens that is diffraction-limited at f/2.8 is probably impossible.” – hardly, as I own at least one such lens. You might want to provide some evidence instead of presenting wild speculation as fact.

    “Diffraction is a property of light that forces it to change direction when passing through holes” – please read Wikipedia article on diffraction before writing something this silly.

    Also, please do not use the word “pupil” without specifying whether you mean the entrance pupil or the exit pupil, as they’re different.

    Your idea of lens aberrations is vastly overly simplified – study: http://toothwalker.org/optics.html

    Also, if the measurements conflict with your hypothesis (and they do – why not just check out the lens tests out there?), maybe you should do some more thinking about whether your hypothesis is correct, instead of trying to force reality to fit your idea rather than the other way around.

    If a 24MP sensor (sans AA-filter) would turn out 18MP at f/16, your hypothesis is way off (88% of Nyquist; it doesn’t matter what processing gets this, as no new information is being invented). For starters, you didn’t consider the color filter array of cameras in your hypothesis. Nor did you define the criterion for resolution well – one common one in use is the Rayleigh criterion. I think the key to your misunderstanding regarding this topic is that you haven’t quite grasped that the lens draws contrast, not pixels or resolution.

    1. I am not changing a thing. What I wrote here is as it is. It’s a simplified explanation meant for PHOTOGRAPHERS, not physics students. Photographers should be concerned with taking photos, not measurebating. The goal here is to understand, in the simplest way possible, how aperture might affect lens performance.

      BTW, there are several ways that light might change direction when hitting a medium. One is diffraction (discussed above), another is refraction, and the most obvious is reflection. Did you fail elementary physics?

      My articles are already very simplified and yet you and the other guy still fail to understand them. And now you want me to have a more complicated explanation? ROFL! You are funny. If you google, you will find what you are looking for. My blog is not a physics blog. It’s meant for photographers. Why don’t you go out and learn to shoot instead?

    2. Let me throw the question back at you. You, along with some idiots on DPR, believe that f/16 is capable of resolving up to 90MP. How is it then that you are only getting 18MP? Even if you stop down to f/22 you should get at least 40MP, or at f/32, 20MP, IF, a very big IF, you are correct. So your resolution at f/32 should at least match your data at f/2.8, right? I’ll eat a bat if you can show me that f/32 or even f/22 is as sharp as your f/2.8.

      1. My camera’s sensor’s got 24MP, so how could it record 90MP?
        And I never said that f/16 on my system resolves more than f/2.8 on my system. But the difference is quite small with suboptimal deconvolution.

        The key to your misunderstanding is that you don’t seem to realize that the lens only gives you contrast, which gradually goes down as you stop down the lens. Even without deconvolution a full frame can resolve much more at f/16 than you think, and with deconvolution far more than that.

        What is your criterion for separation of points? A quick calculation would suggest that your criterion is that the pixels should be fully separate (for the first zero of the Airy disk function), right?

        There are problems with that:

        1. The finest sampling in a sensor using a Bayer CFA is for green light, which is sampled at every other pixel; thus you should adjust the minimum separation distance by a factor of about 1.41 (as that’s the shortest distance from one green to the next – for reds and blues the factor is 2).

        2. That criterion would be way too strict if your intention is to figure out the maximum resolution. There is no need for close to 100% contrast between pixels; much less is needed. Please google the Rayleigh criterion – that’s the most commonly used criterion. However, it doesn’t give the maximum: if we seek the maximum possible resolution of the system, we must assume a large signal level (i.e. good light), so we can easily go well beyond the Rayleigh limit, and even beyond the Dawes limit.

        3. Deconvolution allows for restoring the original pre-diffracted signal to a degree. If we knew the point spread function fully, we could restore the image perfectly, but that is never the case in practice. Still, deconvolution increases the maximum resolution plenty.
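        As a rough sanity check of the Rayleigh point above, the Rayleigh criterion puts the diffraction-limited resolution near 1/(1.22 × λ × N) line pairs per millimetre. A quick sketch, assuming 550 nm light and 2 pixels per line pair, ignoring the Bayer CFA and lens aberrations:

```python
def rayleigh_lpmm(f_number, wavelength_mm=0.00055):
    # Rayleigh resolution limit in line pairs per millimetre.
    return 1.0 / (1.22 * wavelength_mm * f_number)

def megapixels_full_frame(lpmm):
    # Nyquist: two pixels per line pair, on a 36 x 24 mm sensor.
    return (2 * lpmm * 36) * (2 * lpmm * 24) / 1e6

for n in (5.6, 11, 16, 22):
    lpmm = rayleigh_lpmm(n)
    print(f"f/{n}: ~{lpmm:.0f} lp/mm, ~{megapixels_full_frame(lpmm):.0f} MP")
```

        At f/16 this simple model still allows roughly 93 lp/mm, i.e. about 30 MP on a full frame sensor, which is more than a 24MP sensor can sample.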

        Also, I find your comment absurd when you say “Photographers should be concerned about taking photos and not measurebating” – you try to measurebate, but you make big errors doing it. Maybe you should take your own advice and concentrate on taking photos – I’m sure you’re quite good at that. Measurebating has not been too successful for you so far.

      2. Did you read the LL article I referenced here? That’s the basis of my discussion. Simple as that.

        You are saying that practically all existing cameras should look sharp at the lens’ peak aperture up to f/22, because at that aperture it should be capable of at least 40MP. Max res at the moment is 36MP for the D800. Show me sample shots from a D800 at f/5.6 and f/22. They should be exactly as sharp at both apertures. Show me and I will bow down to you.

  2. “If X is smaller than Y then the lens is said to be diffraction-limited at that pupil size or aperture. This means that the main contributor to image softness is diffraction instead of lens imperfections.
    The optimum performance of the lens is the widest aperture in which X remains smaller than Y. Simple. ”

    Gee that sounds like this
    Here to stay
    “The resolution of a lens is limited by diffraction and by aberrations.
    The best resolution you can obtain from a lens is at the point where the blur from aberrations is equal to the blur you get from diffraction (stopping a lens down to the point just before you see diffraction)

    Blur from aberrations-> optimal resolution <-blur from diffraction"

    and again here this sounds the same
    "Clearly you don’t understand that the max. resolution that a lens can project is just before the point of it showing diffraction."

    So I ask: what has changed, given that you have repeatedly stated this is incorrect?
    https://dtmateojr.wordpress.com/2014/10/09/understanding-the-effects-of-diffraction/#comment-518
    https://dtmateojr.wordpress.com/2014/10/09/understanding-the-effects-of-diffraction/#comment-521

    This question submitted to LL had nothing to do with this sudden reversal
    http://www.luminous-landscape.com/forum/index.php?topic=94395.0

    1. Diffraction need NOT match lens performance for max resolution. If a lens is good enough at f/4 then it will peak at f/4, even where diffraction is negligible.

      So in your stupid graph, the same lens that peaked at f/5.6 on the D800 should peak at f/5.6 on any full frame camera. That’s why your graphs are bogus. And yet you tried so hard to defend why the D700 peaked at f/8.

      Go back to LL and learn some more. Don’t come back until you got hit by a big clue bat.

      1. The size and separation between Airy disks imposes a particular sampling interval, that is, a pixel spacing or pixel pitch. When the pixel is too big, some detail is lost and the system is resolution-limited. If the pixel is too small, the system doesn’t resolve more detail, and it is diffraction-limited. There seems to be a minimum contrast threshold, and therefore a minimum disk separation, which translates to a maximum resolvable signal frequency and a minimum pixel pitch.

      2. Continue that line of thinking. I’ll give you a hint: compare the sensel size to the Airy disk size at f/5.6 and then at f/8, then see if the resolution is limited by the system or by diffraction.

  3. The maximum resolvable signal frequency is at the point where the system can resolve the smallest Airy disk, and that, my friend, is just before the image you capture starts to show diffraction. Now tell me, does the D800 show diffraction earlier than the D700 when stopping down?

    1. Here to stay: “The maximum resolvable signal frequency is at the point where the system can resolve the smallest Airy disk, and that, my friend, is just before the image you capture starts to show diffraction. Now tell me, does the D800 show diffraction earlier than the D700 when stopping down?”

      This is correct. I would look at it from this standpoint: we have blur from sensor resolution, diffraction, and aberrations. Where all three share the same amount of blur in the image is where the system peaks.

        1. The system will be limited by the worst contributor. If the lens is perfect and there is no diffraction, then the system is limited only by sensor resolution. If a lens is diffraction-limited at f/5.6, where a theoretical max res of 60MP is achievable, every full frame camera will peak at that f-stop for that lens. In the case of a D700, where sensor resolution is less than the theoretical max resolution from f/5.6 to f/11, we expect the MTF graph to be flat at a maximum of 12MP from f/5.6 to f/11 (possibly with a slight dip).

        Read the analogy about scanning 300dpi prints: scanning at 600dpi or 7200dpi will not give any difference in results. In the same way, we expect both the D700 and D800 to peak at f/5.6 because both systems are limited by sensor resolution. No full frame camera has exceeded 60MP yet, so any lens that is diffraction-limited at f/5.6 should peak at that aperture on every camera.

        This is common sense.
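        The “limited by the worst contributor” idea can be sketched as a toy model: take the delivered resolution to be the minimum of the sensor’s pixel count and a diffraction-limited figure per aperture. The per-aperture megapixel values below are illustrative placeholders, not measurements:

```python
# Toy model of "the system is limited by the worst contributor".
# The per-aperture diffraction-limited megapixel figures are
# illustrative placeholders, not measured values.
DIFFRACTION_LIMIT_MP = {5.6: 60, 8: 30, 11: 15, 16: 8}

def system_resolution_mp(sensor_mp, f_number):
    # The delivered resolution is capped by whichever is worse.
    return min(sensor_mp, DIFFRACTION_LIMIT_MP[f_number])

for name, mp in (("D700", 12), ("D800", 36)):
    curve = {n: system_resolution_mp(mp, n) for n in sorted(DIFFRACTION_LIMIT_MP)}
    print(name, curve)
```

        In this model the 12MP body stays flat at 12MP from f/5.6 to f/11 while the 36MP body falls off as soon as the diffraction figure drops below 36, which is the behaviour the comment describes.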

    1. Ignored.
      Go back to LL and explain to them why the D800 peaked at f/5.6 and the D700 at f/8 for the same lens. Let’s see who will look like a clown. 🙂

        1. I am pretty sure they would agree with me, as it is their graph.

        The size and separation between Airy disks imposes a particular sampling interval, that is, a pixel spacing or pixel pitch. When the pixel is too big, some detail is lost and the system is resolution-limited. If the pixel is too small, the system doesn’t resolve more detail, and it is diffraction-limited. There seems to be a minimum contrast threshold, and therefore a minimum disk separation, which translates to a maximum resolvable signal frequency and a minimum pixel pitch.

        You will need a pixel with a diagonal at least as large as the diameter of the Airy disk in order to detect the spot size, its position, and its brightness. Therefore, in theory, 1.4 times the pixel size (the length of the diagonal of the square pixel) must equal the Airy disk diameter for a diffraction-limited lens. This would imply that the pixel diagonal is the diameter of the sensor’s circle of confusion.

        [Table 2. Minimum Airy disk diameter and optimal sampling frequency/pixel size for different wavelengths of light and a diffraction-limited lens.]

        As shown in this table, the smaller the Airy disk one is able to sample, the higher the sampling frequency. (This is why the D800 peaks earlier than the D700.)

      2. Ignored.
        If you are “…pretty sure they would agree …” with you then why don’t you show them your bogus graphs and explain what I think is anomalous? You have come to a tipping point where nothing I say would ever convince you so go to other people that could hit you with a bigger cluebat.
        You are wasting my time.

      3. So did you learn anything or are you ignoring the people at LL as well? Why don’t you give them your explanation as to why your bogus graphs are correct even if they defy physics? Surely, you just can’t give up that easily, no? Or in your own words “…are you man enough or chicken…”?

  4. Here to stay
    “Are you saying that a lens’s resolution is limited by both diffraction and aberrations at the same time?
    Would this imply that the greatest resolution from a lens would lie at the point where the blur from diffraction and the blur from aberrations are the same?”
    From the looks of it, not one person has disagreed with me on this over at LL,
    and you, my friend, have objected to this very description over and over,
    and after the LL thread you go silent on it. If I have said anything incorrect in the above statement, why don’t you correct me in the thread over at LL and see who gets whacked with that bat of yours?

      1. I have also found another source showing that different pixel sizes change the aperture at which we see maximum resolution. And as far as I can see, no one has disputed that resolution plays a role in where the system peaks. If you feel that these findings and graphs are bogus, why not log on and let everyone know?

      2. Didn’t LL already tell you that lenses should peak at the same aperture? If you believe otherwise then prove it to them. I can see you are holding off on your comments in that forum. You are the one defying physics, so the burden of proof is in your hands.

      3. Nowhere did anyone say diffraction should peak at the same f-stop. The closest anyone came was “I have run MTF tests in pixel sizes from 9 µm to 3.8 µm, and lens performance essentially always peaks at the same apertures.”
        That’s a far cry from “should”.

        Because you feel the need to always call them bogus graphs, maybe you should point this out to the authors of the graphs, as they are presenting this false data.
        http://www.lensrentals.com



        Or here’s another source using the 50mm F1.8 G on a 5.92 µm pixel D3x

        and on a 3.39 µm pixel V1.

        They peak at a one-stop difference,
        and if you feel that they are also wrong, you should write to them as well.

      4. Again, you are the only person who has objected to the graphs.
        No one at LL has stated the graphs are bogus.
        Again, if you feel that they are bogus then please let everyone at
        http://www.luminous-landscape.com/forum/index.php?topic=94394.80
        know.
        I am sure that if they felt the graphs were bogus, they would be the first to let me know. Please show me up and post at LL
        how bogus they are rather than sitting here.

      5. Obviously you are, in your own words, “chicken”. Scared of the truth. You are brave only when dealing with me but shit scared at LL.

      6. All the graphs presented here have been shown to the LL forum, and at this time no one but you has protested them as bogus.

      7. Did you ask why the D800 peaked at f/5.6 and the D700 at f/8 for the same lens? Better yet, explain it to them. Yeah, explain it to them. That would be more fun 😀

  5. Even this source here gives a reason why you would see a peak at a different f-stop:
    “The size and separation between Airy disks imposes a particular sampling interval, that is, a pixel spacing or pixel pitch. When the pixel is too big, some detail is lost and the system is resolution-limited. If the pixel is too small, the system doesn’t resolve more detail, and it is diffraction-limited. There seems to be a minimum contrast threshold, and therefore a minimum disk separation, which translates to a maximum resolvable signal frequency and a minimum pixel pitch.

    You will need a pixel with a diagonal at least as large as the diameter of the Airy disk in order to detect the spot size, its position and brightness. Therefore, in theory, 1.4 times the pixel size (the length of the diagonal of the square pixel) must be equal to the diameter of the Airy disk. This would imply that the pixel diagonal is the diameter of the sensor’s circle of confusion.

    However, an Airy disk can define a line pair, and you will need two pixels in order to extract this linear information from the spots and to avoid spatial aliasing. The general rule for optimal sampling is therefore 2 pixels per Airy disk diameter in monochrome sensors, which matches the Nyquist rate of 2 pixels per line pair. In practice, higher sampling frequencies don’t increase the resolved detail[10].”

    Please tell me that they are wrong also
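    The “2 pixels per Airy disk diameter” rule quoted above can be turned into numbers. A sketch, assuming green light at 550 nm (the function names are mine, for illustration):

```python
def airy_diameter_um(f_number, wavelength_um=0.55):
    # First minimum of the Airy pattern: d = 2.44 * lambda * N.
    return 2.44 * wavelength_um * f_number

def optimal_pixel_pitch_um(f_number):
    # The rule quoted above: two pixels per Airy disk diameter.
    return airy_diameter_um(f_number) / 2

for n in (4, 5.6, 8, 11):
    print(f"f/{n}: Airy ~{airy_diameter_um(n):.1f} um, "
          f"optimal pitch ~{optimal_pixel_pitch_um(n):.1f} um")
```

    For what it’s worth, under this rule a ~5.9 µm pitch (the D3x mentioned above) lands near the f/8 figure and a ~3.4 µm pitch (the V1) near f/5.6, which is consistent with the claimed one-stop difference in where the two bodies peak.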

  6. The LL article is somewhat sloppy and flawed as it’s way oversimplified – maybe a professor of economics is not an expert at physics…

    BUT maybe you also missed the chart where different criteria are compared. If we’re after maximum resolution, as you are, then we should think about the criterion we can use for a high-quality image (meaning an image taken with a good lens and with enough photons to give the sensor a contrasty signal to sample). Thus we can easily choose the Dawes criterion (we could use an even lower contrast difference).

    Assuming the chart is right, at f/22 the Dawes criterion allows 114 lp/mm, i.e. 114 line pairs per millimeter on the sensor. A full frame sensor is about 36 by 24 millimeters. This leads to about 45MP!

    However, that is for a black and white sensor. With a Bayer CFA you need to multiply the 114 lp/mm by 1.414, because the greens are 1.414 pixels away from each other. Thus we get almost 90MP!

    (Note, the 90MP requires a contrasty subject. If the subject has little contrast, such resolution isn’t reached.)
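    The arithmetic in the two paragraphs above can be checked directly (assuming 2 pixels per line pair and a 36 × 24 mm sensor; the 1.414 Bayer factor is the comment’s own assumption):

```python
def megapixels(lpmm, width_mm=36, height_mm=24):
    # Nyquist sampling: two pixels per line pair in each direction.
    return (2 * lpmm * width_mm) * (2 * lpmm * height_mm) / 1e6

mono = megapixels(114)           # monochrome sensor at 114 lp/mm
bayer = megapixels(114 * 1.414)  # green channel of a Bayer CFA
print(f"monochrome: ~{mono:.0f} MP, Bayer-adjusted: ~{bayer:.0f} MP")
```

    This reproduces the ~45MP and ~90MP figures (the Bayer factor doubles the pixel count, since 1.414² ≈ 2).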

    Now, this does not take into account deconvolution. The more pixels we have, the more information we have about the pre-convolved image, thus the better deconvolution result we can achieve, thus even higher resolution is there waiting.

    Also, adding more pixels reduces aliasing, including moire and makes the AA-filter unnecessary. Hardly a trivial thing.

    Anyhow, your own article proves the 90MP figure you laughed at, so why not try the calculation yourself? Now we all laugh at your stupidity 🙂

    1. Then enough BS and show me 90Mp at f/22. Which part of “show me” do you not understand?

      Now if that seems to be quite impossible (i.e. it’s just plain BS) then show me something that you can do. You claim to have a lens that peaks at f/2.8 (BS!!!) so give me a shot at f/2.8 and then another shot at f/22 and prove to me once and for all that you are correct.

      1. I can’t post to your blog, you coward.
        What part of thinking do you not understand? A bit thick, maybe?

        I didn’t say my lens peaks at f/2.8. I just said that it’s diffraction-limited at f/2.8. It may well resolve better still at f/2…

        Coward, you hide behind a silly blog…

  7. Do you not understand what lp/mm is?
    The very article you cited said: 114 lp/mm at f/22.
    Not able to turn that into a pixel count? You really are stupid.

      1. I don’t do gay stuff like you, so I’m not showing you anything. Especially since I can’t post images to your silly little blog.
        How much is 114 lp/mm on a full frame sensor? How many pixels? Too stupid to be able to calculate. Silly boy.
