
Understanding the Effects of Diffraction (Part 2)

This article is a continuation of my previous post on understanding the effects of diffraction. That article sparked a long-winded discussion because some people decided to go deeper into diffraction without fully understanding some fundamental concepts. Add to that some bogus resolution graphs and the discussion went from bad to shite.

In the interest of further learning, let’s go back to the very basic principles behind lenses and light.


The main purpose of a photographic lens is to focus light onto the camera's sensor. Ideally, an incoming point light source is projected onto the sensor as a point. The reality is not quite that simple. Light rays near the center of the lens pass straight through the glass without any problems. However, light rays that do not pass through the center have to bend so as to meet the other light rays at the same focal point. The farther a light ray is from the center, the more sharply it has to bend. The problem is that lenses are not perfect. These imperfections, or aberrations, result in imprecise bending of light. Light rays near the edges of the glass don't quite hit the focal point. Some of them will converge just before the sensor and some just after it. The point light source is then projected onto the sensor no longer as a point but as something much larger. Refer to the simple illustration below. The red ray hits the focal point, the blue ray almost hits it, but the green ray, which is very near the edge, misses it entirely.

[Illustration: rays through a wide-open lens; the red ray hits the focal point, the blue ray nearly does, the green ray misses]

There are ways to work around lens aberrations. The most common method is by closing down the pupil to eliminate light rays that are near the edges of the lens. In photography, this is what happens when you close down or “stop down” the aperture. In the illustration below, the narrow pupil has eliminated the out-of-focus green ray leaving only the red and blue rays that are more focused.

[Illustration: rays through a narrowed pupil; only the well-focused red and blue rays remain]

The result is a smaller projected point that is truer to the original point source. The overall image projected onto the sensor will look sharper. The lens's performance has therefore improved, by using only the center of the glass. The downside is that since the pupil has eliminated some light rays, the resulting image will also look darker. Bottom line: you trade brightness for sharpness.


As discussed above, closing down the pupil improves the performance of the lens. It would seem, then, that you can make the pupil as narrow as you want and the lens performance will keep improving.

There is a problem though that is not quite the fault of the lens itself. This problem is a property of light itself: light changes direction when it hits edges or passes through holes. This change of direction is called diffraction. Diffraction is ever present as long as something is blocking light. So although a narrower pupil improves lens performance, light spreads out of control when it passes through a narrow opening. The narrower the pupil, the more the light changes direction uncontrollably. It's like squeezing a hose with running water: the tighter you squeeze, the wider the water spreads. In the end, light rays will still miss the focal point and we are back to the same dilemma where our point light source is projected at a much bigger size on the sensor.
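To put rough numbers on this spreading: in optics, the blur spot that diffraction makes from a point source is called the Airy disk, and its diameter grows in direct proportion to the f-number. Here is a quick sketch in Python; the 2.44 x wavelength x f-stop formula is standard optics rather than something from this article, and 550nm (green light) is an illustrative choice of wavelength.

```python
# Sketch: how much a point of light spreads due to diffraction alone.
# The bright central spot of a diffracted point source (the Airy disk)
# has a diameter of roughly 2.44 * wavelength * f-stop.

WAVELENGTH_MM = 550e-6  # 550 nm (green light) expressed in millimetres

def airy_disk_diameter_mm(f_stop, wavelength_mm=WAVELENGTH_MM):
    """Approximate diameter of the diffraction blur spot on the sensor."""
    return 2.44 * wavelength_mm * f_stop

for n in (2.8, 5.6, 8, 11, 16, 22):
    print(f"f/{n}: blur spot ~ {airy_disk_diameter_mm(n) * 1000:.1f} microns")
```

Each stop you close down multiplies the f-number by roughly 1.4, so the blur spot keeps growing; that is the "tighter squeeze, wider spray" of the hose analogy in numbers.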


We are now ready to understand what a diffraction-limited lens means.

Recall that depending on the size of the pupil, light rays that are farther away from the center of the lens will miss the focal point thus causing a point light source to be projected much larger on the sensor. Let’s assume for now that this point source is projected with a much larger diameter, X, on the sensor.

Now pretend for a moment that the lens is perfect, with no aberrations whatsoever. Recall that at the same pupil size, light diffracts (spreads) in such a way that some of the light rays miss the focal point, again resulting in a larger projected point, this time of diameter Y.

So now we have two different sizes of the projected point: size X caused by lens aberrations and size Y caused by diffraction (assuming that the lens was perfect).

If X is smaller than Y then the lens is said to be diffraction-limited at that pupil size or aperture. This means that the main contributor to image softness is diffraction instead of lens imperfections. The optimum performance of the lens is the widest aperture in which X remains smaller than Y. Simple.

If X is larger than Y, the problem becomes a bit more complicated. It means that lens imperfections are more dominant than diffraction, and therefore you can choose to make the aperture narrower to improve lens performance. Stopping down will of course decrease X but will increase Y. It becomes a delicate balancing act between lens imperfection and diffraction. This is a common problem with cheap kit lenses. At larger apertures, kit lenses have aberrations so bad that the image they produce looks soft. So you stop down to f/8 or f/11, and by then diffraction kicks in, softening the image. It's a lose-lose situation. That is why premium lenses are expensive. They are sharp wide open, where diffraction is negligible.

A lens that is diffraction-limited at f/5.6 is considered very good. A lens that is diffraction-limited at f/4 is rare. A lens that is diffraction-limited at f/2.8 is probably impossible.
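The X-versus-Y comparison can be sketched numerically. In the toy model below, Y uses the standard Airy-disk estimate, while X is a completely hypothetical aberration model (a real lens would need measured data); the function names and constants are my own illustration, not the article's.

```python
# Y: diffraction blur, standard Airy-disk estimate (2.44 * wavelength * N).
# X: aberration blur, a MADE-UP model that shrinks as the f-number grows.

WAVELENGTH_MM = 550e-6  # green light, in millimetres

def diffraction_blur_mm(n):
    return 2.44 * WAVELENGTH_MM * n

def aberration_blur_mm(n, blur_wide_open_mm=0.03, n_wide_open=1.4):
    # Hypothetical: aberration blur shrinks in proportion to 1/N.
    return blur_wide_open_mm * n_wide_open / n

F_STOPS = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22]  # widest first

def widest_diffraction_limited_stop():
    # The widest aperture at which X is already smaller than Y.
    for n in F_STOPS:
        if aberration_blur_mm(n) < diffraction_blur_mm(n):
            return n
    return None

print(widest_diffraction_limited_stop())
```

With these made-up numbers the crossover lands at f/5.6, which is the "very good lens" territory mentioned above; change the assumed wide-open blur and the answer moves accordingly.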

Let’s summarise the discussion:

1. Lenses are not perfect. Aberrations will cause the light rays to miss the focal point thus resulting in loss of sharpness.
2. Lens performance improves as you stop down the aperture.
3. Diffraction is a property of light that forces it to change direction when passing through holes. This causes light rays to miss the focal point thus resulting in loss of sharpness.
4. Diffraction is always present and worsens as you stop down the aperture.
5. A lens is diffraction-limited at a given aperture if the effects of aberrations are less pronounced compared to the effects of diffraction at that aperture.

That’s it for now. In the next article, we will discuss the effects of lens aberrations and diffraction on sensors.


Understanding the Effects of Diffraction

This post is a continuation of the previous article that I wrote about resolution and diffraction. I highly suggest that you read that one first so that you will gain a basic understanding of these concepts.

One thing that a lot of people still fail to understand is the absolute effect of diffraction on image resolution. A common argument for buying a higher megapixel camera is that it will "always" resolve more detail than a lower megapixel camera. That is true, but only until you hit the diffraction limit. For example, a full frame camera shot at f/16 will not resolve any detail beyond roughly 8Mp. That is, a 36Mp D800 will not give more detail than a 12Mp D700 when both are shot at f/16. Both will have an effective resolution of only about 8Mp.

To explain this, let us consider a very simple analogy. Notice that when you are driving at night in complete darkness, it is very difficult to tell whether an oncoming vehicle is a small car or a big truck if you judge only by its headlights. This is because the apparent separation between the left and right headlights depends on the distance of the vehicle from your position. The headlights seem to blur and close in on each other the farther the vehicle is from you. If the vehicle is far enough away, both headlights seem to merge as if there were just one light, and you would think it's a bike instead of a car. The reason is simple: light spreads. Both headlights spread until they seem to merge, and by then they become indistinguishable from each other. Diffraction is the same. Diffraction spreads light and you lose the details. Therefore it doesn't matter if you have two eyes or eight eyes like a spider; you still won't be able to distinguish two separate headlights if the oncoming vehicle is very far away. In this case, eight eyes are no better than two. Both sets of eyes still see only one headlight, not two. Think of the "number of eyes" as your sensor resolution. It does not matter if you have 8Mp or 2Mp; both cameras will detect only one headlight. Did the 8Mp lose resolution? No. It remained an 8Mp sensor. Did it manage to detect two headlights? No. Therefore, in our example, 8Mp is no better than 2Mp at resolving the number of headlights.

The point is that diffraction destroys details. When there is nothing to resolve, sensor resolution does not matter. Supposing that you have two lines that are very close together, diffraction will spread both lines such that they will appear to merge as if they are just one big line. If you only have one line to resolve it does not matter if you have a 2Mp camera or a 100Mp camera, both will detect only one line. The 100Mp camera will of course have more samples of that single line but it is still just one line. Diffraction does not affect sensor resolving power but it affects how the subject is presented to the sensor. Diffraction blurs the subject in such a way that it limits what the sensor can fully detect.

With that in mind, let us look at practical examples. For a full frame sensor, diffraction at f/8 is enough to blur the subject such that anything higher than approximately 30Mp will not resolve any more details. For each stop, the effective resolution drops by half so at f/11 the limit is 15Mp and at f/16 it’s 8Mp and at f/22 a measly 4Mp. These numbers are just approximations and assume that you have a perfect lens. The reality is much lower than those values.

How about smaller sensors like APS-C or m43? The effective resolution scales with sensor area, that is, it drops by the square of the crop factor. So an APS-C sensor shot at f/8 will only have a maximum effective resolution of around 13-15Mp, while m43 will have around 8Mp, and so on.
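One way to reproduce these numbers is a back-of-the-envelope model, not an exact optics calculation: take the Airy disk diameter as the smallest resolvable spot, require two pixels across it (the Nyquist criterion), and count how many such pixels fit on the sensor. The 36x24mm frame and 550nm wavelength are my illustrative assumptions.

```python
# Rough effective-resolution estimate: Airy disk diameter = 2.44 * wavelength * N,
# sampled at two pixels per spot (Nyquist). Real-world numbers come out lower,
# as noted in the text, since this assumes a perfect lens.

WAVELENGTH_MM = 550e-6
FF_WIDTH_MM, FF_HEIGHT_MM = 36.0, 24.0  # full frame dimensions

def effective_megapixels(f_stop, crop_factor=1.0):
    pixel_pitch_mm = 2.44 * WAVELENGTH_MM * f_stop / 2  # 2 px per Airy disk
    sensor_area = (FF_WIDTH_MM / crop_factor) * (FF_HEIGHT_MM / crop_factor)
    return sensor_area / pixel_pitch_mm ** 2 / 1e6

for n in (8, 11, 16, 22):
    print(f"f/{n}: full frame ~{effective_megapixels(n):.1f} Mp, "
          f"m4/3 ~{effective_megapixels(n, crop_factor=2.0):.1f} Mp")
```

This yields roughly 30Mp at f/8, 16Mp at f/11, 8Mp at f/16 and 4Mp at f/22 for full frame, halving per stop as described; a crop factor of 2 (m43) divides everything by 4, giving about 7.5Mp at f/8, close to the 8Mp figure above.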

Here are MTF graphs for a Nikon 50/1.4 lens comparing a 16Mp D7000 (crop sensor) with a 36Mp D800 (full frame) at f/5.6 and f/16 respectively. Notice that the resolution at those settings is very similar.

So what are the implications? If you are a landscape photographer with a 36Mp Nikon D800 and you shoot at f/8 or f/11 or maybe f/16 to gain enough depth of field, you are basically wasting disk space. At f/8, your 36Mp sensor is no better than a 30Mp sensor. At f/11 it's no better than a 16Mp D4. At f/16 it is no better than a very old 12Mp D700. So a 36Mp sensor shot at narrow apertures captures no more detail, yet the image size remains the same and still consumes the disk space of a 36Mp image. If you shoot at f/16, for example, you are better off shooting with a 12Mp D700. If you want to print as big as a 36Mp camera, then upsize your 12Mp image in Photoshop to the equivalent of a 36Mp image. Of course the upsized image will not gain any detail, but it doesn't matter because the 36Mp hasn't resolved any more detail anyway.

A related analogy is that of scanning photos. Good prints are usually done at 300dpi. When scanning photos, it does not make sense if you scan higher than that because you won’t gain anything. Scanners are capable of 4800dpi or even 7200dpi and maybe higher. If you scan a print at 7200dpi you will get a really huge image but with no more detail than when you scanned it at 4800dpi or lower. You could have just scanned it at 600dpi and you won’t notice any difference. The 7200dpi scan is a waste of time and disk space.
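The arithmetic behind the scanning analogy, assuming a 6x4 inch print (my example size, not the article's): pixel counts, and hence file sizes, grow with the square of the scan dpi, while the print itself holds only about 300dpi of real detail.

```python
def scan_megapixels(width_in, height_in, dpi):
    # Pixel count of a scan: (inches * dpi) in each dimension.
    return (width_in * dpi) * (height_in * dpi) / 1e6

PRINT_W_IN, PRINT_H_IN = 6, 4  # illustrative 6x4 inch print
for dpi in (300, 600, 4800, 7200):
    print(f"{dpi:>5} dpi -> {scan_megapixels(PRINT_W_IN, PRINT_H_IN, dpi):8.2f} Mp")
```

A 7200dpi scan of a 6x4 print is over 1,200Mp of file holding at most a couple of megapixels of real detail.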

Another common argument is that a sensor with lots of megapixels allows more cropping possibilities. Again, that is true only if you are not diffraction limited. Otherwise you could just shoot with a lower Mp camera, upsize the image and then crop and it will make no difference in terms of details.

This is why I have absolutely no interest in the D800, or in insanely high Mp APS-C cameras like the D7100, K-3 and A6000. I shoot mostly landscape. I stop down to f/11 and sometimes even to f/22. At those f-stops these cameras are just a waste of space, time and processing power. Again, a 36Mp full frame camera does not make sense unless you shoot mostly at f/5.6 and wider. A 24Mp APS-C is stupid unless you mostly shoot at f/5.6 and wider. Manufacturers keep increasing sensor resolution instead of improving noise performance because most photographers are gullible. Megapixels sell.

Having said that, do not be afraid to shoot at smaller f-stops if the shot calls for it. Even 4Mp effective resolution is a lot if you print at reasonable sizes. And since most people never print at all, 4Mp for web viewing is GIGANTIC!

For a more comprehensive explanation of the effects of diffraction refer to this article: http://www.luminous-landscape.com/tutorials/resolution.shtml

Shoot and shop wisely. 🙂

Understanding Your Lens (Part 3)

In this third instalment of understanding your lens series, we will be concentrating on f-stops.

First, we need to clarify the word f-stop, which a lot of photographers (including myself) loosely refer to as aperture. Although f-stop and aperture are related, they are different. Aperture refers to the actual opening of the lens, usually measured by the diameter of the opening. F-stop, on the other hand, is the ratio of the focal length to the diameter of the aperture. This is a very important distinction and we will look into it in detail later on.

Every photographer should be familiar with the common f-stop numbers. They are as follows:

f22, f16, f11, f8, f5.6, f4, f2.8, f2, f1.4

Memorize those numbers because they are very important especially when you start using manual exposure mode. You should also know by now that the smaller the f-stop number, the bigger the lens opening. That is, f16 is a smaller opening compared to f8 on the same lens. The photo below shows the huge difference between f1.4 (left) and f16 (right).


(Photo taken from Wikipedia)

Notice that I emphasized "on the same lens". Different lenses have different opening sizes. A telephoto lens at f8 can have a larger opening than a normal lens at f2.8. Let's see why …

Consider a 300mm telephoto lens and a 50mm normal lens. Recall that f-stop is the ratio of focal length and aperture diameter or mathematically:

f-stop = focal length / aperture diameter

or, rearranged:

aperture diameter = focal length / f-stop

So for the 300mm lens at f8 the opening is:

300mm / 8 = 37.50mm

and for the 50mm at f2.8:

50mm / 2.8 = 17.85mm

As you can see in the above example, there is a huge difference in the diameter of the lens openings. You can actually see this for yourself if you have a zoom lens. Look through the front element of your zoom lens and then zoom in and out without changing the aperture. You will notice how the opening increases in size as you zoom to a longer focal length.
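The two worked diameter calculations above can be wrapped in a trivial helper (a sketch using the same numbers as the text):

```python
def aperture_diameter_mm(focal_length_mm, f_stop):
    # Rearranged f-stop definition: diameter = focal length / f-stop.
    return focal_length_mm / f_stop

print(aperture_diameter_mm(300, 8))   # 37.5 (the 300mm telephoto at f8)
print(aperture_diameter_mm(50, 2.8))  # ~17.86 (the 50mm normal lens at f2.8)
```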

Understanding this very basic concept is important because f-stops control exposure. F-stops regulate the amount of light that hits the camera's sensor. It is quite obvious that the larger the opening, the more light comes in. The problem is that knowing the size of your lens opening alone is not enough. Back to the 300mm vs 50mm example above: 37.50mm is obviously larger than 17.85mm, but we also mentioned that f8 is "smaller" than f2.8, so what gives?!

Well, there is another factor that affects the amount of light hitting the sensor: the distance from the light source. The farther the light source, the less light arrives, which is why stars look much fainter than our sun. In photography, this distance is tied to the focal length of your lens. The longer the focal length, the farther the rear end of the lens sits from the sensor. Another thing you should understand is that light intensity varies with the square of the distance. What this means is that, given the same lens opening, light coming through a 50mm lens arrives at four times the intensity of light coming through a 100mm lens. Mathematically,

light intensity ratio = (100mm / 50mm)² = 4

It follows that for the 100mm lens to gather the same amount of light as the 50mm lens, it needs a larger opening. The immediate question is: larger by how much? This is why knowing your lens opening alone is not enough to control light. You would also have to keep track of your focal length. That would make photography so much more complicated than it should be.

And so they “invented” the f-stop. Again, recall that f-stop is a ratio of the focal length and the lens opening diameter. It is very clever because it calculates the light intensity for you automatically no matter what your focal length and your lens opening are. So given this information let’s calculate the lens opening diameter for both 100mm and 50mm at the same f-stop, say, f8:

100mm / 8 = 12.5mm


50mm / 8 = 6.25mm

So double the focal length requires double the lens opening diameter for the same amount of light hitting the sensor. Imagine having to change your lens opening every time you zoom in and out. Instead, you just set your f-stop to, say, f5.6 and let the lens handle the opening according to your focal length of choice. And that is exactly what you see when you keep a constant f-stop and look through the front element as you zoom in and out. Easy!

How does f-stop relate to the amount of light hitting the sensor? Each stop of difference is double the amount of light. For example, going from f5.6 to f4 is twice the light intensity and going from f4 to f2.8 is also double the intensity. So going from f5.6 to f2.8 is four times the amount of light and so on.
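Both claims above, that the f-number alone fixes the per-area intensity regardless of focal length, and that each full stop doubles the light, fall out of one relation: intensity is proportional to (diameter / focal length) squared, which is just 1 / f-stop squared. A quick numerical check:

```python
def relative_intensity(f_stop):
    # Per-unit-area intensity depends only on the f-number:
    # (aperture diameter / focal length)^2 = 1 / f_stop^2.
    return 1.0 / f_stop ** 2

# Same f/8 on a 50mm or a 300mm lens: identical relative intensity.
print(relative_intensity(8))

# Each full stop roughly doubles the light.
print(relative_intensity(4) / relative_intensity(5.6))   # ~2x
print(relative_intensity(2.8) / relative_intensity(5.6)) # ~4x
```

The ratios come out as ~1.96 and ~4 rather than exactly 2 and 4 because the marked f-numbers are rounded values of powers of the square root of 2.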

Let’s summarize what we have discussed so far:

1. The amount of light hitting the sensor is affected by the lens opening (aperture).

2. The amount of light is also affected by the distance of the light source to the sensor (lens focal length).

3. An f-stop is the ratio of #2 to #1 (focal length divided by aperture diameter). This allows us to easily control light intensity because the lens automatically adjusts the aperture as the focal length changes. We only have to worry about one parameter instead of two.

From this, it is easy to see why fast lenses, those with wide apertures such as f1.4, are much larger than slower f4 lenses. For the same focal length, the faster lens needs a wider opening diameter. This also explains why some zoom lenses have varying apertures and others have constant apertures. Lenses with varying apertures, say f4 at their widest to f5.6 at maximum zoom, are cheaper because the physical opening does not have to grow much going from wide to telephoto, so they can be smaller in diameter and use smaller glass elements. Constant aperture zooms are not only more expensive but also bigger and heavier because the opening has to grow wider as the focal length increases. Wider, bigger and more glass. Finally, this also explains why m43 lenses are much smaller than their full frame counterparts. The smaller m43 sensors require shorter focal lengths for the same angle of view and therefore smaller lens opening diameters.

Before I end this post, let me address a very common misconception. A lot of photographers think that full frame sensors are better than m43 sensors at capturing light simply because they have larger surface areas. But a sensor without a lens in front of it is useless, so the lens must be part of the comparison. With a lens in front, an f-stop means the same thing for any sensor size. For the same angle of view, a full frame camera requires a longer focal length, and therefore a proportionally larger opening, than an m43 camera with its shorter focal length. The result is that f5.6 on a full frame camera allows exactly the same amount of light per unit area as f5.6 on an m43 camera.

I hope you learned something in this post. There will be more next time.

Keep shooting.

Resolution and Sharpness

Once in a while I choose to write something technical about photography. In this post, I will discuss the science behind resolution and how it relates to sharpness and details. I will also try to explain the very common misconceptions. My background is in Physics where I concentrated on optics and digital signal processing. I won’t try to go deep with the maths but instead, explain it in such a way that even those that fear numbers will understand. 

First things first. When photographers say resolution, they usually mean the number of megapixels in a camera. For example, a 36Mp camera has a higher resolution compared to a 24Mp camera. This is a very simplistic way of looking at things. The assumption is that a 36Mp camera is capable of capturing more details given the same subject. Let’s see why this is not always true:

Suppose you are photographing very fine parallel lines with a gap of 1mm between them. If each pixel of your sensor is 2mm wide (yeah, that's really HUGE), it won't be able to tell two adjacent lines apart because it is too large; it will "step" over two lines at a time and record them as if they were one big line. The number of lines your sensor detects will be less than the number of lines in the actual subject. In fact, it will detect less than half of them. If your pixel is 1mm wide, it will detect more lines but still not all of them (a pixel that falls between the gaps will not detect any line at all). This process of capturing a subject with discrete sensor pixels is called sampling. According to the Nyquist sampling theorem, in order to detect all the lines we must sample at least twice as finely as the smallest detail we need to detect, meaning our sensor pixel width must be 1/2mm (0.5mm) or smaller. The capacity to detect individual lines is called resolving power. When we undersample (bigger sensor pixels), we lose details and can even record false patterns; this artifact is called aliasing.
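The line example can be simulated directly. The sketch below uses illustrative numbers only, and point-samples the scene rather than modelling a real sensor's area-averaging; it counts how many separate bright lines survive at different pixel pitches.

```python
LINE_SPACING_MM = 1.0      # lines 1mm apart, as in the text
LINE_HALF_WIDTH_MM = 0.1   # each line is 0.2mm wide (illustrative)
SCENE_MM = 10.0            # lines at x = 0, 1, ..., 10 -> 11 lines

def brightness(x):
    # 1.0 on a line, 0.0 in a gap.
    nearest = round(x / LINE_SPACING_MM) * LINE_SPACING_MM
    return 1.0 if abs(x - nearest) <= LINE_HALF_WIDTH_MM else 0.0

def lines_detected(pixel_pitch_mm, offset_mm=0.0):
    # Point-sample the scene at the pixel pitch, then count maximal runs
    # of bright samples: each separate run is one line the sensor resolves.
    samples, x = [], offset_mm
    while x <= SCENE_MM:
        samples.append(brightness(x))
        x += pixel_pitch_mm
    runs, prev = 0, 0.0
    for s in samples:
        if s == 1.0 and prev == 0.0:
            runs += 1
        prev = s
    return runs

print(lines_detected(2.0))                 # huge pixels: every sample is bright, one "line"
print(lines_detected(1.0, offset_mm=0.5))  # pixels land in the gaps: zero lines
print(lines_detected(0.5))                 # Nyquist pitch: all 11 lines separated
```

The 1mm pitch with a half-pixel offset is exactly the parenthetical case above: every sample falls in a gap and nothing is detected at all, which is why sampling finer than the Nyquist pitch is the only safe choice.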

From the example above, it is quite obvious that given a fixed sensor area, we need smaller, more tightly packed pixels to get a better sampling rate. To cram 36 million pixels into a full frame sensor, the pixels have to be smaller than when cramming (only) 24 million pixels into the same area. The 36Mp sensor, by virtue of its smaller pixels, samples more finely and thus resolves more detail. We therefore say that the 36Mp sensor has more resolving power than the 24Mp sensor. The 36Mp sensor has a higher pixel density (number of pixels per unit area) than the 24Mp sensor. Higher pixel density means more resolving power.

Now here’s why this isn’t always true in photography. In the case of the 36Mp and 24Mp sensors, we are comparing two full frame sensors. Not all sensors are full frame though. APS-C and m4/3 sensors are smaller. A 16Mp APS-C sensor (as in the Nikon D7000 or Pentax K5) contains pixels that are practically the same size (width) as those of a 36Mp full frame sensor (Nikon D800). They have the same pixel density. Therefore the 16Mp sensor has the same capacity to resolve details as its 36Mp big brother! They have the same resolving power! It also follows that the 16Mp APS-C has better resolving power than the 24Mp full frame sensor!

A bit of explanation is required when comparing the resolving power of sensors with different sizes. As we have mentioned, a 16Mp APS-C has more resolving power than a 24Mp full frame. The assumption is that everything else is constant. That is, they are using the same lens (i.e. same focal length). The same focal length results in the same optical magnification (another factor that affects resolution) although this would result in a narrower angle of view for smaller sensors. If you keep the angle of view constant, a 24Mp full frame will of course resolve more detail compared to a 16Mp APS-C.

Now before you decide to replace your 24Mp Canon 5D3 with an 18Mp Canon 7D, read further because the story does not end here. Sensor resolution is not everything.

Let’s properly define resolution first:

1. Resolution or resolving power is the capacity to detect details.

2. Image size is NOT necessarily equivalent to resolution. We have seen from the example above that a 16Mp APS-C sensor has better resolving power than a 24Mp full frame sensor even if the latter has a bigger image size.

Make sure you understand the concepts above. Photographers mistakenly equate megapixels with resolution. They are not the same, and as our 16Mp vs 24Mp example shows, doing so is downright wrong! What megapixels will always tell you is image size: a higher Mp count means bigger dimensions and more disk space, ALWAYS.

Let’s continue. Why should you not replace your 24Mp full frame 5D3 with an 18Mp 7D? Because pixel density is not everything. There is another factor involved in resolution and that’s your lens. I am not talking about glass quality per se. I am talking about lens aperture.

Assume for a moment that all lenses are created equal. That they are perfectly “sharp”. Here’s how aperture comes into play. You should know by now that an aperture of f8 has a smaller opening than f4. Here’s the problem: light “bends”. It bends when it passes through small openings. We call this diffraction. The tighter the opening, the more light diffracts. It is like a hose with running water. If you squeeze the hose, the water will spread (bend). Squeeze it tighter and the water spreads even more.

How does diffraction affect resolution? Well, it “spreads” the details. In our line example above, instead of the lines being 1mm apart, because of diffraction the lines become thicker and thus the gaps become narrower. It will come to a point where individual lines appear to merge and two lines become one. As diffraction worsens, four lines become one and so on.

In photography, as your aperture becomes narrower, diffraction worsens. For a full frame sensor, for example, diffraction at f11 is already bad enough that the maximum resolving power is equivalent to about 16Mp. It means that even if your sensor is 36Mp, the lines have already spread so much that the sensor is no longer capable of detecting every single one of them. Your 36Mp sensor has dropped in resolution as if it were only 16Mp. If you stop down further to f22, diffraction becomes so bad that most of the lines will have merged, and your 36Mp full frame sensor has now dropped to the equivalent of a 4Mp sensor!!! To fully utilise a 36Mp full frame sensor, one has to shoot at f5.6 or wider. At f5.6, a perfect lens is capable of feeding a 60Mp full frame sensor. At f8, around 30Mp. We therefore say that the Nikon D800's 36Mp sensor is diffraction limited at f8. Compare this with the Nikon D700, which is diffraction limited at around f11 because it only has a 12Mp sensor.
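Working backwards from a sensor's pixel pitch gives the same kind of numbers. This uses the same rough model as before: diffraction takes over once half the Airy disk diameter exceeds one pixel pitch. The pixel pitches are approximate published figures, not official specs, and 550nm is my illustrative wavelength.

```python
WAVELENGTH_MM = 550e-6  # green light

def diffraction_limited_fstop(pixel_pitch_mm):
    # The f-number at which half the Airy disk diameter equals the pixel pitch.
    return 2 * pixel_pitch_mm / (2.44 * WAVELENGTH_MM)

print(f"D800 (~4.9um pixels): f/{diffraction_limited_fstop(0.0049):.1f}")
print(f"D700 (~8.5um pixels): f/{diffraction_limited_fstop(0.0085):.1f}")
```

With these constants the D800 lands near f/7, i.e. roughly the f8 quoted above, and the D700 somewhere around f/11 to f/13; the exact answer shifts with the wavelength and the sampling criterion you pick.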

So what now? Well, it means that if you are shooting landscapes with a D800, which typically calls for apertures of f11-f22 for maximum depth of field, you are just wasting disk space. Your image size remains the same but you are not really resolving any more detail. Again, remember that image size is a direct result of the number of pixels. It does not change. Resolution, however, is affected by aperture. If you are shooting at f22 with a D800, you are no better off than shooting with an older D700 at the same aperture. Both will have the equivalent resolving power of 4Mp. In this case, you can use Photoshop to resize your D700's 12Mp image to 36Mp without losing or gaining anything vs the D800's image. At f11 and narrower, the D800 has no advantage whatsoever over the D700 in terms of resolution. Further, the D800 has the disadvantage of wasting disk space.

Ok, let’s go back to why you should not replace your 5D3 with a 7D (yet). Let’s use our water hose analogy again. Think of the APS-C sensor as a bucket and the full frame sensor as a much bigger bucket. Point the hose with running water at them. Now squeeze the hose. Notice that the smaller bucket will soon fail to catch all of the spreading water; some of the water coming from the hose will miss it. The bigger bucket will still catch all of the water at the same “squeeze strength”. We can say that the smaller bucket has lost some water (resolution). The same is true in photography. This is why landscape photographers who shoot with large format film cameras can shoot at f64 and still get very sharp images. You can’t do this with your measly full frame. This is also why pinhole photography with 135 film sucks. So how about that APS-C sensor? At f22, an APS-C sensor is only capable of about 2Mp maximum resolution vs 4Mp for a full frame sensor. Does this mean that APS-C is inferior to full frame? Not necessarily. If you shoot with APS-C you only need to stop down to f16 to get the same depth of field as a full frame sensor at f22. It just balances out, really.


Aperture affects resolution in ways that most photographers are not even aware of. In the case of the Nikon D800, the 36Mp sensor is just a waste of disk space starting at around f11. Whether this really affects the final output depends on how closely you pixel peep or how big you print. Fact is, at tighter apertures a 12Mp D700, with a bit of Photoshop magic, is as capable as a D800. It may even surpass the D800 by virtue of the better light gathering of its larger pixels.

I hope this post has made you think. Until next time.


Thanks to marceldezor for the link (http://www.talkemount.com/showthread.php?t=387).
The shots in there are very good examples of the effects of diffraction. Notice that the 4Mp shot at f5.6 shows more detail (resolution) than the 16Mp shot at f22. The latter has been reduced to an effective 2Mp. In this case, the 4Mp f5.6 shot can be upsized to 16Mp in Photoshop and it will produce a better print than the 16Mp f22 image. At very small apertures, the 16Mp camera is practically no better than a 4Mp camera.

Hyperfocus: How a Cheap Lens Changed the Way I Shoot

For me, photography is all about having fun. I do not want it to be another job. I’m not saying I do not want to be a pro. In fact, I would like to get paid just to travel and take photos. I may have taken this “fun” thing to a different level because I have grown a dislike for heavy (i.e. expensive) equipment. Yes, a 70-200/2.8 VR2 is nice to have, but I seriously can’t shoot with it for 15 minutes straight without having to shake my arms to relieve that tingling sensation behind my wrist. Yes, I’m cheap, and that cheapness has lost me a fair amount of money because I am forced to discard my cheap equipment for something a bit less cheap. And that brings us to the subject of this post: my cheap Sigma 17-70mm lens.

Like I said, I have grown a dislike for heavy equipment and that includes my stupid Nikon D700. So before I went on holiday in the Philippines, I decided to get a lighter camera. It had to be small and light, but it should not sacrifice image quality and, most importantly, it should not get in the way of photography. So I bought a Pentax K5, which was not so cheap back then. And because I’m a cheapskate, I was forced to buy the cheapest lens with the best possible zoom range, because there was no way I was going to get another lens. One camera, one lens. That’s it and nothing more. The Sigma 17-70mm fit the bill. Very good zoom range for just about anything. It goes from f2.8 at the widest end and closes down to f4.5 at full zoom. At $339 AUD it was perfect. Not!!!

It wasn’t until the day after that errant purchase that I noticed this lens is terribly soft, especially in the corners. At 17mm the corners are blurry unless I stop down to f11, and even at that aperture it is never sharp. Worse, this lens can never get the focus right. I updated the camera firmware because the K5 is known to have focus issues with older firmware versions, but that did not fix the problem. I returned 3 copies of the lens but all of them had the same back-focusing issue. I have grown to love my camera so I can’t return it, but on the other hand, I have this lens that I can’t replace because I’m a cheapskate.

What’s a cheapskate to do? I taught myself how to hyperfocus!

First things first, let’s tackle depth of field, aka how much of our photo is in sharp focus. We all know (I assume we all know) that we can control our depth of field by changing the lens aperture. The smaller the aperture (f11), the deeper the depth of field. If we want our subject to appear to pop out of the frame, we use a bigger aperture (f2.8) to make the background go out of focus in that creamy blur we call bokeh. Every amateur photographer should understand this. It doesn’t stop there. Another factor in depth of field control is the focal length: a longer focal length means a shallower depth of field. Lastly, there is distance to subject: the closer the subject in focus is, the blurrier the background becomes. So again, depth of field is controlled by 3 variables: 1) lens aperture, 2) lens focal length, 3) distance to subject. We need a firm grasp of these 3 basic concepts, otherwise the subsequent discussion will be tricky to comprehend. Actually, there’s a fourth factor, the sensor size (or crop factor), but let’s not deal with that because we can cheat, as I will show you later.
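If you like seeing the interplay of these 3 variables in numbers, here is a minimal Python sketch using the standard thin-lens depth-of-field approximations. The 0.02 mm circle of confusion is my assumed value for an APS-C sensor, not something from this post:

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.02):
    """Near and far limits of acceptable sharpness (thin-lens approximation).

    coc_mm is the circle of confusion; 0.02 mm is an assumed
    value for an APS-C sensor.
    """
    # Hyperfocal distance: focus here and sharpness extends to infinity.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:
        far = float("inf")          # everything out to the horizon is sharp
    else:
        far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# Same subject 2 m away: closing the aperture deepens the zone of sharpness.
near_wide, far_wide = dof_limits(17, 2.8, 2000)
near_stop, far_stop = dof_limits(17, 11, 2000)
```

Playing with the arguments shows all three effects: a smaller f-number, a longer focal length, or a closer subject all shrink the in-focus zone.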

In landscape photography, we usually want to capture the grand vista and have everything from the foreground to the background in sharp focus. A very common mistake made by beginners is to let the camera’s autofocus mechanism pick a focal point. Depending on how the camera is aimed, it may focus on the horizon and result in a blurry foreground, or focus on the nearest rock and make the background go out of focus. Night-time photography is a lot more difficult because your lens will just hunt and fail to focus properly. Sometimes we get lucky and have everything in sharp focus, but we want to control this instead of just relying on luck.

This depth of field control is called hyperfocusing. I’ll go slowly this time and try to explain without using any diagrams (coz I can’t).

If the camera is focused on the far horizon (infinity focus), everything from that horizon down to a certain distance between you and the horizon will be in sharp focus. So if you are standing on point A and the horizon is point C, there is a point B between A and C where everything between B and C is in sharp focus. Keep repeating that sentence until you understand the concept. Move to the next paragraph when you think you’re ready for the next one.

That point B is your hyperfocal point. What that means is, if you focus at point B, there is a point X approximately half-way between points A and B where everything between X and C is in sharp focus. Still with me? So we have something that looks like this:

A ---------- X ---------- B ---------------------- C
(you)        |<--------- in sharp focus ---------->|

Point B is the hyperfocal point and everything between X and C is in sharp focus. Take your time to digest those concepts before continuing to the next few paragraphs.

The question is, how do you find point B? Others will tell you to focus one-third of the way into the scene. It’s probably a good approximation but that does not work for me. What I do is memorize a few combinations of numbers. What numbers? You probably guessed it already: the numbers that pertain to the 3 factors that control depth of field.

Here’s an example: my lens goes from 17-70mm. I’m shooting landscape so I want to go as wide as possible, so I choose 17mm as my focal length. I know that my camera is soft in the corners unless I stop down to f11, so I use that as my aperture. What’s left is the distance to subject and this is where I cheat 😀 Open another tab in your web browser and point it to this URL: http://www.dofmaster.com/dofjs.html. Remember that other factor that controls depth of field? Yep, the sensor size. From there, choose your camera model. Yours might not be in there, just like my K5, so I chose the Pentax K7 instead. That’s cheat number one. Now enter the values of your chosen focal length and aperture in the remaining fields. In my case that would be 17 for focal length and f11 for aperture. Never mind the distance-to-subject field. Just click calculate and the frame on the right will automatically give you the hyperfocal distance, and that is cheat number two. In my case this number is 4.25 feet. It means that if I focus on something that is 4.25 feet away from me, then everything from 2.13 feet (that’s half-way) up to infinity will be in sharp focus. Don’t worry if the subject seems to be out of focus when viewed through the viewfinder. This is normal because your camera does open-aperture metering (as opposed to the stop-down metering of old film cameras), so you are viewing the scene at full aperture, f2.8, which isn’t your real aperture when you click the shutter. Trust that math will save the day.
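If you’d rather skip the website, DOFMaster’s number comes from the standard hyperfocal formula H = f²/(N·c) + f, where c is the circle of confusion for your sensor. A quick sketch; the 0.02 mm CoC is an assumption for an APS-C sensor like the K5’s, which is why the result lands close to, but not exactly on, DOFMaster’s 4.25 feet:

```python
MM_PER_FOOT = 304.8

def hyperfocal_feet(focal_mm, f_number, coc_mm=0.02):
    """Hyperfocal distance in feet; coc_mm is an assumed circle of confusion."""
    h_mm = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    return h_mm / MM_PER_FOOT

h = hyperfocal_feet(17, 11)   # roughly 4.4 ft with the assumed 0.02 mm CoC
near = h / 2                  # sharpness starts at about half the hyperfocal distance
```

Different calculators use slightly different CoC values per camera model, so expect small disagreements in the decimals, not in the principle.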

What I do is memorize the hyperfocal distances that correspond to my most used focal lengths, say, from 17mm to 24mm. If I’m not quite sure of my numbers, I compensate by closing down a stop further. For example, my lens has distance markings on the barrel. There’s one for 3 feet, the next one is 7 feet, then 10 feet, then infinity (that drunk number 8 lying on the floor). Suppose my chosen composition requires me to zoom in to around 24mm to avoid clutter. I’m not quite sure what my hyperfocal distance is for that focal length. So I set my lens focus distance to that 7-feet marker and stop down to f16. Had I remained at f11, subjects near the horizon would not be in sharp focus. By stopping down to f16, my hyperfocal distance changes such that I can get everything between 3.5 feet and infinity in focus. Neat!
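The numbers to memorize can be generated in one go. A sketch printing hyperfocal distances for a few focal lengths and apertures, again with my assumed 0.02 mm APS-C circle of confusion:

```python
MM_PER_FOOT = 304.8
COC_MM = 0.02  # assumed circle of confusion for an APS-C sensor

table = {}
for focal in (17, 20, 24):
    for f_number in (8, 11, 16):
        h_ft = (focal ** 2 / (f_number * COC_MM) + focal) / MM_PER_FOOT
        table[(focal, f_number)] = round(h_ft, 1)
        print(f"{focal}mm f/{f_number}: focus at {h_ft:.1f} ft, "
              f"sharp from {h_ft / 2:.1f} ft to infinity")
```

Under these assumptions the output backs up the barrel-marker trick: at 24mm, f11 puts the hyperfocal distance beyond the 7-feet marker, while f16 pulls it under 7 feet, so focusing at that marker keeps everything out to infinity sharp.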

So what’s the moral of the story? Do not let crappy equipment hinder your photography. Instead, try to find ways to work around the minor issues. In my case, I learned how to hyperfocus (and I hope you learned as well from reading this post). Every time I shoot landscape, I never use autofocus. Hyperfocusing is way superior.

Until then, have fun shooting!