Category Archives: Equipment

Understanding the Effects of Diffraction

This post is a continuation of the previous article that I wrote about resolution and diffraction. I highly suggest reading that one first to gain a basic understanding of these concepts.

One thing that a lot of people still fail to understand is the absolute effect of diffraction on image resolution. A common argument for buying a higher megapixel camera is that it will “always” resolve more detail than a lower megapixel camera. That is true, but only until you hit the diffraction limit. For example, a full frame camera shot at f/16 will not resolve any detail higher than 8Mp. That is, a 36Mp D800 will not give more detail than a 12Mp D700 when both are shot at f/16. They will both have an effective resolution of only 8Mp.

To explain this, let us consider a very simple analogy. Notice that when you are driving at night in complete darkness, it is very difficult to tell whether an oncoming vehicle is a small car or a big truck if you judge only by its headlights. This is because the apparent separation between the left and right headlights depends heavily on the distance of the vehicle from your position. The headlights seem to look smaller and closer together the farther the vehicle is from you. If the vehicle is far enough, both headlights will seem to merge as if there is just one light and you would think it’s a bike instead of a car. The reason is simple: light spreads. Both left and right headlights spread until they seem to merge, and by then they become indistinguishable from each other. Diffraction is the same. Diffraction spreads light and you lose the details. Therefore it doesn’t matter if you have two eyes or eight eyes like a spider, you still won’t be able to distinguish two separate headlights if the oncoming vehicle is very far away. In this case, eight eyes are no better than two eyes. Both sets of eyes still see only one headlight, not two. Think of the “number of eyes” as your sensor resolution. It does not matter if you have 8Mp or 2Mp, both cameras will detect only one headlight. Did the 8Mp lose resolution? No. It remained an 8Mp sensor. Did it manage to detect two headlights? No. Therefore, in our example, an 8Mp sensor is no better than a 2Mp sensor at resolving the number of headlights.

The point is that diffraction destroys details. When there is nothing to resolve, sensor resolution does not matter. Suppose you have two lines that are very close together: diffraction will spread both lines such that they appear to merge into one big line. Once there is effectively just one line to resolve, it does not matter if you have a 2Mp camera or a 100Mp camera; both will detect only one line. The 100Mp camera will of course take more samples of that single line, but it is still just one line. Diffraction does not affect the sensor’s resolving power, but it does affect how the subject is presented to the sensor. Diffraction blurs the subject in such a way that it limits what the sensor can possibly detect.

With that in mind, let us look at practical examples. For a full frame sensor, diffraction at f/8 is enough to blur the subject such that anything higher than approximately 30Mp will not resolve any more detail. For each additional stop, the effective resolution drops by half: at f/11 the limit is about 15Mp, at f/16 it’s 8Mp, and at f/22 a measly 4Mp. These numbers are approximations and assume a perfect lens. Real-world numbers are lower still.

How about smaller sensors like APS-C or m43? The decrease in resolution is proportional to the crop factor. So an APS-C shot at f/8 will only have a maximum effective resolution of 15Mp while m43 will have 8Mp and so on.
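The scaling above fits in a tiny back-of-the-envelope calculation. This is only a sketch of the rule of thumb used in this post (roughly 30Mp at f/8 on full frame, halving with each stop, scaled down by crop factor squared); it assumes a perfect lens, and the function name is mine:

```python
import math

def diffraction_limited_mp(f_stop, crop_factor=1.0):
    """Rough effective-resolution ceiling per the rule of thumb above:
    ~30Mp for full frame at f/8, halving with every extra stop,
    scaling down by crop factor squared for smaller sensors."""
    stops_past_f8 = 2 * math.log2(f_stop / 8.0)   # f-stops beyond f/8
    return 30.0 / (2 ** stops_past_f8) / (crop_factor ** 2)

print(round(diffraction_limited_mp(16)))        # full frame at f/16 -> ~8
print(round(diffraction_limited_mp(8, 2.0)))    # m43 at f/8 -> ~8
```

Nominal f-numbers like f/11 and f/22 are not exact one-stop steps, so the function returns values slightly off the rounded figures quoted in the text.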

Here are MTF graphs for a Nikon 50/1.4 lens comparing a 16Mp D7000 (crop sensor) with a 36Mp D800 (full frame) at f/5.6 and f/16 respectively. Notice that the resolution at those settings is very similar.

So what are the implications? If you are a landscape photographer with a 36Mp Nikon D800 and you shoot at f/8, f/11 or maybe f/16 to gain enough depth of field, you are basically wasting disk space. At f/8, your 36Mp sensor is no better than a 30Mp sensor. At f/11 it’s no better than a 16Mp D4. At f/16 it is no better than a very old 12Mp D700. A 36Mp sensor shot at small apertures is not able to capture any more detail, and yet the image size remains the same and consumes a full 36Mp worth of disk space. If you shoot at f/16, for example, you are better off shooting with a 12Mp D700. If you want to print as big as a 36Mp camera then upsize your 12Mp image in Photoshop to the equivalent of a 36Mp image. Of course the upsized image will not gain any detail, but it doesn’t matter because the 36Mp sensor hasn’t resolved any more detail anyway.

A related analogy is scanning photos. Good prints are usually done at 300dpi. When scanning prints, it does not make sense to scan much higher than that because you won’t gain anything. Scanners are capable of 4800dpi or even 7200dpi and maybe higher. If you scan a print at 7200dpi you will get a really huge image, but with no more detail than if you had scanned it at 4800dpi or lower. You could have just scanned it at 600dpi and you wouldn’t notice any difference. The 7200dpi scan is a waste of time and disk space.
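The arithmetic behind that claim is easy to check. A quick sketch (sizes in inches; the 8×10 print is my example):

```python
def scan_megapixels(width_in, height_in, dpi):
    """Pixel count, in megapixels, of a scan at a given dpi."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

# An 8x10 print scanned at 600 dpi vs 7200 dpi:
print(scan_megapixels(8, 10, 600))    # 28.8 Mp -- already beyond a 300dpi print's detail
print(scan_megapixels(8, 10, 7200))   # 4147.2 Mp -- 144x the file size, zero extra detail
```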

Another common argument is that a sensor with lots of megapixels allows more cropping possibilities. Again, that is true only if you are not diffraction limited. Otherwise you could just shoot with a lower Mp camera, upsize the image and then crop and it will make no difference in terms of details.

This is why I have absolutely no interest in the D800, or in insanely high-Mp APS-C cameras like the D7100, K-3 and A6000. I shoot mostly landscape. I stop down to f/11 and sometimes even to f/22. At those f-stops these cameras are just a waste of space, time and processing power. Again, a 36Mp full frame camera does not make sense unless you shoot mostly at f/5.6 and wider. A 24Mp APS-C is stupid unless you mostly shoot at f/5.6 and wider. Manufacturers keep increasing sensor resolution instead of improving noise performance because most photographers are gullible. Megapixels sell.

Having said that, do not be afraid to shoot at smaller f-stops if the shot calls for it. Even 4Mp effective resolution is a lot if you print at reasonable sizes. And since most people never print at all, 4Mp for web viewing is GIGANTIC!

For a more comprehensive explanation of the effects of diffraction refer to this article:

Shoot and shop wisely. 🙂

Debunking Equivalence

What is equivalence? If you haven’t heard of this term used in photography before then don’t bother; you didn’t miss anything. (Part two is here)

If you are curious though, it simply means that different formats or sensor sizes require different settings in order to produce “the same” or equivalent images. Usually, equivalence proponents use the 35mm full frame sensor as the “reference standard”. For example, for a m43 sensor and full frame sensor to have the same angle of view (AoV), the m43 will have to use a 25mm lens and the full frame a 50mm lens because the m43 sensor is smaller; four times smaller to be exact. It doesn’t end there. Since a 25mm lens has a shorter focal length compared to a 50mm there will be differences in depth of field (DoF). The shorter 25mm will have to shoot at f/4 to get the same DoF as a 50mm at f/8.

There are other “parameters” involved in this “equivalence”. For more details, refer to this article in dpreview:

That dpreview article is funnily entitled “What is equivalence and why should I care”. Should you really care about equivalence? Most photographers don’t. Real photographers vary their shooting technique depending on the camera they have with them. Give a photographer a mobile phone and he will capture fantastic images without pretending he is carrying a DSLR. I own a mobile phone, several point-and-shoot cameras, a few m43’s, an APS-C camera and full frame cameras. I know exactly what each one of them is capable of and I shoot accordingly. I don’t expect shallow DoF from my iPhone, so every time I shoot portraits with it I need to be careful that the background does not distract from the main subject. Here is an example of how you can capture professional-looking portraits with a simple iPhone 3GS:

Bottom line is, gear does not matter. If gear does not matter, equivalence does not matter.

But let’s not stop there. There is more to that equivalence article. To be precise, there is a lot of incorrect information in that article that is very misleading if you are not careful. The biggest piece of misinformation that equivalence proponents spread in forums is that of “total light captured”. I will try to debunk equivalence in the next few paragraphs.

For the sake of example, let’s compare a m43 and a full frame (FF) sensor. By now you should already be aware that a FF sensor is four times larger than a m43 sensor. The m43 crop factor is therefore 2x. It follows that to shoot “the same image” we will have to use different lenses and use different f-stops like so:

m43: 25mm at f/5.6
FF: 50mm at f/11

This will result in the same AoV and DoF. Now what about the rest of the exposure triangle? This is where equivalence-fu starts becoming really stupid. The proponents insist that you must use the same shutter speed for both m43 and FF to arrive at the same image, so that both images show the same “blurring” due to subject motion (ROFL!!!). The example above then becomes:

m43: 25mm, f/5.6, 1/125s
FF: 50mm, f/11, 1/125s

Wait, doesn’t that underexpose the FF image? Indeed it does. By two stops, to be exact! Didn’t I say it was stupid? In what world are two images, two stops apart, considered “the same”? One is obviously darker. Much darker. Equivalence proponents must have something up their sleeves 🙂 You probably guessed it already. They say that you can bump up the ISO of the full frame shot so that it will be of the same brightness as the m43 shot! So now the example becomes:

m43: 25mm, f/5.6, 1/125s, ISO 100
FF: 50mm, f/11, 1/125s, ISO 400

Seriously?!!! Let’s be very clear about this. Bumping up the ISO does not increase light. ISO has absolutely no effect on exposure. Learn about that here. So why are the equivalence-fu proponents suggesting that this ISO bump makes both images equivalent? Their reasoning is quite simple and stupid: because both sensors have gathered “the same total amount of light”!!! Recall that each stop of exposure means twice the amount of light. Since a m43 sensor is four times smaller than a FF sensor, underexposing the FF by two stops (a quarter of the light per unit area) will still result in the same TOTAL light captured by each sensor. If that isn’t stupid then I don’t know what is.
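For the record, here is the arithmetic behind the recipe described above, in code form: focal length and f-stop scale by the crop factor, ISO by the crop factor squared, and shutter speed stays put. The function name is mine, and this merely encodes what the proponents claim, not an endorsement of it:

```python
def ff_equivalent(focal_mm, f_stop, iso, crop_factor):
    """Full-frame 'equivalent' settings per the equivalence recipe:
    same AoV, same DoF, same claimed 'total light'; shutter speed
    is kept the same."""
    return (focal_mm * crop_factor,      # same angle of view
            f_stop * crop_factor,        # same depth of field
            iso * crop_factor ** 2)      # same output brightness

# m43 (crop 2x): 25mm, f/5.6, ISO 100 -> FF: 50mm, ~f/11, ISO 400
print(ff_equivalent(25, 5.6, 100, 2.0))
```

Note that 5.6 × 2 gives f/11.2; f/11 in the example above is the nominal one-stop marking.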

Let’s discuss this further by using a simple experiment. Supposing that we have a m43 camera and we shoot a scene using a 25mm lens. We can produce a full frame equivalent image of the same scene with the same AoV using the same m43 camera by stitching four shots from a 50mm lens. Refer to the illustration below:

[Illustration: a single 25mm m43 shot beside four stitched 50mm m43 shots covering the same AoV as a full frame sensor]

As you can see, the smaller single shot image captured with a 25mm lens will look exactly the same as the larger stitched image which is equivalent to what a full frame sensor would have captured. The narrower AoV of the 50mm lens means that we need four shots stitched side by side to arrive at the same AoV as the 25mm shot. Again, this shows that a FF sensor is four times larger than a m43 sensor. Same AoV, same DoF but different image sizes due to the different sensor sizes.

Now let’s be stupid for a while and assume that equivalence is correct 🙂 In order for the single shot image and the stitched image to have the same total amount of captured light, we will have to underexpose each of the four individual shots that make up the larger image by two stops. Since these four images are now much darker, we will have to bump their ISO by two stops to arrive at the same brightness as the single shot image. At this point, we have two “equivalent” images: the smaller, properly exposed m43 image and a larger full frame image produced by stitching four underexposed m43 shots.

Common sense will tell you that the larger stitched image is every bit inferior to the single shot image. Two stops inferior, to be exact. If you sample a quarter chunk of that larger image it will always turn out much worse than the reference m43 shot. Take a quarter chunk from the top, bottom, sides, or center and every single one of them will look much, much worse than the original properly exposed m43 shot. We can therefore say that the larger image is inherently inferior to the single shot m43 image. So how can equivalence proponents honestly say that the underexposed FF shot is “the same” as a properly exposed m43 shot? You don’t need half a brain to realise that this is plainly stupid.

The stupidity does not stop here though. The equivalence-fu followers have something else to support their “theory”. They suggest that if you print or view the smaller properly exposed m43 image and the larger severely underexposed FF image at the same size, they will look exactly the same. Well, maybe they would look the same up to a point. Recall that when you view or print an image at a smaller size than its original size, the effects of downsampling kick in and result in less perceived noise. This, however, has absolutely nothing to do with light gathering. As we have shown in our example, the underexposed FF image would be much, much darker than the reference m43 image if it were not for the ISO bump. Equivalence proponents are using image size to circumvent the destructive effects of underexposure, and they think that image size and light are one and the same. Image size has nothing to do with light. A 41Mp Nokia phone camera has a larger image size than a full frame 36Mp D800 although the former captures far less total light. This is why, if you are not careful, these equivalence-fu “photographers” will easily mislead you.

Let’s take this circus show to a higher level. Assume that total light and image size really are equivalent and related. In that case we could, in a sense, NOT increase the ISO of the underexposed full frame image but instead downsample it to the same size as the m43 image, and it should come out at the same brightness, right? After all, the same total amount of light has now been projected onto the same image area, which should result in the same exposure (total light over total area). But we know this doesn’t work, because downsampling or upsampling has no relationship to total light, and that is why the downsampled FF image remains two stops darker. So how could equivalence proponents honestly equate total light and image size? :-O

So now we know that equivalence-fu relies on resampling to work around underexposure. Does this always work? No, it doesn’t. If you recall the discussion in the “Understanding Exposure” article linked above, bumping up the ISO does not increase light. It only increases gain. The analogy was the process of boiling water: boiling pushes water to the top of the container but it does not increase the amount of water. If you underexpose far enough, you will reach a point where no light is captured at all. It’s like a container with no water: bumping the ISO, like boiling an empty container, does absolutely nothing. Image noise is more pronounced in darker areas, and underexposure will only worsen the noise in those areas. When you have no signal, there is nothing to resample. Downsampling will not always save you.

The nasty effects of bumping up the ISO cannot be ignored. Increasing the ISO will also result in hot pixels, banding and other nasty artifacts. Why do you think cameras are limited in how high you can set the ISO sensitivity? Why can’t we bump the ISO indefinitely? Because the truth is, high ISO sucks regardless of sensor size. Imagine an ISO 6400 shot from a m43 Olympus E-M5 compared to an ISO 25600 shot from a full frame Nikon D800. How much worse does it get if you now compare a point-and-shoot camera with a 5x crop factor to that D800? Five stops of underexposure is A LOT and really bad. I mean really, try underexposing a night shot on your D800 by 5 stops then bump it up in Photoshop. Crash and burn baby!

If you think that’s bad then consider shooting with slide film. How big is a sheet of film for an 8×10 view camera compared to a measly 35mm frame? For the sake of argument let’s just say the size difference is worth 5 stops. Do you really believe that if I shoot Fuji Velvia on 35mm, and then underexpose Velvia on the 8×10 camera by five stops and push it during development, the images will look “the same”? If this were negative film then maybe you could get away with it, but don’t even attempt that kind of circus act with slide film. Slide film is very unforgiving when it comes to exposure. Five stops is almost the entire usable dynamic range of slide film!!! If a photographic “theory” fails miserably with film then that “theory” is simply wrong. In the case of equivalence, it’s bullshit, plain and simple.

So, to answer that dpreview article’s question of whether you should care about equivalence: not if it’s wrong and stupid.


I can’t believe that people keep on spreading this nonsense. Here’s another funny equivalence-fu fauxtographer: equivalence for embeciles

Examine his illustration on the effect of different apertures, f/8 and f/4. He is totally oblivious to the effect of focal length on light intensity. Note that although the f/8 and f/4 lenses here have the same physical aperture size, the longer focal length of the f/8 lens spreads the light over a much wider area at the sensor. The net effect is that each sensel behind the longer f/8 lens receives far fewer photons than the sensels behind the shorter f/4 lens. The result is underexposure, which is seen as a darker image. Two stops (or 4x less light) of underexposure, to be exact. This obviously corresponds to noisier sensel output and therefore a noisier image.

How can two images with different exposures be equivalent?! Such an idiotic explanation is the result of the epic failure to understand very basic photography. Exposure is totally independent of sensor size. The same f-stop results in the same total number of photons per sensel regardless of imaging format. Always. Same f-stop means same exposure meaning the same brightness.
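The two-stop figure quoted above follows directly from the fact that light intensity at the sensor scales as 1/N² for f-number N. A quick sanity check (the function name is mine):

```python
import math

def stops_difference(f1, f2):
    """Exposure difference in stops between two f-numbers at the
    same shutter speed; intensity at the sensor goes as 1/N^2."""
    return 2 * math.log2(f2 / f1)

print(stops_difference(4, 8))   # 2.0 -- f/8 passes 4x less light than f/4
```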

Olympus Stylus 1

A few months ago I joined the Asia-Oceania Olympus Grand Prix photography contest. There were two categories: landscape and camera effects. The latter requires that you use the built-in camera effects in your shot and this is where I won the Stylus 1. You can find the winning entries here.

To be honest, I did not expect much from a point-and-shoot camera. I had read the specifications of the Stylus 1, and among its many features I was most curious about the lens: a 28-300mm (full frame equivalent) zoom with a constant f/2.8. This kind of lens is unheard of. You can’t find a lens with this specification anywhere, not even in the CaNikon world. This feature alone got me really excited. It would be the perfect travel camera if it performed well.

As soon as the camera arrived, I immediately tested it using whatever charge was left in the factory-sealed battery. The first thing that caught my attention was the aperture ring on the lens. This is absolutely awesome. It feels like shooting with my Nikon FM3A film camera with AiS lenses again. Heck, this is even better than the overhyped Nikon Df’s very clumsy implementation of the aperture adjustment dial. I was over the moon! Hey, it’s got an electronic viewfinder too that rivals the size and quality of my high-end E-M5’s. This camera is big on features and it still fits inside my jacket pocket!

It took me a few days to seriously test my camera. My day job was getting in the way of fun LOL! When I finally managed to go out during lunch break I walked around the city to capture some shots. Let’s have a look at how this tiny camera performed …

Note that the images I’m presenting here are all JPEG shots straight from the camera with absolutely no editing done. No cropping even. I just had to rotate the portrait-oriented shots in Snapseed because my iPad (which I’m using to type this) does not recognise the rotation info. They were also all shot handheld. Click on the images for a larger view.

Low light shooting is the main weakness of P&S cameras due to their small sensors. It made sense for me to try shooting inside the church that was close to my office to see how this thing performs.


That’s ISO 800 at f/2.8. I really like how it handled the colours, the highlights and shadows. It’s quite sharp too.

After work, I took some evening shots on the way home:




This camera is the perfect travel companion so I brought it during our recent trip to the Snowy Mountains and Victoria. Here are some of the shots that I took.




I especially like the way it handled the backlit shots. I could not see any posterisation or nasty abrupt highlight clipping at all. The gradation is very smooth. Note the absence of flare as well.

Let’s see how it does bokehlicious shots:



It’s not a bird photographer’s dream setup, but for a casual snap I quite like it.

Overall, it’s a really nice camera. I dare say that if I were to travel for a few months and bring just one camera and one lens, I would seriously consider this over my full frame Nikon or any other camera. The 28-300mm f/2.8 is just too convenient to leave behind. The built-in wifi lets me remotely control the camera and transfer photos directly to my iPhone for easy sharing to social media. It is what a travel camera should be. The good thing is that it’s small, so I really don’t have to make that decision. I can bring it anytime, anywhere, together with my other bulkier cameras. It’s a no-brainer.

The Stylus 1 isn’t perfect though. Autofocus starts to hunt in low light. The electronic zoom is not precise; it’s just like any other P&S with jerky, “gappy” zoom movements. Other than that, I can’t fault this camera at all.

Would I recommend that you get one? At $699, you must think hard about whether you really need that big a zoom range, even though this camera is quite sharp at full zoom and wide open at f/2.8. For that price, you can get a decent m43 camera kit or even an entry-level APS-C DSLR. Remember, though, that there is no way you can get a 300mm f/2.8 lens, much less a zoom, for $700. That’s just not possible, at the moment, outside of the Stylus 1. I’m just very lucky to have gotten this camera for FREE.

Tempting? You decide.

Sony NEX6 Low-light Test

For the past few weeks I have been carrying the NEX6 to work every day. I like how light it is and how it fits my small hands. I am a very tactile person and this is very important for me.

I have already blogged about this camera not long ago. Again, this isn’t a review. I don’t think you can even buy this camera brand new anymore; it has been discontinued and superseded by the A6000.

Anyway, I took the camera out for some evening shots on my way home from work. All of the images below were shot handheld, some of them at ISO 6400. Compared to the NEX6, I still prefer how my other small 16Mp camera, the Olympus E-M5, handles high ISO grain. Again, proof that it’s not just the size of the sensor that matters but the overall package, including in-camera noise reduction and the JPEG engine. IMHO, nothing beats Olympus when it comes to JPEG processing. The NEX6 though is also a good performer. I like its JPEG handling better than my Pentax K5IIs’ and even my full frame Nikon D700’s (the worst of the bunch in my opinion).

Here are some of the shots that I took in a span of one hour within a radius of about 200 meters. These are all completely UNprocessed JPEG shots straight from the camera in full resolution using only the 16-50 collapsible kit lens that came with it. No cropping, no retouching. Just the pure goodness of the NEX6.


In-camera black-and-white conversion looks good as well:


I had to push harder when it turned completely dark. Here are some shots of the casino.




That’s it for now. Next time, I’ll see if I can push it even harder 🙂

Megapixel Hallucinations

If you are here to understand (why) equivalence (is wrong) then read this:

This post is practically a continuation of one of my controversial posts debunking the myth of full frame superiority. In that previous post I discussed why full frame is actually no better than its crop sensor counterpart (Nikon D7000 vs D800) in terms of light gathering capability. Now I will discuss another aspect of full frame superiority and explain why it leads people to believe that full frame is superior to smaller sensor cameras when in fact it is not.

A common source of sensor performance data is DXOMark. This is where cameras are ranked in terms of SNR, DR, colour depth and other relevant aspects of the equipment. It is important to note that data from this website should be properly interpreted instead of just being swallowed whole. That is what I will try to cover in this post.

One of the most hotly debated pieces of information from DXOMark is low light performance, which is measured in terms of Signal to Noise Ratio (SNR). SNR is greatly affected by the light gathering capacity of a camera’s sensor, which is why it is commonly used to compare the low light performance of full frame and crop sensors. It is also the data most misinterpreted by full frame owners, who use it to justify spending three times as much for practically the same camera. Let’s see why this is wrong…

Consider the following SNR measurements between the Nikon D7000 and D800:


Isn’t it quite clear that the Nikon D800 is superior to the D7000? Did I just make a fool of myself with that “myth debunking” post? Fortunately, I did not 🙂 I’m still right. That graph above is a normalised graph. DXOMark is in the business of ranking cameras and that is why they are forced to normalise their data. Let’s have a look at the non-normalised graph to see the actual SNR measurements:


Didn’t I say I was right? 🙂

The Nikon D7000 and D800 have the same low light performance! That is because they have the same type of sensor. The D800 is basically just the D7000 enlarged to full frame proportions. Simple physics does not lie. A lot of “photographers” have called me a fool for that “myth debunking” post. Well, I’m not in the business of educating those who are very narrow-minded so I will let them keep believing what they believe is true. But some of us know better, right? 🙂

Let’s not stop here. Allow me to explain why the normalised graphs are like that.

Let me tell you right now that DXOMark is unintentionally favouring more megapixels. That’s just the inevitable consequence of normalisation. Unfortunately, those who do not understand normalisation use this flaw to spread nonsense. The normalised graphs are not the real measured SNR values but computed values based on an 8Mp print size of approximately 8×10. The formula is as follows:

nSNR = SNR + 20 x log10(sqrt(N1/N2))

where nSNR is the normalised SNR, N1 is the original image size and N2 is the chosen print size for normalisation. In the case of the Nikon D800, N1 = 36Mp, and for the D7000, N1 = 16Mp. Both are normalised to a print size of N2 = 8Mp. Based on that formula, the D800’s SNR improves to 44.93 from a measured SNR of 38.4. The D7000, though, improves only a tiny bit, to 41 from 38. As you can see, although both cameras started out equal, the normalised values now favour the D800.
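Those numbers are easy to reproduce from the formula. A minimal sketch, using the dB values and megapixel counts quoted above:

```python
import math

def normalised_snr(snr_db, native_mp, print_mp):
    """DXOMark-style normalisation described above: SNR measured at
    the sensor's native size, recomputed for a given print size."""
    return snr_db + 20 * math.log10(math.sqrt(native_mp / print_mp))

# Normalised to an 8Mp print:
print(round(normalised_snr(38.4, 36, 8), 2))   # D800: 44.93
print(round(normalised_snr(38.0, 16, 8), 2))   # D7000: 41.01
```

Setting `print_mp` equal to the native size makes the correction term zero, which is why printing at native size recovers the measured SNR.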

This increase in SNR is not because the D800 has better light gathering capability. This apparent increase in SNR is due to downsampling. It’s due to the larger image size and not because of better light gathering capability. Unfortunately, this computed SNR is what the full frame fanbois are trying to sell to uninformed crop sensor users. It is the REAL measured SNR that matters and we will learn later on how important this is compared to just more megapickles.

Go back to that normalisation formula and note the term inside the square root (N1 / N2). Note that if N1 is greater than N2 then the log10 becomes a positive number and the whole term adds to the measured SNR. The term drops to zero for N1 = N2 and that’s why when a D800 image is printed at 36Mp, the SNR is the measured SNR. Same goes for the D7000 when printed at 16Mp. That is why when I blogged about noise performance comparisons I kept repeating that images should be printed at their intended sizes. That’s the ONLY fair comparison. Downsampling is cheating. You do not want to buy a 36Mp camera so you could print it at 8×10. That is an absolute WASTE of money.

The idiots will of course justify this by saying “well, the good thing with having a larger image is that you can downsample it and it will outperform a smaller image“. Well, not so fast, young grasshopper. That is not true. We know that a larger SENSEL size generally results in better light gathering capacity (Rain Can Teach Us Photography), although it means a smaller image size. Let’s consider the D800 vs D4:


So the real SNR shows the D4 (42.3) being superior compared to the D800 (38.4). Again, when normalised to a 8Mp print, the D800 somehow “catches up”:


Unfair, isn’t it? Well, only for smaller prints. Using the same formula to compute the SNR for a 16Mp print, the D4 drops to its real measured SNR of 42.3 while the D800’s SNR drops to 41.92. So now the D800 is inferior to the D4! How about a 36Mp print? The D4 drops to 38.77 and the D800 drops to its real measured SNR of 38.4. The 16Mp D4 upsized to a whopping 36Mp print BEATS the D800 at its own game!!!

In the comparison above between two full frame cameras we see that even if the total amount of light, which is proportional to the sensor size, does not change, variations in SNR can occur if resampling is added into the equation. Clearly, total light and resampling are unrelated. Just because one sensor has better noise performance at a given print size does not imply that it has better light gathering capacity. If 8Mp was the only print size we could make, one would think that the D800 is every bit as good as the D4. This is clearly not the case at larger print sizes where the D4 outshines the D800. The same argument can be said for comparisons between sensors of different sizes. Sensor performance should not be judged based on arbitrary print sizes. Sensor performance must be taken at the sensor level. 

Think about it: every time you print smaller than 36Mp, you are WASTING your D800. Who consistently prints larger than 16Mp or even 12Mp? As you can see, the superior 16Mp sensor makes a lot more sense. The D800 is a waste of space, time, and money.

In essence, a 16Mp sensor, be it full frame or crop can beat the 36Mp D800 if it has high enough SNR. The crop sensor need not match the superior D4 sensor. A 16Mp crop sensor with the same SNR performance as the 7-year old Nikon D700 will beat the D800 at print sizes of 16Mp and higher.

Let’s summarise what we have covered so far:

0. DXOMark data needs to be interpreted carefully. Better SNR performance in normalised data does NOT imply better light gathering capacity of full frame sensors; it is merely a consequence of a larger image size in terms of megapixels.

1. DXOMark normalises their data because they are in the business of ranking cameras.

2. Normalisation to a small print size unintentionally favours sensors with more megapixels.

3. More megapixels do not necessarily lead to superior SNR when downsampled.

4. At larger prints (16Mp and higher), the weakness of the 36Mp D800 sensor begins to show.

5. A good quality crop sensor camera with lesser megapixels can beat a full frame camera with insane megapixels.

Do you believe me now?

Understanding Exposure

This is a continuation of my previous post where I used the analogy of a rain gauge (Rain Can Teach Us Photography) to understand photographic exposure. Here we dig deeper into what photographic exposure really means in terms of real photography.

You might have heard of the concept of exposure triangle. This concept explains the interplay between three independent aspects of photographic exposure namely:

1. f-stop

2. shutter speed

3. ISO sensitivity

Item #1 is often incorrectly referred to simply as aperture. Although the f-stop involves the aperture, equating the two is photographically wrong because aperture alone totally ignores the effect of focal length on the intensity or amount of light that hits the sensor. I have covered this in detail in this post: Understanding Your Lens (Part 3). Please read that post if you have difficulty understanding this concept.

The main reason I am discussing this supposedly understood-by-all-photographers concept is that it’s actually misunderstood by a lot of photographers – myself included, until I did some reading and conducted experiments. Before I start, I would like to acknowledge a friend, Dan Bridges, for introducing this concept and helping me understand it. It is not really a difficult concept to comprehend but it will surely change the way you think, because we are accustomed to thinking in terms of the exposure triangle.

Let me start by saying that the exposure triangle is not entirely correct. Yes, you read that right. In fact, when you talk about exposure it’s really just the first two items: f-stop and shutter speed. ISO is not really a part of exposure and you will soon understand why … hopefully.

Begin by understanding that the amount of light in the scene that you are trying to capture is fairly constant throughout the entire exposure. Unless you are doing very long exposures in a disco bar or covering a concert gig, the ambient light is practically unchanging. Therefore whatever light that comes into the sensor chamber is basically controlled by your chosen f-stop and shutter speed.

Let me repeat that: the amount of light hitting the sensor is only affected by f-stop and shutter speed.
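This is exactly what the standard exposure-value (EV) formula says: only the f-number N and the shutter time t appear in it, and ISO appears nowhere. A minimal sketch:

```python
import math

def exposure_value(f_stop, shutter_s):
    # EV = log2(N^2 / t): only f-stop (N) and shutter time (t) appear.
    # ISO is not an input, matching the point made above.
    return math.log2(f_stop ** 2 / shutter_s)

print(round(exposure_value(5.6, 1 / 125), 1))  # f5.6 at 1/125s, about EV 11.9
```

Closing the aperture by one stop (f5.6 to f8) or halving the shutter time each raises the EV by one; no ISO setting ever enters the calculation.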

So why is ISO sensitivity not part of the equation? Because for a particular camera, ISO sensitivity is constant. You do NOT have any control over it. Surprised?

The immediate reaction is, “of course I can control my ISO”. Yes, cameras let you change the ISO but you are not really changing the ISO. It’s not real. You are led into thinking that you are changing the ISO when in fact you are not. What you are changing is not the ISO sensitivity of your sensor but the brightness of the image. What you are changing is technically the gain.

The ISO sensitivity of any given camera is FIXED at manufacturing time. This is called the native or base ISO. It cannot be modified at all. It is very important, then, that you know the base ISO of your camera. Read your camera’s manual. The Nikon D700 for example has a base ISO of 200 while the D800 has a base ISO of 100. We say that the D700’s sensor is more sensitive than the D800’s. If you think of sensors as water containers, differences in base ISO are like differences in the height of the containers: same opening size but different heights. A sensor with a higher base ISO is like a shallower container. It means that an ISO 200 container will fill up quicker than an ISO 100 container. Here’s an illustration:


Allow me to explain further. If you pour water at the same rate into two containers whose only difference is their height, they will obviously have the same water level after any given time. Photographically speaking, same water level means same light level, meaning same exposure. However, if you continue pouring at the same rate, there will come a time when the shallower container overflows and water starts spilling. As we have said, a higher base ISO is like a shallower container. The sensor with the higher base ISO will overexpose sooner than a sensor with a lower base ISO. Water (light) will spill more quickly from the shallower container (higher base ISO). This means that if, for a given ambient light, f5.6 at 1/125s is just enough to fill an ISO 100 sensor to its brim, the same f5.6 at 1/125s will overexpose the ISO 200 sensor, causing light to spill somewhere else (blown highlights).
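The overflow argument can be put into toy numbers. Purely as an illustration (real sensors are not literal containers, and the helper below is my own invention), assume the container’s capacity is inversely proportional to its base ISO:

```python
def fill_fraction(light_units, base_iso, reference_iso=100):
    # Illustrative assumption only: capacity shrinks as base ISO rises,
    # so an ISO 200 "container" holds half as much as an ISO 100 one.
    capacity = reference_iso / base_iso  # ISO 100 -> 1.0, ISO 200 -> 0.5
    return light_units / capacity

light = 1.0  # the same pour of light: exactly fills the ISO 100 container
print(fill_fraction(light, 100))  # 1.0 -> exactly full, pure white
print(fill_fraction(light, 200))  # 2.0 -> overflow: blown highlights
```

Identical exposures, different outcomes: the ISO 100 container is exactly full while the ISO 200 container has overflowed, which is the blown-highlights case described above.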

As you can see, exposure is really NOT about the ISO but the amount of light that gets into the sensor. The base ISO is more of a warning label telling you not to overexpose your sensor or else light will start to spill. If a water container says it has a 100ml capacity, you would not want to pour 200ml of water into it. Makes sense?

Recall that base ISO is fixed and it can NOT be modified. It follows that the only way to control exposure is by f-stop (container opening) and shutter speed (total time that you are pouring water into the container).

Now let’s try to understand brightness. This is different from exposure. The definition of brightness is as camera-specific as that of base ISO. For the sake of simplicity, let’s say that an empty sensor produces a pure black image and a completely filled up sensor produces a pure white image. Since different sensors with different base ISOs fill up at different rates, it follows that they have different definitions of what is black or what is white.

Now here is an interesting outcome: Supposing that f5.6 at 1s is just enough to fill an ISO 200 sensor. It means that for that sensor, f5.6 at 1s produces a pure white image. That same f5.6 at 1s though is not enough to completely fill up an ISO 100 sensor. Therefore the same exposure will produce a slightly darker image for the ISO 100 sensor simply because the sensor is not completely full. Note that they have exactly the same amount of gathered light but they are producing totally different images. To produce the same pure white image, the ISO 100 sensor will have to be exposed longer at 2s for the same f-stop of f5.6.
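The 1s-versus-2s arithmetic in this example follows directly from the container picture: light collected grows linearly with shutter time, and halving the base ISO doubles the container. A tiny sketch (the helper name is mine, and this is the idealised container model, not a real sensor):

```python
def required_shutter(target_iso, ref_iso, ref_shutter_s):
    # Light collected scales linearly with shutter time, so halving the
    # base ISO (doubling the container) doubles the time needed to fill it.
    return ref_shutter_s * ref_iso / target_iso

# f5.6 at 1s just fills an ISO 200 sensor; how long for ISO 100 at f5.6?
print(required_shutter(100, 200, 1.0))  # 2.0 seconds, matching the text
```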

Here’s another interesting fact: You can actually make the ISO 100 sensor in the example above produce a pure white image at the same f5.6 at 1s exposure. How? By artificially filling up the sensor until it’s full by adding “something”. If light in a sensor is like water in a container, you can make the water reach the top by boiling it. The amount of water will be the same but the act of boiling it has made it fill the container to its brim (and possibly spilling some of it). This “act of boiling water” is what happens when you increase the ISO in camera.

Increasing the ISO in camera does NOT add light to the sensor at all. It does not increase the exposure. It only artificially fills up the sensor with something. It boosts the signal. Unfortunately boosting the signal boosts everything including noise. The problem with increasing the ISO is not the act of boosting the signal itself. The main problem is that sensors have inherent noise in them already – signal or no signal. In darker areas where there is no light (signal), noise is still present. That is why if you boost the darker areas of an image, what you are boosting is just noise because there is no signal. Noise is more pronounced in darker areas of an image at high ISOs. This is why you do not test the high ISO performance of your camera in good light. That’s cheating. You should test high ISO in low light.

And now we finally arrive at an interesting consequence. Supposing that you are shooting in low light and you have chosen ISO 1600 so that you can hand-hold your camera at a shutter speed of 1/125s at f5.6. Since exposure is only affected by f-stop and shutter speed, you can actually shoot at your base ISO, say ISO 200, at the same 1/125s at f5.6 without affecting the final image. Of course, when you look at your camera’s LCD, the image will be very dark and you probably won’t see anything. However, when you get to your computer, you can use the exposure slider in Lightroom or Photoshop to boost the signal and arrive at the same image as the camera-boosted ISO 1600 image. The reason this is possible is that the exposure is the same: it’s still f5.6 at 1/125s. You either choose to boost the signal in camera by increasing the ISO, or shoot at base ISO and boost the signal later in the computer. The advantage of doing this boosting in the computer is that modern software is smart enough not to boost highlights that are near the clipping point. Your camera is not that smart and it will boost everything, thus causing blown-out highlights.
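For an idealised, ISO-invariant sensor this equivalence is easy to sketch: a 200-to-1600 ISO boost is just an 8x gain, whether it is applied in camera or in software. The model below is a deliberate simplification of my own (real raw pipelines are more involved, as the disclaimer that follows notes):

```python
def apply_gain(raw, base_iso, target_iso, white_point=1.0):
    # Boosting ISO 200 -> 1600 is a gain of 1600/200 = 8x. Values pushed
    # past the white point clip, i.e. become blown highlights.
    gain = target_iso / base_iso
    return [min(v * gain, white_point) for v in raw]

# A dark frame shot at base ISO 200, f5.6, 1/125s:
dark_raw = [0.01, 0.05, 0.10, 0.20]
print(apply_gain(dark_raw, 200, 1600))  # the last value, 0.20 * 8, clips at 1.0
```

The clipped last value is exactly the blown-highlight case the paragraph warns about; highlight-aware software would hold such values back instead of naively multiplying everything.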

Disclaimer: that last paragraph is not always true. Some sensors behave differently. Sensors are actually more complicated than just a simple container so experiment with your camera.

This discussion won’t be complete without covering the fake low ISOs found in many cameras. For example, the Nikon D700 has a base ISO of 200 but it also has Lo 1, which is equivalent to ISO 100. This lower fake ISO allows you to shoot at longer exposures. Since we know that the sensor’s ISO is fixed, fake low ISOs won’t actually gain you anything. The longer exposures will only cause highlight areas to blow out. This is no different from applying the same ISO 100 exposure, say f5.6 at 1/125s, to an ISO 200 sensor: the ISO 200 sensor will be overexposed by a stop. The same thing happens when you use a fake low ISO and increase the exposure. Your sensor will be effectively overexposed.

To summarize everything:

1. Exposure is only affected by f-stop and shutter speed.

2. A camera’s base ISO is more of a warning label saying: do not push your exposure beyond this point. It is a fixed value.

3. Increasing the ISO in camera only boosts the brightness of the resulting image. It does not increase the sensitivity of your sensor. With some cameras, you are better off boosting the image later using a photo editing program.

4. Fake low ISOs will do you no good. If you need a longer exposure, then use a longer exposure at your base ISO. The consequences will be the same: blown highlights.