Rain Can Teach Us Photography

Before I start the discussion, I would like to refer you to my previous posts because I have already covered this concept extensively:



If you have read and understood those posts, then you can save yourself some time by not reading this one… although I don’t mind if you do, because this post looks at sensors from a different perspective. I will try to cover some technical aspects in the form of analogies.

I have noticed that not everyone who carries a camera actually understands photography, so I’m hoping that my explanations here will help. I will use rain as an analogy; I assume everyone has experienced rain and understands it.


Do you know how the amount of rainfall is measured? They use a device called a rain gauge. It’s a very simple device and anyone can make their own. The simplest rain gauge is a basic straight-sided container that looks something like this:


All you have to do now is wait for rain over a period of time and then measure the height of the collected rainwater. Really simple. Now you might wonder why I did not specify any measurements. How big should the container be? Surely, a larger container will gather more rain! That is correct. A larger container will collect more rain, but the rainwater level will remain the same. Why? Because a larger opening also has a larger volume to fill. That’s why rainfall is measured in mm (height) and not ml (volume). Here’s some very simple math that explains this:

Volume = area of opening x height of rainwater


height of rainwater = Volume / area of opening

Notice that if you increase the area of the opening, you increase the collected volume in the same proportion, so the height remains the same.
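To make the cancellation concrete, here is a small Python sketch; the rainfall rate and container sizes are made up purely for illustration:

```python
# Rain depth is independent of the container's opening size:
# the same rain delivers a volume proportional to the opening area,
# so depth = volume / area cancels the area out.

RAINFALL_RATE_MM_PER_HOUR = 10  # hypothetical steady rain
HOURS = 2

for area_cm2 in (10, 100, 1000):  # three different container openings
    # volume collected (cm^3): rate (mm/h -> cm/h) * opening area * time
    volume_cm3 = (RAINFALL_RATE_MM_PER_HOUR / 10) * area_cm2 * HOURS
    depth_mm = volume_cm3 / area_cm2 * 10  # back to mm
    print(f"opening {area_cm2:>5} cm^2 -> depth {depth_mm} mm")
# every container reads the same depth: 20.0 mm
```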

What does this teach us about photography? The concept of a rain gauge is analogous to photographic exposure. First you have your container opening, which is your aperture. Then you wait for rain to fall over a period of time, which is your shutter speed. The proportion of the container height to its opening is your f-stop. A rain gauge is like photography where everybody has agreed to shoot at the same f-stop. The measured rain level is your exposure and determines how bright the image will turn out. As mentioned before, the size of the opening is proportional to the volume, and therefore the measured rain levels are the same irrespective of container size. Recall that an f-stop is the ratio of the lens focal length (container height) to the aperture diameter (container opening). That’s why an f-stop is an f-stop. An f-stop is the same no matter how big your lens is. A 35mm lens at f5.6 will give the same exposure as a 100mm lens at f5.6, although the latter has a much, much larger opening.
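The arithmetic behind “an f-stop is an f-stop” can be sketched in a few lines of Python, using the focal lengths from the example above:

```python
# f-number N = focal_length / aperture_diameter, so diameter = focal_length / N.
# Light per unit sensor area scales with (diameter / focal_length)^2 = 1 / N^2,
# which is why equal f-stops give equal exposure regardless of focal length.

def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

d35 = aperture_diameter_mm(35, 5.6)    # ~6.25 mm
d100 = aperture_diameter_mm(100, 5.6)  # ~17.86 mm

# The 100mm lens has a far larger physical opening...
assert d100 > d35
# ...but the exposure-controlling ratio is identical:
assert abs(d35 / 35 - d100 / 100) < 1e-12
print(d35, d100)
```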

It follows that if we use the same lens on different sensor sizes we get something that looks like this:


The red circle is your lens’s image circle (the area of rain). The rectangles in the middle are your different sensor sizes (rain gauges). From the rain gauge analogy above, you will realize that both sensors get the same exposure (measured rain levels). An f-stop is an f-stop, irrespective of sensor size.

Before you continue with the rest of the discussion, make sure you understand the very basic concepts covered above, because I will now start explaining something that is often hotly debated in forums: sensor noise!

Full frame proponents will tell you that, because of its size advantage over APS-C, 4/3rds and other “crop” sensors, a full frame sensor will have less noise and therefore cleaner output. The basis for this conclusion is the fact that larger sensors gather more light. Let’s discuss this in detail…

If you go back to our rain gauge analogy, it’s quite obvious that a larger container will gather more rain although the measured rain level remains the same. Therefore a larger sensor will gather more light although the exposure remains the same. Now since noise is affected by the amount of light, a larger sensor has less noise. Therefore, full frame is superior.

Well, not so fast, Pedro. A camera’s sensor isn’t really like a rain gauge. A sensor is actually composed of smaller components called sensels, which are like smaller containers within a bigger container. It looks like this:


So now we will have to narrow our analogy down to those smaller containers (sensels) instead of the whole sensor. You can probably see where this is going. Light is gathered by the individual sensels, and therefore noise is NOT determined by SENSOR size but by SENSEL size.

Again, sensor size does not affect exposure (the rain level analogy). An f-stop is an f-stop (the rain gauge analogy) and is not affected by lens focal length or sensor size. Therefore a smaller sensel will have the same exposure as a larger sensel. However, a larger sensel gathers more light and will therefore have less noise. This is why a 12Mp full frame has better noise performance than a 12Mp APS-C sensor. This is also why the 12Mp full frame Nikon D700 has way better noise performance than the 36Mp full frame D800, by virtue of its larger sensels. And this is why the 16Mp APS-C D7000 has the SAME noise profile as the full frame 36Mp D800.
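A rough way to check the sensel-size comparisons above is to estimate the sensel pitch from nominal sensor dimensions and megapixel count. A small Python sketch (the sensor dimensions are nominal approximations, and real pitches differ slightly):

```python
import math

# Approximate sensel pitch (side length, in microns) assuming square sensels
# tiling the full sensor area: pitch = sqrt(sensor_area / pixel_count).

def sensel_pitch_um(width_mm, height_mm, megapixels):
    area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

print(sensel_pitch_um(36, 24, 12))      # 12Mp full frame (D700-class): ~8.5 um
print(sensel_pitch_um(36, 24, 36))      # 36Mp full frame (D800-class): ~4.9 um
print(sensel_pitch_um(23.6, 15.6, 16))  # 16Mp APS-C (D7000-class): ~4.8 um
```

The 36Mp full frame and 16Mp APS-C pitches come out nearly identical, which is the basis for the D800/D7000 comparison in the text.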

And thus, we arrive at the following conclusions:

1. SENSOR size has no effect on exposure.

2. SENSOR size has no effect on noise.

3. SENSEL size ultimately affects noise.

Again, it follows from the above that sensors built from the same sensels will have the same noise profiles (e.g. the Nikon D7000, Pentax K5/K5II and Nikon D800) even if the sensor sizes are different, as long as they are exposed in the same way: same f-stop, same shutter speed. You will find shot comparisons between those sensors at http://dpreview.com and they are in agreement with the conclusions above.

Now you might have read from others about something called equivalency. They say that unless different sensor sizes are exposed equivalently, they will have different noise profiles. For example, an APS-C sensor with a lens set to 35mm/f5.6 is said to be equivalent to a full frame sensor with a lens set to 50mm/f8. Although they have different focal lengths and f-stops, they are equivalent in terms of angle of view, aperture and depth of field, all because of the crop factor of approximately 1.5. While it’s true that the AoV and DoF are equivalent, the claim is that they will also have different noise profiles. Firstly, it’s quite obvious that at the same shutter speed, the 35mm/f5.6 will be overexposed by a stop relative to the 50mm/f8. So if you keep the same shutter speed, you will have to stop down the 35mm to f8, which decreases the aperture size, and somehow this is supposed to affect noise?! We know from the rain analogy and the sensor design discussion above that this is simply UNTRUE! I don’t know why the equivalency proponents keep pushing this concept when, photographically, it does not make sense. This equivalency-fu is like using rain gauges that do not adhere to the same standards, and they end up looking like this:


Notice that they have the same opening diameter of 6.25mm:

35mm / 5.6 = 6.25mm

50mm / 8 = 6.25mm
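For illustration, the equivalency arithmetic being disputed here can be written out in Python; the 1.5 crop factor is the usual APS-C approximation:

```python
# "Equivalent" settings across formats, as the equivalency argument states them:
# multiply both focal length and f-number by the crop factor to match
# angle of view and depth of field. (The post disputes the noise claim,
# not this arithmetic.)

CROP = 1.5  # APS-C-to-full-frame crop factor, approximately

def full_frame_equivalent(focal_mm, f_number, crop=CROP):
    """Scale focal length and f-number by the crop factor."""
    return focal_mm * crop, f_number * crop

focal_ff, f_ff = full_frame_equivalent(35, 5.6)  # roughly 52.5mm at f8.4,
                                                 # i.e. the 50mm/f8 in the text

# Both settings imply the same physical aperture diameter (mm):
assert abs(35 / 5.6 - focal_ff / f_ff) < 1e-9
print(focal_ff, f_ff)
```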

But because of this equivalency brouhaha we now have skewed rain gauges. Notice that you will have to gather rain over a longer period of time for the 50mm/f8 container to arrive at the same rain level as the 35mm/f5.6 container. Full frame proponents think that they are the standard so the illustration above would probably look like this from their perspective:


So now the crop sensors will have to expose at a shorter period of time just so they could abide by the standards set up by the elite full frame shooters.

For me, this is just silly. I feel that all these comparisons between full frame and smaller sensors are nothing but silly justifications for the perceived superiority of a particular sensor. Discussing equivalency is fine as long as it’s still about photography. It’s OK to explain crop factor in terms of AoV or DoF, but when you start using it as a tool to push the perceived superiority of your more expensive equipment, then it’s really just bullshit. Bullshit, and downright wrong and misleading. Stop it.

10 thoughts on “Rain Can Teach Us Photography”

  1. “2. SENSOR size has no effect on noise.”

    Can’t agree with this. Consider D7000 vs D800.


    Look at the SNR (18%) graph. First, select the “Screen” tab, which is a way of comparing sensel performance. Since both cameras have the same sensel size, the curves lie on top of each other.

    Now switch to the “Print” tab, which normalises the 16MP & 36MP sensors to the same output size, i.e. an 8MP A4 printout. (The normalised size itself does not matter for this comparison – what’s important is that both cameras are producing the SAME SIZED OUTPUT image.) Now you can see that, while the sensels in both cameras received the same EXPOSURE, due to the same Scene Luminance, Shutter Speed & F-Stop, the bigger sensor has a better SNR, due to 2.25x more TOTAL LIGHT being captured during the exposure by the 2.25x larger sensor.

    Now if we were to print the 36MP version with 50% bigger sides (i.e. with a 2.25x bigger area) than the 16MP version, then view them at the same distance, both images should have the same amount of visible noise, but one looks bigger.

    Or we could move the bigger printout a bit further back so it subtends the same AOV as the smaller, closer image. Now both would look the same relative size and have the same amount of apparent noise. In effect, we have resized both images to the same visual size.

    But if we print out these two images to the same output size and view them at the same distance, the 36MP version will have less apparent noise.

    With same-sized sensels, the improvement in SNR here is 20·log10(√(36MP/16MP)), which is about 3.5dB.
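That 3.5dB figure can be checked with a couple of lines of Python:

```python
import math

# SNR gain from normalising a 36MP image down to the same output size as 16MP:
# averaging N pixels into one improves shot-noise-limited SNR by sqrt(N),
# i.e. 20 * log10(sqrt(mp_high / mp_low)) in decibels.

def snr_gain_db(mp_high, mp_low):
    return 20 * math.log10(math.sqrt(mp_high / mp_low))

print(round(snr_gain_db(36, 16), 2))  # ~3.52 dB
```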


    Another way of thinking about this noise improvement from downsizing is to remember that users are advised to output noisier images at a smaller display size. So noisy images look OK at web resolution, but look quite noisy if you print them out at A4 or A3 size.

    1. See, the problem with “normalization” is that it rewards more megapixels instead of pixel quality. That is why the same dxomark ranks the D800 above the D700 in high ISO performance. The D700 actually produces significantly cleaner shots than the D800. That is a fact.

      Dxomark has no choice but to normalize their results to a particular print size because they are in the business of ranking cameras. Ranking requires normalization. The results though should not be treated as bible. Noise is ultimately measured at the pixel level as explained in the blog post above.

      Photographers are free to choose quality pixels over more pixels. More pixels do not always win. In fact, in a lot of cases, like landscape photography, more Mp are just a waste of space because of diffraction-limited resolution. The D800, for example, drops to an effective 27Mp at f8 and to 15Mp at f11, which are very common f-stops for landscape. At these settings you have less resolution to play with for downsizing, and pixel quality becomes the more important factor.
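For the curious, a crude diffraction estimate can be sketched in Python. This assumes green light (550nm) and treats the Airy-disk radius as the resolvable spot size; it lands near, though not exactly on, the figures quoted above, since the exact numbers depend on the resolution criterion used:

```python
# Rough diffraction-limited megapixel estimate for a full-frame sensor.
# Assumptions: 550 nm (green) light, resolvable spot ~ Airy-disk radius
# 1.22 * wavelength * f_number. Real-world limits also depend on the
# demosaicing, AA filter and lens, so treat this as a ballpark only.

WAVELENGTH_UM = 0.55
SENSOR_AREA_UM2 = 36_000 * 24_000  # full frame: 36mm x 24mm

def diffraction_limited_mp(f_number):
    spot_um = 1.22 * WAVELENGTH_UM * f_number
    return SENSOR_AREA_UM2 / spot_um**2 / 1e6

print(round(diffraction_limited_mp(8)))   # ~30 Mp
print(round(diffraction_limited_mp(11)))  # ~16 Mp
```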

      If the intention of having more Mp is to downsize the final image then truly it is just a waste of space and processing power. These are resources that could have been used to create cleaner, high quality output at the pixel level.

      1. “See the problem with “normalization” is that it rewards more megapixels instead of pixel quality”

        Demo, no it doesn’t. It ACCOUNTS for the difference in MP and sensor area and ADJUSTS for it, but it doesn’t REWARD more MP. I think you’re too caught up in individual sensel performance. We look at the whole image either on a screen or in a printout, rather than spending time close up doing 100% pixel peeps.

        Smaller sensels tend to perform worse than big sensels, but since we’re interested in the final image, not the sensels, we need some way to compare cameras whose sensors differ in MP, sensor size (i.e. different formats), or both. Otherwise we’re not comparing apples with apples. Same-sized printouts are the fairest way to do this. More MP do not have to be used purely for bigger printouts or more cropping freedom.

        More MP, while a technical challenge, offer benefits besides just resolution. More MP also improve demosaicing accuracy, help when correcting CA, tilt or distortion, and when applying deconvolution correction of focus errors and motion blur, prevent moire in AA-less systems, shrink the size of the noise grain, etc. It’s a more accurate sampling of the actual lens output, even when the sensel pitch is smaller than the diffraction limit.

        The increase in MP/shrinking of sensel size creates big problems in colour crosstalk, individual sensel noise and DR/FWC, camera processing power, storage capacity, R/W speed and PC processing requirements. A lot of work is going on in this area:


        I think temporal oversampling (up to thousands of exposures per shot – obviously done with an electronic shutter) and perhaps spatial oversampling (the combination of nearby very small sensels) will eventually become commonplace. See this paper for an interesting glimpse into the future:


        Ultimately the goal will be a sensor with 1-photon sensel (i.e. a photon-counter) in a 1000MP+ sensor, the QIS:



      2. This discussion on downsampling brought back memories of the Fuji Super CCD SR sensors that they equipped their S series DSLRs with. The technology used two photodiodes per photosite to improve dynamic range. This is an example of actually using more “resolution” as a way to improve output. The D800’s resolution was never intended to improve output. The cleaner output at smaller print sizes is just a side effect because the truth is that people don’t really print that big if they ever print at all.

    2. Thanks btw for explaining how dxomark works. I have played around with it a bit and compared the D800, D700 and D4; all full frame cameras. It’s quite interesting and frustrating to learn that after 7 years of “development”, the D4 sensor still can’t beat the D700 sensor in terms of noise performance. That’s both at 100% magnification and normalized 8Mp print size.

      1. “I have played around with it a bit and compared the D800, D700 and D4; all full frame cameras. It’s quite interesting and frustrating to learn that after 7 years of “development”, the D4 sensor still can’t beat the D700 sensor in terms of noise performance. That’s both at 100% magnification and normalized 8Mp print size.”

        I can see significant improvements in these figures over the D700. Combining Sensorgen’s data with my best curve-fit of the Total RN data, so as to separate out the Sensor Read Noise and the ADC Noise components:

        D700:
        Sensel Pitch 8.4µm
        QE 38%
        FWC 58111e-
        Sensor 5.3e-
        ADC 13.5e-

        D800:
        Sensel Pitch 4.7µm
        QE 56%
        FWC 44972e-
        Sensor 2.8e-
        ADC 4.0e-

        D4:
        Sensel Pitch 7.2µm
        QE 53%
        FWC 117813e-
        Sensor 2.1e-
        ADC 19.7e-
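Assuming, as a curve fit like the one described above implies, that the two read-noise components add in quadrature, the total read noise and a per-sensel engineering dynamic range follow directly. A Python sketch using the figures quoted above (labeled here by sensel pitch):

```python
import math

# Total read noise from sensor and ADC components, added in quadrature,
# and per-sensel dynamic range in stops: log2(FWC / total_read_noise).
# FWC and noise figures (in electrons) are the ones quoted above.

def total_read_noise(sensor_e, adc_e):
    return math.hypot(sensor_e, adc_e)

def dr_stops(fwc_e, sensor_e, adc_e):
    return math.log2(fwc_e / total_read_noise(sensor_e, adc_e))

for name, fwc, sensor, adc in [
    ("8.4um sensel", 58111, 5.3, 13.5),
    ("4.7um sensel", 44972, 2.8, 4.0),
    ("7.2um sensel", 117813, 2.1, 19.7),
]:
    print(f"{name}: total RN {total_read_noise(sensor, adc):.1f}e-, "
          f"DR {dr_stops(fwc, sensor, adc):.1f} stops")
```

Note this is base-ISO, per-sensel dynamic range; DxOMark’s published DR values are normalised to a print size, so they will not match these numbers exactly.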

        First off, the D4 & D700 are not Sony Exmor sensors, so this explains a lot of the differences between them and the D800. The D4 appears optimised for high ISO noise performance and shooting speed.

        1. The D4, despite having smaller sensels than the D700, has a better QE (probably due to better micro-lenses), and somehow manages to double the FWC. Notice how Sensor RN is less than half that of the D700. This is very good sensel noise performance.

        2. The D4’s ADC noise is worse. It’s hard to get both speed and lower ADC noise, unless you’re using Exmor technology. Ask Canon.

        3. The D800 has excellent ADC noise performance due to Exmor. Its sensel noise performance is good, but not as classy as the D4.

        Comparing now the DxoMark curves, I’ll stick to the “Print” tab:


        Looking at the SNR 18% curve, it is obvious that the much bigger FWC allows the D4 to go to a lower base ISO than the D700, which really stops at ISO200. This better FWC contributes to a 4.2dB better SNR performance at base ISO. The better QE is not obvious here.

        Looking at the DR graph, the superior ADC noise performance of the D800 is on show. The lack of flattening of the DR curve at low ISO is due to very low ADC noise.

        However at high ISO, where sensor RN dominates, the D4 DR curve is clearly superior. The DR here is also helped by the high initial FWC. While this reduces with each extra stop of analogue gain, since it starts with such a high initial FWC value, the Ssat (system saturation due to the ADC reaching Full Scale, instead of the sensor reaching FWC) is still quite large. And either the FWC or Ssat divided by the Total RN gives the DR value, so that’s why it’s so good here.

        Finally, switching to the Scores page and looking at the Sports/LL score, which is usually the ISO at which the SNR falls to 30dB (DxOMark’s idea of the lower limit of a “good” visible noise level), you can see that the D700 has a 1/3 stop lower “useable” ISO before reaching this noise quality limit. This is probably mostly caused by its lower QE, since all 3 cameras have same-sized sensors.

      2. What I’m reading here is that there isn’t really a predictable pattern as to how a sensor might behave. With all these params it looks like pixel pitch and QE are the major contributors to performance.

  2. “It ACCOUNTS for the difference in MP and sensor area and ADJUSTS for it, but it doesn’t REWARD more MP.”

    While normalization accounts for the differences in output image size, there is an unintended side effect of favouring more megapixels. It may not be the intention of dxomark but it certainly is real. That is something that can’t be avoided and something that should be taken into account when choosing a camera.

    “We look at the whole image either on a screen or in a printout, rather than spending time close up doing 100% pixel peeps.”

    True. We take into consideration how big we actually want to print when we decide on a camera. If you want to print at 12×18 max, then 12Mp is more than enough. If you want something larger, then maybe you should go with 36Mp. But you have to understand that the performance of your chosen camera can only really be tested when you print at the INTENDED SIZE that you bought it for.

    My point is better understood with extreme examples, like a 12Mp point-and-shoot versus a 12Mp D700. If we print at 4×6, they would be indistinguishable from each other. At 12×18, the D700 will show a significant improvement in quality. Now if you were to compare a D700 and a D800, it makes sense to compare them at their intended print sizes, say 12×18 vs 24×36. Anything smaller than that and they become indistinguishable from each other, as you can see from the dxomark graphs of the 8×10 prints. I would even argue that at 12×18, the intended print size of the D700, the D800 has no advantage whatsoever.

    Every time you print at a size smaller than the maximum capability of your camera, you are basically wasting it. The point is that the insane resolution of the D800 is practically just a waste of resources at print sizes of 12×18 or smaller. For most people, that means wasting it all the time. Downsampling is really just a clever way to make use of otherwise wasted image real estate. It was never the intention of having such a huge resolution.
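The print-size argument can be made concrete with a quick pixels-per-inch calculation. A Python sketch, assuming a 3:2 aspect ratio; the commonly cited “photo quality” range of roughly 240-300 PPI is a rule of thumb, not a hard limit:

```python
import math

# Approximate print resolution (pixels per inch along the long edge)
# for a given megapixel count and print long side, assuming a 3:2 image:
# long_edge_pixels = sqrt(MP * 1e6 * 3/2).

def print_ppi(megapixels, long_side_in, aspect=3 / 2):
    pixels_long = math.sqrt(megapixels * 1e6 * aspect)
    return pixels_long / long_side_in

print(round(print_ppi(12, 18)))  # 12Mp at 12x18 in: ~236 PPI
print(round(print_ppi(36, 18)))  # 36Mp at 12x18 in: ~408 PPI
print(round(print_ppi(36, 36)))  # 36Mp at 24x36 in: ~204 PPI
```

At 12×18, the 36Mp file delivers far more PPI than most print processes can show, which is the sense in which the extra resolution is “wasted” at that size.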

  3. “While normalization accounts for the differences in output image size, there is an unintended side effect of favouring more megapixels.”

    I can’t agree with that. A smaller sensel pitch (more MP on the same-sized sensor) will have a smaller FWC. Nobody is disputing that:

    from http://www.clarkvision.com/articles/digital.sensor.performance.summary/#full_well

    But since there are more of them, their individually worse noise performance is not the end of the story, since image noise reduces as more MP are combined in an image. Ideally, for the same fill factor, both tendencies should cancel each other out, so you’re left with the computational and sampling benefits I’ve already mentioned for more MP, as long as you have enough CPU power and storage speed/capacity to handle the extra data.

    So I don’t think of normalisation as favouring more MP, at all. It’s really bypassing the issues of how many MP and how big a sensor, and just comparing both sensors at a similar output size. How is that favouring more MP?

    I don’t believe everyone gets a camera with more MP just to print out bigger and bigger prints. John Sheehy has calculated that to avoid resolution loss and demosaicing artifacts in all the colour channels of an FF-sized sensor, with high-quality glass, the sensor should be over 100MP. The idea is to capture the lens output with the minimum fingerprint added by the sensor.

    I don’t believe 12MP FF sensors were the end of the line as far as image quality is concerned.

    1. What would the camera manufacturers do with 100Mp? If the D800 is any indication it looks like the consumers are left alone to figure out what to do with all those megapixels.

      In terms of max resolution, the 16Mp D4 has managed to meet the noise performance of the D700 after “only” 7 years of research and development. I did not imply that 12Mp is a dead end but it certainly set the bar high enough that even the most modern sensors still can’t surpass it.
