Debunking Equivalence

What is equivalence? If you haven’t heard this term used in photography before, don’t worry; you haven’t missed anything. (Part two is here)

If you are curious though, it simply means that different formats or sensor sizes require different settings in order to produce “the same” or equivalent images. Equivalence proponents usually use the 35mm full frame sensor as the “reference standard”. For example, for an m43 sensor and a full frame sensor to have the same angle of view (AoV), the m43 will have to use a 25mm lens and the full frame a 50mm lens, because the m43 sensor is smaller; four times smaller in area, to be exact. It doesn’t end there. Since a 25mm lens has a shorter focal length than a 50mm lens, there will be differences in depth of field (DoF). The shorter 25mm will have to shoot at f/4 to get the same DoF as a 50mm at f/8.
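The scaling just described can be sketched in a few lines. This is only a sketch: `ff_equivalent` is a hypothetical helper, and the linear crop factor of 2 for m43 is the figure used in this article.

```python
# A minimal sketch of the "equivalent settings" scaling described above.
# Assumes an m43 linear crop factor of 2 (sensor area is 4x smaller).
def ff_equivalent(focal_mm, f_stop, crop=2.0):
    """Return the full frame focal length and f-stop said to give the
    same angle of view and depth of field as the m43 settings."""
    return focal_mm * crop, f_stop * crop

print(ff_equivalent(25, 4.0))  # (50.0, 8.0): 50mm at f/8, as in the text
```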

There are other “parameters” involved in this “equivalence”. For more details, refer to this article in dpreview:

That dpreview article is amusingly titled “What is equivalence and why should I care”. Should you really care about equivalence? Most photographers don’t. Real photographers adapt their shooting technique to the camera they have with them. Give a photographer a mobile phone and he will capture fantastic images without pretending that he is carrying a DSLR. I own a mobile phone, several point-and-shoot cameras, a few m43 bodies, an APS-C camera and a full frame camera. I know exactly what each one of them is capable of and I shoot accordingly. I don’t expect shallow DoF from my iPhone, so every time I shoot portraits with it I take care that the background does not distract from the main subject. Here is an example of how you can capture professional-looking portraits with a simple iPhone 3GS:

Bottom line is, gear does not matter. If gear does not matter, equivalence does not matter.

But let’s not stop there. There is more to that equivalence article. To be precise, there is a lot of incorrect information in that article that is very misleading if you are not careful. The biggest piece of misinformation that equivalence proponents spread in forums is that of “total light captured”. I will try to debunk equivalence in the next few paragraphs.

For the sake of example, let’s compare an m43 and a full frame (FF) sensor. By now you should already be aware that a FF sensor is four times larger in area than an m43 sensor; the m43 crop factor is therefore 2x. It follows that to shoot “the same image” we will have to use different lenses and different f-stops, like so:

m43: 25mm at f/5.6
FF: 50mm at f/11

This will result in the same AoV and DoF. Now what about the rest of the exposure triangle? This is where equivalence-fu starts becoming really stupid. The proponents insist that you can use the same shutter speed for both m43 and FF and still arrive at the same image. They insist that the same shutter speed must be used so that both images show the same blurring due to subject motion (ROFL!!!). The example above then becomes:

m43: 25mm, f/5.6, 1/125s
FF: 50mm, f/11, 1/125s

Wait, doesn’t that underexpose the FF image? Indeed it does. By two stops, to be exact! Didn’t I say it was stupid? In what world are two images, two stops apart, considered “the same”? One is obviously darker. Much darker. Equivalence proponents must have something up their sleeves 🙂 You probably guessed it already. They say that you can bump up the ISO of the full frame shot so that it will be of the same brightness as the m43 shot! So now the example becomes:

m43: 25mm, f/5.6, 1/125s, ISO 100
FF: 50mm, f/11, 1/125s, ISO 400

Seriously?!!! Let’s be very clear about this. Bumping up the ISO does not increase light. ISO has absolutely no effect on exposure. Learn about that here. So why do equivalence-fu proponents suggest that this ISO bump will make both images equivalent? Their reasoning is quite simple and stupid: because both sensors have gathered “the same total amount of light”!!! Recall that each stop of exposure means twice the amount of light. Since an m43 sensor is four times smaller than a FF sensor, underexposing the FF by two stops (4x less light per unit area) will still result in the same TOTAL light captured by each sensor. If that isn’t stupid then I don’t know what is.
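The “total light” arithmetic being disputed here can be laid out explicitly. These are illustrative relative numbers only, not measurements:

```python
# Illustrative relative numbers only, not measurements.
m43_area = 1.0          # relative sensor area
ff_area = 4.0           # full frame is ~4x the m43 area

m43_exposure = 1.0      # relative light per unit area at f/5.6, 1/125s
ff_exposure = 1.0 / 4   # f/11 at 1/125s: two stops less light per area

m43_total = m43_exposure * m43_area  # 1.0
ff_total = ff_exposure * ff_area     # 1.0: equal "total light"...
print(m43_total, ff_total)           # ...but the FF frame is 4x darker per area
```

The totals match, which is the proponents’ claim; the per-area exposure, which is what a light meter actually reads, differs by the two stops objected to above.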

Let’s discuss this further with a simple experiment. Suppose we have an m43 camera and we shoot a scene using a 25mm lens. We can produce a full frame equivalent image of the same scene with the same AoV using the same m43 camera by stitching four shots from a 50mm lens. Refer to the illustration below:

[Illustration: four stitched 50mm shots covering the same angle of view as a single 25mm shot]

As you can see, the smaller single-shot image captured with the 25mm lens looks exactly the same as the larger stitched image, which is equivalent to what a full frame sensor would have captured. The narrower AoV of the 50mm lens means that we need four shots stitched together to arrive at the same AoV as the 25mm shot. Again, this shows that a FF sensor is four times larger than an m43 sensor. Same AoV, same DoF, but different image sizes due to the different sensor sizes.

Now let’s be stupid for a while and assume that equivalence is correct 🙂 In order for the single-shot image and the stitched image to have the same total amount of captured light, we will have to underexpose each of the four individual shots that make up the larger image by two stops. Since these four images are now much darker, we will have to bump their ISO by two stops to arrive at the same brightness as the single-shot image. At this point we have two “equivalent” images: the smaller, properly exposed m43 image, and a larger full frame image produced by stitching four underexposed m43 shots.

Common sense will tell you that the larger stitched image is every bit inferior to the single-shot image. Two stops inferior, to be exact. If you sample a quarter chunk of that larger image, it will always turn out much worse than the reference m43 shot. Take a quarter chunk from the top, bottom, sides, or center, and every single one of them will look much inferior to the original, properly exposed m43 shot. We can therefore say that the larger image is inherently inferior to the single-shot m43 image. So how can equivalence proponents honestly say that the underexposed FF shot is “the same” as a properly exposed m43 shot? You don’t need half a brain to realise that this is plainly stupid.

The stupidity does not stop here though. The equivalence-fu followers have something else to support their “theory”. They suggest that if you print or view the smaller, properly exposed m43 image and the larger, severely underexposed FF image at the same size, they will look exactly the same. Well, maybe they would look the same up to a certain extent. Recall that when you view or print an image at a smaller size than its original size, downsampling takes effect and results in lower perceived noise. This, however, has absolutely nothing to do with light gathering. As we have shown in our example, the underexposed FF image would be much darker than the reference m43 image if it were not for the ISO bump. Equivalence proponents are using image size to circumvent the destructive effects of underexposure, and they treat image size and light as one and the same. Image size has nothing to do with light. A 41Mp Nokia phone camera produces a larger image than a full frame 36Mp D800, although the former captures far less total light. This is why, if you are not careful, these equivalence-fu “photographers” will easily mislead you.

Let’s take this circus show to a higher level. Assume that total light and image size really are equivalent and related. In that case we could, in a sense, NOT increase the ISO of the underexposed full frame image but instead downsample it to the same size as the m43 image, and the two should then have the same brightness, right? After all, the same total amount of light has now been projected onto the same image area, which should result in the same exposure (total light over total area). But we know this doesn’t work, because downsampling or upsampling has no relationship to total light, and that is why the downsampled FF image remains two stops darker. So how can equivalence proponents honestly equate total light and image size? :-O
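The point that resampling leaves brightness untouched can be checked directly. A sketch, assuming downsampling is done by simple 2×2 block averaging:

```python
import numpy as np

# Downsample an "underexposed" frame by averaging 2x2 blocks.
# The mean pixel value (overall brightness) is unchanged, so
# resampling cannot make a two-stops-darker image any brighter.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.25, size=(8, 8))          # a dark frame
small = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))  # 8x8 -> 4x4
print(img.mean(), small.mean())  # equal up to float rounding
```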

So now we know that equivalence-fu relies on resampling to work around underexposure. Does this always work? No, it doesn’t. If you recall the discussion in the “Understanding Exposure” article linked above, bumping up the ISO does not increase light; it only increases gain. The analogy was the process of boiling water. Boiling pushes water to the top of the container but it does not increase the amount of water. If you underexpose far enough, you will come to a point where no more light is being captured. It’s like a container with no water: bumping the ISO, like boiling an empty container, does absolutely nothing. Image noise is more pronounced in darker areas, and underexposure only worsens the noise there. When you have no signal, there is nothing to resample. Downsampling will not always save you.

The nasty effects of bumping up the ISO cannot be ignored. Increasing the ISO also produces hot pixels, banding and other nasty artifacts. Why do you think cameras are limited in how high you can set the ISO sensitivity? Why can’t we bump the ISO indefinitely? Because the truth is, high ISO sucks regardless of sensor size. Imagine an ISO 6400 shot from an m43 Olympus E-M5 compared to an ISO 25600 shot from a full frame Nikon D800. How much worse does it get if you now compare a point-and-shoot camera with a 5x crop factor to that D800? Five stops of underexposure is A LOT and really bad. I mean really, try underexposing a night shot on your D800 by 5 stops then bump it up in Photoshop. Crash and burn, baby!

If you think that’s bad then consider shooting slide film. How big is a sheet of film for an 8×10 view camera compared to a measly 35mm frame? For the sake of argument let’s just say the size difference is 5x. Do you really believe that if I shoot Fuji Velvia on 35mm, and then underexpose Velvia on the 8×10 camera by five stops and push it during development, the images will look “the same”? If this were negative film then maybe you could get away with it, but don’t even attempt that kind of circus act with slide film. Slide film is very unforgiving when it comes to exposure. Five stops is almost the entire usable dynamic range of slide film!!! If a photographic “theory” fails miserably with film then that “theory” is simply wrong. In the case of equivalence, it’s bullshit, plain and simple.

So to answer that dpreview article’s question: “should you care about equivalence?”. Not if it’s wrong and stupid.


I can’t believe that people keep spreading this nonsense. Here’s another funny equivalence-fu fauxtographer: equivalence for imbeciles

Examine his illustration of the effect of different apertures, f/8 and f/4. He is totally oblivious to the effect of focal length on light intensity. Note that although f/8 and f/4 here have the same physical aperture size, the longer focal length of the f/8 lens spreads the light over a wider area of the sensor. The net effect is that each sensel behind the longer f/8 lens receives far fewer photons than the sensels behind the shorter f/4 lens. The result is underexposure, seen as a darker image. Two stops (or 4x less light) of underexposure, to be exact. This obviously corresponds to noisier sensel output and therefore a noisier image.
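The geometry just described can be sketched numerically. The 12.5mm aperture diameter below is an assumed value, chosen only so the two focal lengths land on f/4 and f/8:

```python
# Same physical aperture diameter behind two focal lengths: the longer
# lens has twice the f-number, and light per unit area falls with the
# square of the f-number -- two stops less at f/8 than at f/4.
aperture_diameter = 12.5  # mm, identical physical opening (assumed value)
for focal_mm in (50.0, 100.0):
    f_number = focal_mm / aperture_diameter
    per_area_light = 1.0 / f_number ** 2
    print(f"{focal_mm:.0f}mm -> f/{f_number:.0f}, relative light per area {per_area_light:.5f}")
```

The f/4 case receives exactly four times the light per unit area of the f/8 case, i.e. the two stops stated above.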

How can two images with different exposures be equivalent?! Such an idiotic explanation is the result of an epic failure to understand very basic photography. Exposure is totally independent of sensor size. The same f-stop results in the same number of photons per unit area regardless of imaging format. Always. Same f-stop means same exposure, meaning the same brightness.


33 thoughts on “Debunking Equivalence”

  1. This article is simply too much unintentional comedy 🙂
    No, didn’t bother to read it, just glanced a bit.

    For example you wrote:

    “m43: 25mm, f/5.6, 1/125s
    FF: 50mm, f/11, 1/125s

    Wait, doesn’t that underexpose the FF image? Indeed it does”

    And that is just so silly thinking by you!

    Both images collect the same amount of light, so why would one image be “underexposed”? What is the “correct” exposure? How do you define it? Why would the FF in the above situation be “underexposed” if it collects the same light, which is what it does. If you were to use the same exposure values, the FF would collect four times more light, thus the image would have twice the signal-to-noise ratio, or in stupid people’s terms, it would be less noisy.

    Why the exposure centricness? What’s the point of it?

    Also, what is “exposure triangle”? (I hope you don’t think ISO is part of exposure, as it’s not.)

    Anyhow, two images can be considered to be identical when they have been drawn by the same amount of light and were taken with the same angle of view and shutter speed. That’s it. And a FF captures the same amount of light if the f-number is twice that of what m43 f-number is.

    If for example we have the situation you mentioned above, the image data as sampled by the image sensors is essentially identical (or to be more correct, the information would be identical if the sensors had the same aspect ratio and the same performance metrics like QE, read noise and so on).

    Why is it so hard to accept equivalence as a fact of physics? You could easily test it yourself if you bothered – only one camera is needed. If you do not, then you’re just lazy and arrogant.

    Do you write this blog because you can then be in control of the “discussion”, instead of having to argue with reason, evidence and logic as would be the case if you were to discuss in certain photo forums? Or is it because you prefer to have a rather aggressive and dismissive way with words, insulting people and so on?

    You really should participate in dialogue in the forums as it could benefit you – you’re lacking so much in knowledge and relying on myths and gut feeling that it would do you good to learn a bit. After that, writing a blog might be sensible – right now it seems to be nothing more than a symptom of fragile mental health. I don’t mean to say you’re crazy or anything like that – I don’t think you are – but you’re severely lacking in the ability to handle criticism, and that is an issue which may cause you lots of trouble over your life. That is one reason why I think it would do you good to try civilized discussion in, for example, the DPR forums. If you feel anxious because of the discussion, just take a break, create a new nick and start over. I’ve done that many times and I’m getting along much better nowadays, and can handle criticism without feeling it to be an attack on my person.

    I sincerely hope you take my advice.

    1. You said:

      “m43: 25mm, f/5.6, 1/125s
      FF: 50mm, f/11, 1/125s”

      “Both images collect the same amount of light, so why would one image be “underexposed”?”

      OMG! You don’t even understand basic photography. Have a look at the exposure comparison. One is f/5.6 the other is f/11. For someone who pretends to be a photographer you are BLIND!!! And that is why I am not going back to DPR. That forum is full of idiots like you.

      Can you show me your gallery instead?

      1. Haha. Downsampling does not increase the total amount of light. BUT, when binning pixels together, it is somehow similar to “put the light that each pixel captured into one pixel”, and therefore higher SNR. Hard to understand? Hmm… IQ issue.

      2. ROFL!!! Are you really a CMOS designer?! If you dump the light of several pixels into one pixel, wouldn’t that saturate and therefore overexpose that pixel? Bwahaha! You are getting dumber with every reply that you post.

      3. Here’s a simple question, which a CMOS designer should be able to answer:

        Supposing you capture a standard 18% grey card with a 36Mp D800. Now you downsample it to DXOMark’s standardised 8Mp print size. If you keep the same amount of light from 36Mp to 8Mp, wouldn’t the 8Mp print look brighter than 18% grey?

        The smaller print remains 18% grey though so where did the 28Mp of light go? So you think that downsampling has got anything to do with light?

        Mind-boggling if you are a stupid equivalence advocate.

      4. Oh my… didn’t you notice the phrase “somehow similar”? When binning pixels you’re dealing with information, which is converted from the amount of light, so no saturation problem at all. Do you even know how your camera works? Can’t believe I’m educating such an idiot XD

      5. I totally understand why you are so angry about this DPREVIEW article. So someone finally realized that this newly announced 40-150/F2.8 is actually an 80-300/F5.6 equivalent, wow! 880 grams and 1500 dollars!

        Canon 75-300/F4-5.6 is 480g and 250 dollars!

        Sigma provides a 180 dollars 550g option with 0.25X magnification! It scores 14MP sharpness on DXOMARK and the best record for M43 zoom lenses is just 10MP!

        O!M!G! so much stupidity tax!

        oh man, I cannot imagine how painful it is for M43 users to know the truth. It’s so brutal, just too much. I shouldn’t have been so mean to you. Please accept my apology, and also my condolences XD

      6. Dude, if you are not dumb, f/2.8 is f/2.8 in terms of speed regardless of format. M43 has the reach, weight, size and DoF advantage. I own MF, FF, APS-C and m43 cameras and being small without sacrificing image quality has definite advantages. Go to my flickr page and see if you can pick which one is which. If you can shoot, gear does not matter. Those who can’t shoot compensate by buying the largest gear. Is that you? ROFL!

      7. oh that’s too easy, bring something harder.

        full well capacity of D800: close to 50k.

        just for convenience: 18% of 50k is 9k, so let’s call it 10K

        how much is the strength of the shot noise? Poisson distribution, so 10K^0.5=100

        how about the circuit noise? less than 5, negligible, because (5^2+100^2)^0.5=100.125. welcome to the world of noise, where winner takes all


        now let’s merge 4pixels

        the total signal becomes 10k*4=40K

        the total noise becomes (100^2*4)^0.5=200


        wait, isn’t 40K too bright?

        stupid question. we merged 4 pixels, of course we need to divide the value by 4. or bump the full well capacity to 200K, does not matter, it’s just digital processing

        so now signal = 10K, noise = 50, still 46dB SNR


        what if we slice this nice sensor into 4 pieces? we got a M43 sensor, with 9MP resolution

        with the same F, same shutter, same ISO setting, you got 40dB for each pixel, 6db less than the D800 at 9MP output

        how to compensate for that? well, you have to use ISO 25. sadly, there is no ISO 25 on your camera.

        but you know, if the FF is shooting at ISO400, at least you can use ISO100 to get a similar result.


        now I am surprised that I typed so much stuff to educate an idiot, to answer this super stupid question. I’m so patient LOL
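The merge arithmetic above can be checked in a few lines (same illustrative electron counts, nothing measured):

```python
import math

# Per pixel: signal 10k e-, shot noise 100 e- -> 40 dB SNR.
signal, noise = 10_000.0, 100.0
snr_db = 20 * math.log10(signal / noise)            # 40.0
# Merge 4 pixels, then renormalize by 4: noise adds in quadrature.
merged_noise = math.sqrt(4 * noise ** 2) / 4        # 200 / 4 = 50
merged_db = 20 * math.log10(signal / merged_noise)  # ~46 dB
print(round(snr_db, 1), round(merged_db, 1))
```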

      8. ROFL! So with all the calculations for full frame you still end up THROWING AWAY the excess light to keep the exposure. So it’s not really about total light, no? It’s more about interpolation and noise cancellation. It’s all about image size not total light.

        Print normalisation is nothing but viewing full print sizes from different sensor sizes at different distances. You view a 36Mp D800 from a farther distance compared to a 16Mp D7000 (both sensors are intrinsically identical). No need for your complicated math to explain the effects of total light because it’s not. It’s all about print/view magnification which has absolutely got nothing to do with light. Pffft!

    2. did I throw the light away? hmm…no. otherwise the SNR for each pixel will remain 40dB. But obviously, you have already thrown your brain away long time ago

  2. I think you misunderstood that one also needs to boost ISO on FF to get the same output brightness for equivalence. It is clear from physics that if you only double focal length and aperture, ISO must be multiplied by crop² (four in this example, due to the larger area). This is correctly stated on DPreview, among other places. The difference then boils down to sensor generation, which should give you the same image in terms of S/N, DOF and AOV for the same sensor generation and optics. Advantage of FF over m43, for example: you can increase S/N by lowering ISO and opening the aperture on most lenses, yielding cleaner images (with shallower DOF, of course).

    1. Yep. If you read the article it says:

      m43: 25mm, f/5.6, 1/125s, ISO 100
      FF: 50mm, f/11, 1/125s, ISO 400

      So yes, boosting the ISO is required to compensate for underexposure. If you read the article again (assuming you already read it) then it will explain why boosting the ISO (digital) or pushing development (film) does not make equivalence correct. Not one bit.

      1. Sorry, no. First, your example of using ISO 25600 on the D800 is moot, as you usually don’t go that far and it is not even a native ISO. As I have a D810, EM1 and XT1, I never go above ISO 2000 on the m43, because the noise is too much for my needs (for web size prints it is o.k.). Downsampling is fine, as you increase the light sensitive area (it doesn’t matter that the S/N ratio of an individual pixel is lower, if read noise is not the limiting factor). I suggest reading the more theoretical analysis here: comparing directly the EM1 and D610, for example. Or DXOmark, to compare pixel S/N ratio and S/N ratio after downsampling to 8Mpixel (which increases S/N, of course), or the LL article I linked before, written by a physicist (I’m one too, but you obviously don’t believe me). Have a nice day, I provided enough information, that’s it for me.

      2. ISO 25600 is not moot. It’s proof that equivalence is stupid. It’s proof that total light alone with complete disregard for correct exposure is completely wrong. It’s no different to comparing m43 at ISO 100 and FF at ISO 400 since the latter is NOT base ISO as well. It’s like pushing film.

        And why are you ignoring read noise? Read noise has always been present. Are you saying that equivalence only applies in special, totally unrealistic cases? Then it’s doubly wrong.

        Yeah, you too keep believing the myth and have a nice day.

      3. 1. In that scenario both cameras collect the same amount of photons.
        2. Those photon counts are represented in the raw file.
        3. If both cameras have the same pixel count, QE, read noise, etc., then the numeric information (not data, but information – they’re different) will be identical.
        4. The differences in QE etc. are in practice very small.
        5. In the above situation it is impossible to tell in a blind test which output image was taken with which camera (ignoring optical differences).

        A thought experiment: we have two 10MP sensors, FF and m43, with a “perfect lens and sensor” – this simplifies the analysis. Suppose the very same photons hit both lenses. This leads to a situation where both cameras record absolutely identical data. Now, why would one of them need to be “boosted”?

      4. One word: exposure. More precisely, correct exposure. Something that you do not understand. Exposure is controlled by f-stop and time. Different f-stops, same time, result in different exposures. I won’t even try to educate you further.

  3. About noise sources: try – that’s by a physics professor who also has a hobby of writing raw-decoder. Or maybe you know better than him. Or better than the whole image sensor industry.

    Anyhow, a typical image sensor today has about 3 electrons of read noise per pixel. The noise of light, i.e. photon shot noise, is exactly the square root of the number of electrons in the signal in a pixel (i.e. the number of electrons excited by photons). A typical pixel captures between 20,000 and 100,000 photons, depending on the sensor (mostly on pixel size).

    Let’s imagine a 50k electron pixel (at ISO 100) – my camera has about that capacity and about 3 electrons of read noise. At saturation the shot noise would be 223.6 electrons; if we go down 10 stops from saturation (deep in the shadows at ISO 100, or saturation at ISO 102400) the signal is about 49 electrons, thus the photon shot noise is about 7 electrons. If the read noise is 3 electrons, then the total noise for the pixel is sqrt(7e^2 + 3e^2) = 7.6 electrons. (Noises add up in quadrature.) Thus even this deep in the shadows the read noise is not that important. For the whole image you add up the noises of pixels in quadrature, and the signals by simple summing.
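Those electron counts can be verified with the same assumptions (50k full well, 3 e- read noise, ten stops down from saturation):

```python
import math

full_well, read_noise = 50_000.0, 3.0
shot_noise_sat = math.sqrt(full_well)       # ~223.6 e- at saturation
signal = full_well / 2 ** 10                # ten stops down: ~49 e-
shot_noise = math.sqrt(signal)              # ~7 e-
total = math.sqrt(shot_noise ** 2 + read_noise ** 2)  # quadrature: ~7.6 e-
print(round(shot_noise_sat, 1), round(signal), round(total, 1))
```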

    Also, you seem to forget that since the saturation capacity is largely a function of area, the full frame sensor has in principle about 4 times larger capacity than the m43 at the same ISO, and about the same capacity at a four times higher ISO.

    Anyhow, are you willing to learn, or am I just stupid writing here?

    Anyhow, if you’re not improving, you will be a good source of many laughs all over the internet. Not fun for you, I’m afraid.

      1. It actually proves everything, but you are either too stupid to understand, or too much of a narcissist to accept that you’re wrong.
        Or a bit of both, as I think is the case.

        This blog has to be the worst photoblog there is. I wonder if you are in a competition for that.

        Stupid boy narcissist, unable to think and discuss.

  4. stupid article. as a CMOS designer let me tell you something: a FF CMOS at ISO 400 has a similar SNR to an M43 CMOS when both images are normalized to the same size (or resolution). So yes, they’re equivalent. It is not the exposure but the total amount of light that determines the image quality. And yes, the read noise at high ISO is negligible.

    1. As a CMOS designer I’m amazed that you completely missed the part where I explained why normalised print size has got nothing to do with light. Obviously being a CMOS designer does not guarantee sanity. Being a CMOS designer does not imply that you know photography as a whole any more than a lens designer does. Being a car designer does not guarantee that you know how to drive. Your comment is pointless. Dropping your credentials does not make you any more credible.

      For the record, photography flourished since the FILM days. You could buy Velvia and Ektar in just about any size. You exposed and developed them exactly the same way REGARDLESS of format. That makes your CMOS designer credentials completely irrelevant to this post.

      1. Unfortunately no one (at least not me) said downsampling will increase light, that’s just in your imagination. We are talking about increasing SNR. Totally different things.

        FF:F4+1/60+ISO6400+Downsampling to 16MP

        Same amount of light, similar level of SNR. And in most cases FF lenses have better sharpness. End of story.

        Equivalence theory is just common sense for me and my colleagues. Basic physics, that’s all. I’m actually surprised that DPReview finally wrote something about it. And clearly it’s too much for some M43 users who have already paid their fair share of stupidity tax lol

      2. LOL! Yeah downsampling does not increase light so why do you attribute the increase in SNR to light?! OMG! Resampling and amount of light are totally independent of each other but you treat them as if they are one and the same. CMOS designers can be really stupid some times.

  5. So someone who was defending the image quality of M43 now claims that the image quality does not matter. Interesting.

    I don’t even bother to look at your flickr page. Even if you are really a good photographer, it does not change the fact that you are just a moron who claims that the Sun travels around the Earth, end of story. Your skill of photography has nothing to do with this, learn some logic man.

    1. I dare you to pick which shot was FF or m43. A moron is the one who claims to be a CMOS designer but totally misses the whole point of photography and therefore can’t shoot so he trolls the internet. I’m talking about you in case you missed it. 🙂

  6. I’m coming in here really late. This equivalence thing is getting really silly. The reason we even have it was to explain the differences between APS-C and FF, which was established as 35mm. (Therefore medium format can be described as Super Frame?) I digress – OK – I get the non-equivalent EV of f/5.6 and f/11, both at 1/125 sec. What I can’t wrap my head around is the increase in ISO not affecting exposure. I understand we’re talking about “gain”, since any increase in ISO beyond base is an amplification of the signal. In the FF example, one would have to slow the shutter speed to 1/30 to have the same EV as the m4/3 exposure. But if you can’t slow the shutter, one has to resort to increasing the ISO – introducing gain. Get technical. As an old EE, I can handle it.
