The Many Faces of Lake Moogerah

I discovered Lake Moogerah by accident. I was driving towards Warwick, a city southwest of Brisbane, when I stumbled upon this magical place. Since then, I have been camping at and photographing the location. The spot never disappoints; I find something new every time I visit.

Over the past two weeks I have shot Lake Moogerah twice, and I could not help but notice how quickly it changes. There is no better way to show that than with a few sample shots of this beautiful place.

moogerah3

That’s Lake Moogerah during sunset when the sun is just kissing the horizon. A few minutes later, the warm light is replaced by fiery clouds:

moogerah2

If you stay until it gets dark and wait for the moon to rise, you get warm light again. This one is when the moon is just above the horizon:

moogerah1

Close to 11PM, when the moon is high in the sky, you get much cooler colours. It looks something like this:

moogerah4

Notice the stars hiding behind the clouds. 🙂

And that’s Lake Moogerah in four shots.


Easiest Way to Get a Good Shot

ship

Here is a very simple tip if you want to capture nice photos: find ONE subject and isolate it from everything else. That’s it.

Why do you think shallow depth-of-field portrait shots look nice? It's not just the creamy, blurred background: shallow DoF isolates the subject from any background distraction. If the background is simple and non-distracting, you do not need shallow DoF to get a good portrait shot. Studio shots, where the photographer has full control of the environment, are normally taken at f/5.6, f/8 or even f/16 because the subject is already isolated.

P1160162-small

The main reason n00bish shots look crap is that beginners tend to cram everything into the frame. This goes especially to the n00b landscape photographers who would sell their kidneys just to get the widest lens possible. They want it ultra-mega-wide so they can include EVERYTHING in the frame. That's the quickest way to get a crappy shot. STOP.

boat

Find a subject that you like and have a really good look at it, then ask yourself: what is it about this subject that I really like? Is it the entire subject or just some part of it? Is it because the subject is in a particular environment? If you can't answer those simple questions then your shot will look crap.

jetty

Once you find your subject, concentrate on it. Isolate it from everything. You may have to zoom in or get closer to your target. Do everything you can to single out the subject, then take the shot. Now check your LCD and assess whether you like the framing. If it looks too empty or too simple, find something that will complement the subject. Zoom out or try a different angle. Just make sure, when you do include more elements in the frame, that they enhance the subject and do NOT conflict with it.

three-sisters

So again, the quickest way to get a nice shot is to pick ONE subject and make sure that nothing else is in the frame. Go out and try it. You’ll thank me.

Canon 5DS: Why I think it’s crazy

Fifty megapixels! Fifty! On a measly 35mm sensor!

Let me be very blunt about this. It’s a stupid idea.

How much resolution do you really need? Here’s a hint: how close do you have to sit in front of your 60″ full HD TV before you start noticing the individual dots? Have you EVER printed any of your shots as big as a 60″ screen? Do you know that a full HD TV is only 2Mp? And if you are not satisfied with full HD, how about a 60″ 4K TV with an effective resolution of only 8Mp?
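To put numbers on that, here is a quick back-of-the-envelope sketch; I am assuming "4K" means the usual UHD 3840×2160 panel.

```python
# Rough megapixel counts for common display resolutions.
resolutions = {
    "Full HD (1920x1080)": (1920, 1080),
    "4K UHD (3840x2160)": (3840, 2160),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} Mp")

# Full HD (1920x1080): 2.1 Mp
# 4K UHD (3840x2160): 8.3 Mp
```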

Let me ask you, have you ever been to a drive-in movie? They project a standard-definition movie onto a very, very big screen and yet we do not really notice the pixelation. That's basically an image smaller than 2Mp "printed" as wide as an entire street block! I have never heard anyone complain about drive-in movie resolution.

What does 50Mp imply? It means you will need more storage and more computer processing power. It also means you can no longer just shoot without a tripod at the usual shutter speeds, because even very minor movements become very obvious in your photos. Not only that: it also means shooting with premium lenses. At 50Mp, you need a lens that peaks at f/5.6 at the very least. In case you are unaware, you also need to shoot at f/5.6 or wider to fully utilise the entire 50Mp, because stopping down further results in a massive drop in effective resolution. So if your shots demand f/8 and beyond, the 5DS is really not meant for landscape photography. Some say the 5DS is meant for studio work where you shoot portraiture and fashion. Who prints their portrait shots that big? If you do want to print at billboard sizes then you only need 2Mp (as in our drive-in movie example).
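If you want to see roughly where the f/5.6 figure comes from, here is a sketch. The 36×24mm sensor size, the 50-million-pixel count, the 550nm green light and the "Airy disk wider than about two pixels" rule of thumb are my own assumptions; treat the output as ballpark numbers, not a lab measurement.

```python
import math

# Hypothetical ~50Mp full-frame sensor (36mm x 24mm).
sensor_w_mm, sensor_h_mm, pixels = 36.0, 24.0, 50e6
pixel_pitch_um = math.sqrt(sensor_w_mm * sensor_h_mm / pixels) * 1000  # ~4.1 um

wavelength_um = 0.55  # green light, ~550nm

for f_number in (2.8, 4, 5.6, 8, 11, 16):
    airy_um = 2.44 * wavelength_um * f_number   # Airy disk diameter from diffraction
    limited = airy_um > 2 * pixel_pitch_um      # rough visibility rule of thumb
    print(f"f/{f_number}: Airy disk ~{airy_um:.1f} um vs pixel {pixel_pitch_um:.1f} um"
          f" -> diffraction {'dominates' if limited else 'not yet dominant'}")
```

Under those assumptions, f/5.6 is roughly the last stop before diffraction takes over, which is consistent with the point above.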

It seems to me that manufacturers, struggling to improve sensor performance, resort to increasing the megapixel count instead. In the case of Canon, why not concentrate on improving their mediocre sensors rather than engaging in the megapixel slugfest?

It doesn’t make any sense but I’m sure somebody will find some good use for such massive images. Go ahead and buy the 5DS if it scratches your itch.

In Search of Sunlight

It's been a while since I last posted anything on this blog. I have plenty of excuses, though. First, work gets in the way whenever the weather seems ideal for a photo shoot; on the few occasions the weather did cooperate, I was assigned 24/7 on-call shifts 😦  Then there's another hobby of mine that has been competing with photography: music. I was gigging around Brisbane before I shifted to photography. My day job required me to fly all around Australia to conduct training and consultancy work, and because of that I had to quit my band. Travel was taking its toll and I needed something that would sustain me and keep me excited, so I decided to take photos. Photography stole me away from music, and now it was payback time for my guitar.

When the Christmas holiday season started, I still could not shoot. I already had two gigs booked, which required me to learn about 20 songs, most of which I had not heard or played before. But that's over now and, as always, the bad weather strikes whenever I am free to shoot.

The weather forecast tells me it's going to be stormy for at least 10 days. By then my holiday break will be over. Heck, the year will be over. So yesterday I decided to make a suicide run.

I knew it was going to be very gloomy, so I had to pick a subject that works well in overcast conditions. Waterfalls and creeks came to mind, but I find them unpredictable and dangerous, especially with the non-stop rain. I chose to shoot flowers.

With my gumboots and my trusty weather-sealed Olympus E-M5 and 12-50 kit lens, I made a two-hour suicide drive into unknown weather conditions. The destination was a small town called Allora, where I was supposed to find sunflower fields. The Willy Weather iPhone app told me rain was expected in the morning and afternoon, so I started driving at 10AM hoping to get there by midday. Midday is usually bad for landscape photography, but the overcast sky should give me the soft light I needed for the flower shots.

I got there at exactly 12 noon but could not find any sunflowers. There was a tourist drive called the sunflower route, but it seemed the sunflowers had already been harvested. After an additional 10km of driving I finally found acres of sunflower fields. What was really surprising was that the field was unfenced. It is quite rare here in Australia for something like this to be totally unfenced, and I did not see a "No Trespassing" sign anywhere. I parked on the road shoulder and started framing shots. After about 20 frames, I called it a day and started the long drive home.

EM590225-framed

EM590234-framed

EM590246-framed

EM590252-framed

Such is the beauty and frustration of landscape photography. You go into the unknown hoping that you will return with some decent shots. In my case, a four-hour drive and a late 3PM lunch got me four frames that I thought were good enough. No, I am not really happy with them, but it is better than nothing. I hadn't shot for a few months and I needed to break the spell.

That’s it for me. (Belated) Merry Christmas and may all of you have a prosperous 2015!

Understanding the Effects of Diffraction (Part 2)

This article is a continuation of my previous post on understanding the effects of diffraction. That article caused a long-winded discussion because some people decided to dig deeper into diffraction without fully understanding some fundamental concepts. Add to that some bogus resolution graphs and the discussion went from bad to shite.

In the interest of further learning, let’s go back to the very basic principles behind lenses and light.

LENS ABERRATION

The main purpose of a photographic lens is to focus light onto the camera's sensor. Ideally, an incoming point light source is projected onto the sensor as a point. The reality is not quite that simple. Light rays near the center of the lens pass straight through the glass without any problems. However, light rays that do not pass through the center have to bend so that they meet the other rays at the same focal point. The farther a light ray is from the center, the more sharply it has to bend. The problem is that lenses are not perfect. These imperfections, or aberrations, result in imprecise bending of light. Light rays near the edges of the glass don't quite hit the focal point: some of them fall just before the sensor and some of them fall behind it. The point light source is then projected onto the sensor no longer as a point but as something much larger. Refer to the simple illustration below. The red ray hits the focal point, the blue ray almost hits it, but the green ray, which is very near the edge, misses it completely.

Screen Shot 2014-10-21 at 8.29.24 pm

There are ways to work around lens aberrations. The most common is to close down the pupil and eliminate the light rays near the edges of the lens. In photography, this is what happens when you close down or "stop down" the aperture. In the illustration below, the narrow pupil has eliminated the out-of-focus green ray, leaving only the red and blue rays, which are better focused.

Screen Shot 2014-10-21 at 8.27.30 pm

The result is a smaller projected point that is truer to the original point source. The overall image projected onto the sensor will look sharper. Closing down the pupil has therefore improved the lens's performance by using only the center of the glass. The downside is that since the pupil has eliminated some of the light rays, the resulting image will also be darker. The bottom line is that you trade brightness for sharpness.

DIFFRACTION

As discussed above, closing down the pupil improves the performance of the lens. In principle, you could make the pupil as narrow as you want and, as far as aberrations are concerned, the lens performance would keep improving.

There is a problem, though, that is not quite the fault of the lens itself; it comes from a property of light. Light changes direction when it hits an edge or passes through a hole. This change of direction is called diffraction. Diffraction is ever present as long as something is blocking light. So although a narrower pupil improves lens performance, light spreads out of control when it passes through a narrow opening. The narrower the pupil, the more the light changes direction uncontrollably. It's like squeezing a hose with running water: the tighter you squeeze, the wider the water sprays. In the end, light rays still miss the focal point and we are back to the same dilemma, where our point light source is projected at a much bigger size on the sensor.
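The spreading can be quantified with the standard Airy-disk approximation, which I won't derive here: the diameter of the blur spot from diffraction alone is roughly 2.44 × wavelength × f-number. A minimal sketch, assuming green light at about 550nm:

```python
def airy_disk_diameter_um(f_number, wavelength_um=0.55):
    """Approximate diameter of the diffraction blur spot on the sensor (microns)."""
    return 2.44 * wavelength_um * f_number

for n in (2.8, 4, 5.6, 8, 11, 16, 22):
    print(f"f/{n}: ~{airy_disk_diameter_um(n):.1f} um of blur from diffraction alone")
```

Notice how the blur spot grows as the pupil gets narrower, exactly the squeezed-hose effect described above.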

DIFFRACTION-LIMITED LENS

We are now ready to understand what a diffraction-limited lens means.

Recall that, depending on the size of the pupil, light rays that are farther from the center of the lens will miss the focal point, causing a point light source to be projected much larger on the sensor. Let's say that, because of these aberrations, the point source is projected with diameter X on the sensor.

Now pretend for a moment that the lens has no such problems and is perfect, with no aberrations whatsoever. Recall that at the same pupil size, light diffracts (spreads) in such a way that some rays still miss the focal point, again resulting in a larger projected point, this time of diameter Y.

So now we have two different sizes of the projected point: size X caused by lens aberrations and size Y caused by diffraction (assuming that the lens was perfect).

If X is smaller than Y then the lens is said to be diffraction-limited at that pupil size or aperture. This means that the main contributor to image softness is diffraction rather than lens imperfections. The optimum aperture of the lens is the widest one at which X remains smaller than Y. Simple.

If X is larger than Y, the problem becomes a bit more complicated. It means that lens imperfections dominate over diffraction, so you can choose to make the aperture narrower to improve lens performance. Stopping down will of course decrease X but will increase Y. It becomes a delicate balancing act between lens imperfection and diffraction. This is a common problem with cheap kit lenses. At larger apertures, kit lenses have aberrations so bad that the images they produce look soft. So you stop down to f/8 or f/11, and by then diffraction kicks in and softens the image anyway. It's a lose-lose situation. That is why premium lenses are expensive: they are sharp wide open, where diffraction is negligible.

A lens that is diffraction-limited at f/5.6 is considered very good. A lens that is diffraction-limited at f/4 is rare. A lens that is diffraction-limited at f/2.8 is probably impossible.
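To make the X-versus-Y balancing act concrete, here is a toy sketch. The diffraction blur Y uses the Airy-disk approximation above; the aberration blur X uses a made-up model in which the blur shrinks roughly with the cube of the aperture diameter (a common behaviour for spherical aberration), scaled by an arbitrary constant k. The numbers are invented purely to show how you would find the widest aperture at which X stays below Y.

```python
def diffraction_blur_um(f_number, wavelength_um=0.55):
    # Y: Airy disk diameter from diffraction alone
    return 2.44 * wavelength_um * f_number

def aberration_blur_um(f_number, k=2000.0):
    # X: toy aberration model; blur shrinks quickly as you stop down.
    # k is an arbitrary constant standing in for how bad the lens is.
    return k / f_number ** 3

f_stops = [2.8, 4, 5.6, 8, 11, 16]
diffraction_limited = [n for n in f_stops
                       if aberration_blur_um(n) < diffraction_blur_um(n)]
print("Diffraction-limited from f/%.1f onwards" % min(diffraction_limited))
```

With this particular (hypothetical) k, the crossover lands around f/8, which is the kind of behaviour you would expect from a cheap kit lens.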

Let’s summarise the discussion:

1. Lenses are not perfect. Aberrations will cause the light rays to miss the focal point thus resulting in loss of sharpness.
2. Lens performance improves as you stop down the aperture.
3. Diffraction is a property of light that forces it to change direction when passing through holes. This causes light rays to miss the focal point thus resulting in loss of sharpness.
4. Diffraction is always present and worsens as you stop down the aperture.
5. A lens is diffraction-limited at a given aperture if the effects of aberrations are less pronounced compared to the effects of diffraction at that aperture.

That’s it for now. In the next article, we will discuss the effects of lens aberrations and diffraction on sensors.

Understanding the Effects of Diffraction

This post is a continuation of the previous article that I wrote about resolution and diffraction. I highly suggest that you read that one first so that you will gain a basic understanding of these concepts.

One thing that a lot of people still fail to understand is the absolute effect of diffraction on image resolution. A common argument for buying a higher megapixel camera is that it will "always" resolve more detail than a lower megapixel camera. That is true, but only until you hit the diffraction limit. For example, a full frame camera shot at f/16 will not resolve detail beyond roughly 8Mp. That is, a 36Mp D800 will not give more detail than a 12Mp D700 when both are shot at f/16; both will have an effective resolution of only about 8Mp.

To explain this, let us consider a very simple analogy. When you are driving at night in complete darkness, it is very difficult to tell whether an oncoming vehicle is a small car or a big truck if you judge only by its headlights. This is because the apparent separation between the left and right headlights depends on the distance of the vehicle from you: the farther away the vehicle, the closer together the headlights appear. If the vehicle is far enough, the two headlights seem to merge into one and you would think it's a bike instead of a car. The reason is simple: light spreads. Both headlights spread until they seem to merge and become indistinguishable from each other. Diffraction is the same. Diffraction spreads light and you lose the details. It therefore doesn't matter whether you have two eyes or eight eyes like a spider; you still won't be able to distinguish two separate headlights if the oncoming vehicle is very far away. In this case, eight eyes are no better than two. Both sets of eyes still see one headlight, not two. Think of the "number of eyes" as your sensor resolution. It does not matter whether you have 8Mp or 2Mp; both cameras will detect only one headlight. Did the 8Mp sensor lose resolution? No, it remains an 8Mp sensor. Did it manage to detect two headlights? No. Therefore, in our example, 8Mp is no better than 2Mp at resolving the number of headlights.

The point is that diffraction destroys detail. When there is nothing left to resolve, sensor resolution does not matter. Suppose you have two lines that are very close together: diffraction will spread both lines so that they appear to merge into one big line. If you only have one line to resolve, it does not matter whether you have a 2Mp camera or a 100Mp camera; both will detect only one line. The 100Mp camera will of course have more samples of that single line, but it is still just one line. Diffraction does not affect the sensor's resolving power, but it does affect how the subject is presented to the sensor. Diffraction blurs the subject in a way that limits what the sensor can detect.

With that in mind, let us look at practical examples. For a full frame sensor, diffraction at f/8 is enough to blur the subject such that anything higher than approximately 30Mp will not resolve any more detail. For each stop, the effective resolution drops by half: at f/11 the limit is about 15Mp, at f/16 it's 8Mp, and at f/22 a measly 4Mp. These numbers are approximations and assume a perfect lens; in reality the values are even lower.

How about smaller sensors like APS-C or m43? The effective resolution drops with the crop factor (roughly with its square, since it is the sensor area that shrinks). So an APS-C sensor shot at f/8 only has a maximum effective resolution of around 15Mp, m43 around 8Mp, and so on.
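Here is a rough sketch of that arithmetic: start from roughly 30Mp at f/8 on full frame, halve it per stop, and shrink it by the square of the crop factor for smaller formats. These are back-of-the-envelope numbers that only reproduce the ballpark figures quoted above, assuming a perfect lens.

```python
import math

def effective_mp(f_number, crop_factor=1.0, base_mp=30.0, base_f=8.0):
    """Very rough diffraction-limited resolution: ~30Mp at f/8 on full frame,
    halved per stop, scaled down by sensor area for smaller formats."""
    stops = 2 * math.log2(f_number / base_f)  # stops past f/8
    return base_mp / (2 ** stops) / crop_factor ** 2

for f in (8, 11, 16, 22):
    print(f"f/{f}: FF ~{effective_mp(f):.0f}Mp, "
          f"APS-C ~{effective_mp(f, 1.5):.0f}Mp, "
          f"m43 ~{effective_mp(f, 2.0):.0f}Mp")
```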

Here are MTF graphs for a Nikon 50/1.4 lens comparing a 16Mp D7000 (crop sensor) with a 36Mp D800 (full frame) at f/5.6 and f/16 respectively. Notice that the resolution at those settings is very similar.


So what are the implications? If you are a landscape photographer with a 36Mp Nikon D800 and you shoot at f/8, f/11 or maybe f/16 to gain enough depth of field, you are basically wasting disk space. At f/8, your 36Mp sensor is no better than a 30Mp sensor. At f/11 it's no better than a 16Mp D4. At f/16 it is no better than a very old 12Mp D700. So a 36Mp sensor shot at narrow apertures cannot capture any more detail, yet the image size remains the same and still consumes a 36Mp file's worth of disk space. If you shoot at f/16, for example, you are better off shooting with a 12Mp D700. If you want to print as big as a 36Mp camera allows, just upsize your 12Mp image in Photoshop to the equivalent of a 36Mp image. Of course the upsized image will not gain any detail, but it doesn't matter because the 36Mp sensor hasn't resolved any more detail anyway.
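If you do go the "shoot lower-Mp and upsize" route, the resize itself is trivial; here is a sketch using Pillow instead of Photoshop. The file names and the 36Mp target are just placeholders.

```python
import math
from PIL import Image  # pip install Pillow

def upsize_to_megapixels(path, target_mp, out_path):
    """Upsample an image to roughly target_mp megapixels, keeping the aspect ratio."""
    img = Image.open(path)
    w, h = img.size
    scale = math.sqrt(target_mp * 1e6 / (w * h))
    big = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    big.save(out_path)

# e.g. blow a 12Mp frame up to roughly D800-sized 36Mp
upsize_to_megapixels("d700_shot.jpg", 36, "d700_shot_36mp.jpg")
```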

A related analogy is scanning photos. Good prints are usually done at 300dpi. When scanning prints, it does not make sense to scan much higher than that because you won't gain anything. Scanners are capable of 4800dpi, or even 7200dpi and beyond. If you scan a print at 7200dpi you will get a really huge image, but with no more detail than if you had scanned it at 4800dpi or lower. You could have scanned it at 600dpi and you wouldn't notice any difference. The 7200dpi scan is a waste of time and disk space.

Another common argument is that a sensor with lots of megapixels allows more cropping possibilities. Again, that is true only if you are not diffraction-limited. Otherwise you could just shoot with a lower-Mp camera, upsize the image, then crop, and it will make no difference in terms of detail.

This is why I have absolutely no interest in the D800, or in insanely high-Mp APS-C cameras like the D7100, K-3 and A6000. I shoot mostly landscape. I stop down to f/11 and sometimes even to f/22. At those f-stops these cameras are just a waste of space, time and processing power. Again, a 36Mp full frame camera does not make sense unless you mostly shoot at f/5.6 and wider. A 24Mp APS-C camera is stupid unless you mostly shoot at f/5.6 and wider. Manufacturers keep increasing sensor resolution instead of improving noise performance because most photographers are gullible. Megapixels sell.

Having said that, do not be afraid to stop down if the shot calls for it. Even 4Mp of effective resolution is a lot if you print at reasonable sizes. And since most people never print at all, 4Mp for web viewing is GIGANTIC!

For a more comprehensive explanation of the effects of diffraction refer to this article: http://www.luminous-landscape.com/tutorials/resolution.shtml

Shoot and shop wisely. 🙂

Debunking Equivalence

What is equivalence? If you haven't heard this term used in photography before, don't bother; you didn't miss anything. (Part two is here)

If you are curious, though, it simply means that different formats or sensor sizes require different settings in order to produce "the same", or equivalent, images. Usually, equivalence proponents use the 35mm full frame sensor as the "reference standard". For example, for an m43 sensor and a full frame sensor to have the same angle of view (AoV), the m43 has to use a 25mm lens and the full frame a 50mm lens, because the m43 sensor is smaller; four times smaller in area, to be exact. It doesn't end there. Since a 25mm lens has a shorter focal length than a 50mm, there will be differences in depth of field (DoF). The shorter 25mm has to shoot at f/4 to get the same DoF as the 50mm at f/8.
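The arithmetic behind those numbers is just the crop factor applied twice: once to the focal length for the angle of view, and once to the f-number for the depth of field. A quick sketch (the helper name is mine; the only input is the 2x m43 crop factor):

```python
def ff_equivalent(focal_mm, f_number, crop_factor):
    """Full-frame focal length and f-number giving the same AoV and DoF
    as the given lens on a smaller format."""
    return focal_mm * crop_factor, f_number * crop_factor

eq_focal, eq_f = ff_equivalent(25, 4, crop_factor=2.0)
print(f"m43 25mm f/4  ->  FF {eq_focal:.0f}mm f/{eq_f:.0f}")  # FF 50mm f/8
```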

There are other “parameters” involved in this “equivalence”. For more details, refer to this article in dpreview: http://www.dpreview.com/articles/2666934640/what-is-equivalence-and-why-should-i-care

That dpreview article is funnily entitled "What is equivalence and why should I care". Should you really care about equivalence? Most photographers don't. Real photographers' shooting techniques vary depending on the camera they have with them. Give a photographer a mobile phone and he will capture fantastic images without pretending that he is carrying a DSLR. I own a mobile phone, several point-and-shoot cameras, a few m43 bodies, an APS-C and a full frame camera. I know exactly what each of them is capable of and I shoot accordingly. I don't expect shallow DoF from my iPhone, so every time I shoot portraits with it I am careful that the background does not distract from the main subject. Here is an example of how you can capture professional-looking portraits with a simple iPhone 3GS: https://fstoppers.com/editorial/iphone-fashion-shoot-lee-morris-6173.

Bottom line is, gear does not matter. If gear does not matter, equivalence does not matter.

But let's not stop there. There is more to that equivalence article. To be precise, there is a lot of incorrect information in it that is very misleading if you are not careful. The biggest piece of misinformation that equivalence proponents spread in forums is that of "total light captured". I will try to debunk equivalence in the next few paragraphs.

For the sake of example, let's compare an m43 and a full frame (FF) sensor. By now you should already be aware that a FF sensor is four times larger in area than an m43 sensor. The m43 crop factor is therefore 2x. It follows that to shoot "the same image" we have to use different lenses and different f-stops, like so:

m43: 25mm at f/5.6
FF: 50mm at f/11

This will result in the same AoV and DoF. Now what about the rest of the exposure triangle? This is where equivalence-fu starts becoming really stupid. The proponents insist that you can use the same shutter speed for both m43 and FF and still arrive at the same image. They insist that the same shutter speed must be used so that both images show the same "blurring" due to subject motion (ROFL!!!). The example above then becomes:

m43: 25mm, f/5.6, 1/125s
FF: 50mm, f/11, 1/125s

Wait, doesn't that underexpose the FF image? Indeed it does. By two stops, to be exact! Didn't I say it was stupid? In what world are two images, two stops apart, considered "the same"? One is obviously darker. Much darker. Equivalence proponents must have something up their sleeves 🙂 You probably guessed it already. They say that you can bump up the ISO of the full frame shot so that it ends up with the same brightness as the m43 shot! So now the example becomes:

m43: 25mm, f/5.6, 1/125s, ISO 100
FF: 50mm, f/11, 1/125s, ISO 400

Seriously?!!! Let's be very clear about this. Bumping up the ISO does not increase light. ISO has absolutely no effect on exposure. Learn about that here. So why do equivalence-fu proponents suggest that this ISO bump makes both images equivalent? Their reasoning is quite simple and stupid: because both sensors have gathered "the same total amount of light"!!! Recall that each stop of exposure means twice the amount of light. Since an m43 sensor is four times smaller than a FF sensor, underexposing the FF by two stops (4x less light per unit area) still results in the same TOTAL light captured by each sensor. If that isn't stupid then I don't know what is.

Let's discuss this further using a simple experiment. Suppose we have an m43 camera and we shoot a scene with a 25mm lens. We can produce a full frame equivalent image of the same scene, with the same AoV, using the same m43 camera by stitching four shots taken with a 50mm lens. Refer to the illustration below:

Screen Shot 2014-09-28 at 10.31.50 pm

As you can see, the smaller single-shot image captured with the 25mm lens looks exactly the same as the larger stitched image, which is equivalent to what a full frame sensor would have captured. The narrower AoV of the 50mm lens means we need four shots stitched together to arrive at the same AoV as the 25mm shot. Again, this shows that a FF sensor is four times larger than an m43 sensor. Same AoV, same DoF, but different image sizes due to the different sensor sizes.

Now let's be stupid for a while and assume that equivalence is correct 🙂 For the single-shot image and the stitched image to have the same total amount of captured light, we have to underexpose each of the four individual shots used to stitch the larger image by two stops. Since these four images are now much darker, we have to bump their ISO by two stops to arrive at the same brightness as the single-shot image. At this point we have two "equivalent" images: the smaller, properly exposed m43 image and a larger full frame image produced by stitching four underexposed m43 shots.

Common sense will tell you that the larger stitched image is every bit inferior to the single-shot image. Two stops inferior, to be exact. If you sample a quarter chunk of that larger image it will always turn out much worse than the reference m43 shot. Take a quarter chunk from the top, bottom, sides or center, and every single one of them will look much, much inferior to the original properly exposed m43 shot. We can therefore say that the larger image is inherently inferior to the single-shot m43 image. So how can equivalence proponents honestly say that the underexposed FF shot is "the same" as a properly exposed m43 shot? You don't need half a brain to realise that this is plainly stupid.

The stupidity does not stop there though. The equivalence-fu followers have something else to support their "theory". They suggest that if you print or view the smaller, properly exposed m43 image and the larger, severely underexposed FF image at the same size, they will look exactly the same. Well, maybe they would look the same up to a certain extent. Recall that when you view or print an image at a smaller size than its original, the effects of downsampling kick in and result in less perceived noise: https://dtmateojr.wordpress.com/2014/05/19/megapixel-hallucinations/. This, however, has absolutely nothing to do with light gathering. As we have shown in our example, the underexposed FF image would be much, much darker than the reference m43 image were it not for the ISO bump. Equivalence proponents are using image size to circumvent the destructive effects of underexposure, and they think that image size and light are one and the same. Image size has nothing to do with light. A 41Mp Nokia phone camera has a larger image size than a full frame 36Mp D800 even though it captures much, much less total light. This is why, if you are not careful, these equivalence-fu "photographers" will easily mislead you.

Let's take this circus show to a higher level. Assume that total light and image size really are equivalent and related. In that case we could, in a sense, NOT increase the ISO of the underexposed full frame image but instead downsample it to the same size as the m43 image, and the two should then have the same brightness, right? After all, the same total amount of light has now been projected onto the same image area, which should result in the same exposure (total light over total area). But we know this doesn't work, because downsampling or upsampling has no relationship to total light, and that is why the downsampled FF image remains two stops darker. So how can equivalence proponents honestly equate total light and image size? :-O

So now we know that equivalence-fu relies on resampling to work around underexposure. Does this always work? No, it doesn’t. If you recall the discussion in the “Understanding Exposure” article that was linked above, bumping up the ISO does not increase light. It only increases gain. The analogy was that of the process of boiling water. Increasing ISO is like boiling water. Boiling pushes water to the top of the container but it does not increase the amount of water. If you underexpose, you will come to a point where there is no more light being captured. It’s like a container with no water. Bumping the ISO or boiling a container that does not contain water does absolutely nothing. Image noise is more pronounced in darker areas. Underexposure will only worsen the noise in those darker areas. When you have no signal, there is nothing to resample. Downsampling will not always save you.

The nasty effects of bumping up the ISO cannot be ignored. Increasing the ISO also results in hot pixels, banding and other nasty artifacts. Why do you think cameras are limited in how high you can set the ISO sensitivity? Why can't we bump the ISO indefinitely? Because the truth is, high ISO sucks regardless of sensor size. Imagine an ISO 6400 shot from an m43 Olympus E-M5 compared to an ISO 25600 shot from a full frame Nikon D800. How much worse does it get if you compare a point-and-shoot camera with a 5x crop factor to that D800? Five stops of underexposure is A LOT, and really bad. I mean really, try underexposing a night shot on your D800 by five stops then bump it up in Photoshop. Crash and burn, baby!

If you think that's bad then consider shooting slide film. How big is a sheet of film for an 8×10 view camera compared to a measly 35mm frame? For the sake of argument let's just say that the size difference is 5x. Do you really believe that if I shoot Fuji Velvia on 35mm, then underexpose Velvia on the 8×10 camera by five stops and push it during development, the images will look "the same"? If this were negative film you could maybe get away with it, but don't even attempt that kind of circus act with slide film. Slide film is very unforgiving when it comes to exposure. Five stops is almost the entire usable dynamic range of slide film!!! If a photographic "theory" fails miserably with film then that "theory" is simply wrong. In the case of equivalence, it's bullshit, plain and simple.

So, to answer that dpreview article's question, "should you care about equivalence?" Not if it's wrong and stupid.

Update:

I can’t believe that people keep on spreading this nonsense. Here’s another funny equivalence-fu fauxtographer: equivalence for embeciles

Examine his illustration of the effect of the two different apertures, f/8 and f/4. He is totally oblivious to the effect of focal length on light intensity. Note that although the f/8 and f/4 lenses here have the same physical aperture diameter, the longer focal length of the f/8 lens causes the light to be spread over a wider projection onto the sensor. The net effect is that each sensel behind the longer f/8 lens receives far fewer photons than the sensels behind the shorter f/4 lens. The result is underexposure, which is seen as a darker image. Two stops (or 4x less light) of underexposure, to be exact. This obviously corresponds to noisier sensel output and therefore a noisier image.
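The two-stop figure follows straight from the definition of the f-number (focal length divided by aperture diameter) and from the fact that image-plane brightness falls with the square of the f-number. A quick check, assuming for illustration a 25mm and a 50mm lens sharing the same 6.25mm physical aperture:

```python
import math

def f_number(focal_mm, aperture_mm):
    # f-number is focal length divided by physical aperture diameter
    return focal_mm / aperture_mm

def stops_difference(n1, n2):
    """Exposure difference in stops; image-plane brightness scales as 1/N^2."""
    return 2 * math.log2(n1 / n2)

n_long = f_number(50, 6.25)   # f/8
n_short = f_number(25, 6.25)  # f/4
print(f"f/{n_long:.0f} vs f/{n_short:.0f}: "
      f"{stops_difference(n_long, n_short):.0f} stops less exposure")  # 2 stops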

How can two images with different exposures be equivalent?! Such an idiotic explanation is the result of an epic failure to understand very basic photography. Exposure is totally independent of sensor size. The same f-stop delivers the same number of photons per unit sensor area regardless of imaging format. Always. Same f-stop means same exposure, meaning the same brightness.

Not your typical photography blog.