Category Archives: Photography

Anything Photography Related

Debunking the Myth of Full Frame Superiority (Again)

 

(Illustration: a full frame lens image circle in BLUE, the full frame sensor in GREEN, and the APS-C crop area in RED)

Equivalence does not prove the superiority of full frame (FF) over crop sensor (APS-C, M43) cameras. In fact, equivalence shows that equivalent photos are equivalent in terms of angle of view, depth of field, motion blur, brightness, etc., including QUALITY. Yes, equivalent photos are equivalent in quality.

Refer to the illustration above where we have a full frame lens (BLUE) on a full frame sensor (GREEN). Some full frame sensors are now capable of shooting in crop mode (APS-C) where only the area in RED is used. When a crop sensor LENS is used on a full frame sensor, only the area in RED is illuminated; the rest of the GREEN area is in complete darkness and therefore does not contribute to light gathering. The same is true when a full frame camera is forced to shoot in crop mode with a full frame lens: the camera automatically crops the image to the area in RED and the rest of the GREEN area is thrown away.

As per the illustration above, we can see that roughly the central half of the full frame sensor is really just an APS-C sensor. If a crop sensor were indeed inferior in terms of light gathering, then logic would tell us that the center of every full frame shot should be noisier than the rest of the frame. We know this is not true. The light coming in from the lens spreads evenly throughout the entire frame. Total light is spread over total area. As a matter of fact, the central half is the cleanest part because lenses are not perfect and get worse as you move away from the center.

Now suppose we have a full frame 50mm lens in front of a full frame sensor. Notice that the crop mode area (RED) does not capture the entire image projected by the 50mm lens. The angle of view is narrower than full frame (GREEN). There are two ways we can capture the entire 50mm view while using crop mode:

  1. move backward
  2. use a wider lens (approx 35mm)

Both methods allow the RED area to capture more of the scene. A wider scene means more light is gathered. It means that if we force the RED area (APS-C) to capture exactly the same image as the GREEN area (FF) we will be forced to capture more light! More light means less noise! In equivalent images, APS-C is actually cleaner than full frame!!!

For example, if we go with option #2 using a wider lens, equivalent photos would be something like this:

RED (APS-C): 35mm, 1/125, f/5.6, ISO 100
GREEN (FF): 50mm, 1/125, f/8, ISO 200

This is exactly what the equivalence theory proposes. The difference in f-stop is there to ensure that they have the same depth of field given the same distance to the subject. The faster f-stop for APS-C (f/5.6) guarantees that twice as much light is gathered per unit of sensor area. Notice that the full frame is now forced to shoot at a higher ISO to compensate for the reduced light coming in through the narrower f/8 aperture. So if we use the same camera for both shots, say a Nikon D810, shooting in normal mode with a 50mm lens and in crop mode with a 35mm lens, the crop mode image will be noticeably better. In equivalent photos, crop mode comes out one stop better. In equivalent photos, the smaller sensor results in BETTER quality!!!

The story does not end here though. The full frame shot has twice the area of the crop mode shot. If both images are printed at the same size, the crop mode shot will need to be enlarged more than the full frame shot. Enlargement results in loss of quality and the full frame image will have an advantage over the crop mode image. Whatever the crop mode shot gained by the increase in gathered light is lost by a proportional amount during enlargement. In the end, both full frame and crop mode shots result in exactly THE SAME print quality!!!
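
To make the bookkeeping concrete, here is a rough sketch of the comparison above in Python. It only restates the reasoning in this post: exposure (light per unit area) follows from the f-stop, total light is exposure times sensor area, and printing at the same size enlarges the crop shot more. The sensor areas are illustrative round numbers (the crop area taken as half of full frame), not exact figures.

```python
# A rough sketch of the crop-mode vs full-frame comparison above,
# following the post's own reasoning. Areas are illustrative round numbers.

FF_AREA = 2.0     # full frame sensor area, arbitrary units
CROP_AREA = 1.0   # crop-mode (APS-C) area, roughly half of full frame

def relative_exposure(f_number, ref_f=5.6):
    """Light per unit area relative to f/5.6 at the same shutter speed.
    ISO is ignored on purpose: raising ISO adds no light."""
    return (ref_f / f_number) ** 2

crop_exposure = relative_exposure(5.6)   # 1.0  -> the crop-mode shot
ff_exposure = relative_exposure(8.0)     # ~0.5 -> the full frame shot, one stop less per area

crop_total = crop_exposure * CROP_AREA   # ~1.0 unit of total light
ff_total = ff_exposure * FF_AREA         # ~1.0 unit of total light

print(f"light per area: crop {crop_exposure:.2f} vs FF {ff_exposure:.2f}")
print(f"total light:    crop {crop_total:.2f} vs FF {ff_total:.2f}")

# At the same print size the crop shot is enlarged about twice as much as
# the full frame shot, which (per the argument above) gives back the
# one-stop per-area advantage, so both prints end up equivalent.
```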

Bottom line: full frame will not give you cleaner images than crop sensors, assuming they share the same sensor technology (e.g. D800, D7000, K-5). They will result in equivalent print quality if forced to shoot equivalent images.

Full frame superiority busted!


Expose To The Right (ETTR) Is Obsolete


(Lake Moogerah — underexposed by two stops to save the highlights and exposure adjusted in Lightroom)

Expose to the right (ETTR) is a technique that became popular when digital photography started to take off. I will not discuss the technique in detail but I’ll cover the basics. Before you continue, make sure that you understand the concept of exposure. If you are a bit rusty on this topic then consider reading my previous article on understanding exposure.

The goal of ETTR is to maximise your sensor’s capacity to capture data. We know that every stop of exposure is equivalent to doubling the amount of captured light. So imagine a glass that is half full of water; increasing the amount of water by a “stop” would mean filling the glass to the brim. If we translate this into photography using, say, the zone system, this means that zone IX is practically half of the entire capacity of your sensel, zone VIII is a quarter, zone VII is a 1/8th and so on. That’s basically how camera sensors work. You would want to maximise the capacity of your sensels by forcing them to fill up with photons. It means that you would always want to have a zone IX, otherwise you are wasting half of your data.
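
To make that halving-per-stop idea concrete, here is a quick sketch. The zone numbers and fractions come from the paragraph above; the full-well capacity is an arbitrary number picked for illustration, not the spec of any particular sensor.

```python
# Halving per stop: the fraction of a sensel's capacity used by the
# brightest tone in each zone, assuming (as above) that each zone below
# the clipping point holds half the light of the zone above it.

FULL_WELL = 100_000  # full-well capacity in electrons; arbitrary, illustrative

for stops_below_clipping, zone in enumerate(["X", "IX", "VIII", "VII", "VI", "V"]):
    fraction = 0.5 ** stops_below_clipping
    print(f"zone {zone:>4}: {fraction:7.2%} of capacity (~{int(FULL_WELL * fraction):,} electrons)")

# zone    X: 100.00% of capacity (~100,000 electrons)
# zone   IX:  50.00% of capacity (~50,000 electrons)
# zone VIII:  25.00% of capacity (~25,000 electrons)
# ...
```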

So why am I saying that this technique is obsolete? After all, digital capture is still digital capture. Sensels still respond linearly to incoming photons. What has changed?

Digital photography has advanced so much in the past five to eight years. In the early days, shooting beyond ISO 400 was a nightmare. I remember shooting with my Canon G10 and I would never dare shoot at ISO 400 unless I really had to. All my images at ISO 400 were just too noisy and were almost unusable. At present, point and shoot cameras can easily shoot at ISO 6400 with very acceptable results.

What does this mean? Recall that ISO has got nothing to do with exposure. Bumping up the ISO does not increase the amount of captured photons. In fact, bumping up the ISO forces your camera to underexpose. For example, if your camera has a base ISO of 100 and you are shooting in broad daylight, your exposure would go something like ISO 100, f/16, 1/125s (basic sunny 16 rule). If you increase your ISO to 200 then the exposure would go f/16 at 1/250s. At ISO 400 you have f/16 at 1/500s. Every time you bump your ISO you are forcing underexposure. That means your sensels receive half the number of photons for every stop of increment in ISO. What I’m trying to say is that the fact that you can shoot at ISO 6400 is a testament to the amazing ability of modern sensors to handle extreme underexposure. If any of this does not make sense then please go back to the link I provided in the first paragraph. Read and understand the basic concepts of photographic exposure.
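
Here is the same sunny 16 arithmetic written out. The shutter speeds follow the rule quoted above (at f/16 in bright sun, shutter speed is roughly 1/ISO), and the photon figure is purely a relative number showing the halving per ISO stop, not a measurement.

```python
# Sunny 16: in direct sun at f/16, shutter speed ~ 1/ISO.
# Each ISO stop halves the shutter time and therefore halves the number
# of photons the sensels actually receive.

BASE_ISO = 100

for iso in [100, 200, 400, 800, 1600, 3200, 6400]:
    photons_vs_base = BASE_ISO / iso   # fraction of the light captured at base ISO
    print(f"ISO {iso:>4}: f/16 @ 1/{iso}s -> {photons_vs_base:.3f}x the light of ISO {BASE_ISO}")
```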

Again, every time you increase your ISO beyond the base ISO, you are forcing your camera to underexpose. Bumping up the ISO is the exact opposite of ETTR. It follows that ETTR only ever makes sense when shooting at base ISO. Performing ETTR at higher ISOs is stupid.

Let me explain that previous paragraph with examples and (stupid) counterexamples. Let’s consider shooting on an overcast day. A typical exposure at a base ISO of 100 might go f/5.6 at 1/125s. Performing a stop of ETTR would mean shooting at f/5.6 at 1/60s, or you can choose to maintain your shutter speed at 1/125s and shoot at f/4 instead. Look what happens when you bump the ISO to 200: the exposure would now read f/5.6 at 1/250s. If you perform a stop of ETTR at ISO 200 you get f/5.6 at 1/125s, which is basically the original aperture and shutter speed combo at ISO 100. Your image might be brighter because of the increase in ISO but the truth is that you have NOT performed ETTR at all! It’s the same exposure of f/5.6 at 1/125s. If you want real ETTR at ISO 200 then you would have to shoot at f/5.6 at 1/60s (same as ISO 100 ETTR), but because of your ISO bump your final image loses dynamic range in the highlights! ETTR plus an ISO bump is like taking one step forward and two steps backward. It’s stupid.
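
The same counterexample in a few lines of code. The settings are the ones from the paragraph above, and the function simply counts stops of light reaching the sensor relative to the metered f/5.6 at 1/125s; ISO is left out of the calculation on purpose.

```python
import math

def stops_vs_metered(f_number, shutter_s, ref_f=5.6, ref_shutter_s=1 / 125):
    """Stops of light at the sensor relative to f/5.6 @ 1/125s.
    ISO is deliberately absent: brightening in camera adds no light."""
    return math.log2((ref_f / f_number) ** 2 * (shutter_s / ref_shutter_s))

print(stops_vs_metered(5.6, 1 / 125))  #  0.0   ISO 100, metered
print(stops_vs_metered(5.6, 1 / 60))   # ~+1.1  ISO 100 with one stop of ETTR
print(stops_vs_metered(5.6, 1 / 125))  #  0.0   ISO 200 "ETTR", same light as the metered ISO 100 shot
print(stops_vs_metered(5.6, 1 / 60))   # ~+1.1  ISO 200 real ETTR, but with less highlight headroom
```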

Again, with cameras capable of shooting natively at ISO 6400 and some of them even going as high as ISO 746123550123656128561249571243865 (looking at you Sony A7S) we know that modern sensors are now very very good at handling FORCED underexposure. But then the other side of the story is that modern sensors are still VERY BAD at handling OVERexposure. Once you clip your highlights there is no way you can recover that data. FACT!

Losing data is not the only problem with overexposure. When you overexpose by force, it is very difficult to judge the tones and colours just by looking at your LCD. When you ETTR, your blue skies look bright grey, you lose the sunset colours, your shadows become dull. Of course you might be able to “fix it later in the computer” but you have practically deprived yourself of the ability to properly judge how your image will look and to make decisions (i.e. adjust exposure) while you still can.

Again, let’s consider the facts:

  1. Cameras can shoot natively at high ISOs which means they can handle extreme underexposure.
  2. Cameras are very bad at handling overexposure.

Is ETTR really worth it? Shouldn’t you give your camera the best fighting chance by utilising its strengths instead of gambling with its weaknesses?

The ETTR ship has sailed. Move on.

Debunking Equivalence Part 2

In my previous post Debunking Equivalence I covered in detail the major flaws of this concept called “equivalence”. Mind you, not everything in equivalence is wrong. Equivalence in field of view and depth of field makes total sense. What does not make sense is the equivalence in exposure. This “exposure equivalence” is what full frame fanbois sell to unsuspecting gear heads. It is supposed to prove that full frame is superior to APS-C, m43 and smaller sensor cameras.

In this post, I will use basic math to debunk the myth. Just enough math that I learned when I was in first grade — seriously.

Recall the equivalence comparison between m43 and full frame:

m43: 25mm, f/5.6, 1/125s, ISO 100

FF: 50mm, f/11, 1/125s, ISO 400

Ignore the ISO settings for now. Let us concentrate on the f-stop and shutter speed settings. The reason, they say, that f/5.6@125 is equivalent to f/11@125 is that both gather the same amount of light by virtue of the difference in focal length: the longer 50mm lens at f/11 has the same entrance pupil diameter as the 25mm lens at f/5.6. The difference in focal length, of course, is proportional to the sensor size.
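
If you want to check the entrance pupil claim yourself, the pupil diameter is simply the focal length divided by the f-number:

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    """Entrance pupil diameter = focal length / f-number."""
    return focal_length_mm / f_number

print(entrance_pupil_mm(25, 5.6))  # ~4.46 mm, the m43 settings above
print(entrance_pupil_mm(50, 11))   # ~4.55 mm, the full frame settings above
# Nearly identical pupils, which is why the two setups are said to gather
# the same total light from the same scene.
```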

Now let us use arbitrary units of measure and consider the ratio X/Y, where X is the total amount of light and Y is the sensor size. Supposing that for m43 we have the ratio 4/8, a full frame sensor (4x area), according to equivalence, would have a ratio 4/32. Again:

m43 = 4/8 vs FF = 4/32

So total light is constant at 4 units and the denominators, 8 and 32, are the sensor sizes for m43 and full frame respectively. Still with me? Obviously, they are not the same. Not in any known universe. This is why, for the same amount of light, the full frame will come out two stops underexposed. And this is why equivalence fanbois will insist that an increase in ISO is necessary; the full frame shot is very dark! Now we know that bumping the ISO does not increase the amount of light but only makes the image brighter. I’m not sure how to represent that in numbers because nothing has really changed in terms of light. An ISO bump is fake. It’s a post-capture operation that does not change the exposure or the captured light. Furthermore, an ISO bump amplifies noise and that is why equivalence forces the comparison to be performed at the same print size. This method of cheating does miracles for the fanbois. Let’s see how it works:

If we agree to compare at the same print size of 16 units, we now have 

m43: 4/8 upsampled to 4/16

FF: 4/32 downsampled to 4/16

Magic!!! They are now the same! They are equivalent! This is true for any print size therefore equivalence is correct!!! Therefore full frame is superior because at the same f-stop it will have gathered more light! 

Well, not so fast! The amount of light did not change. It was constant at 4 units. The apparent change in signal performance was not due to light but due to resampling. Do not equate resampling to total light. They are not the same and are completely independent of each other. Resampling is like changing your viewing distance. I can make my crappy shot look cleaner simply by viewing it from farther away. Did I change total light by moving away from the image? Stupid, isn’t it?

That is the very naive reasoning behind equivalence. Not only is the conclusion stupid, but the assumption here is that there is absolutely NO NOISE! Noise is ever present; it is proportional to the incoming light and is also a property of the sensor itself. Let’s see what happens when we introduce noise.

Supposing that noise is 1 unit for every 8 units of sensor size (so it scales with sensor area), we now have:

m43: signal = 4/8, noise = 1/8

FF: signal = 4/32, noise = 4/32

Therefore the signal to noise ratio (SNR) are as follows:

m43 = 4:1

FF = 4:4 or 1:1

The full frame is obviously inferior! It makes sense because it was underexposed by two stops (4x)!!! If you boost the signal by increasing the ISO you are boosting noise as well. In low light situations where noise is more pronounced, a 4:2 SNR for m43 will be 4:8 for full frame. There is more noise than signal in the full frame image! At 4:4 SNR for m43, full frame is at 4:16. There is far more noise than signal in the full frame. You just can’t underexpose indefinitely and bump the ISO! That doesn’t work in all situations. This is why images at higher ISOs look bad. There is more noise than signal in low light situations. Yet, equivalence fanbois will try to convince you that ISO 6400 on m43 is the same as two stops of underexposure plus ISO 25600 on full frame. It’s not.

So again, the equivalence fanbois could not accept this fact. At the sensor level, equivalence has made the full frame look really bad. What can they do? Cheat again! Force the comparison to use the same print size. At a print size of 16 units, noise will be increased or decreased in proportion to how much you upsample or downsample. We have:

m43: signal = 4/16, noise = 2/16

FF: signal = 4/16, noise = 2/16

So now the SNRs for both are equal at 4:2! Can you see how they manipulate the numbers? They are using image size (number of pixels) to circumvent noise and stupidly equate this to light gathering. The total amount of light has not changed. How could anyone possibly attribute the changes in SNR due to resampling to light? It does not make any sense at all! Look closely though, because this SNR is for the entire image. Most of the signal will be concentrated in the brighter areas. In the darker areas, noise will show its teeth. In instances where the full frame is severely underexposed (SNR 4:16) there is no saving it. It would look crap. M43, on the other hand, will happily chug along with 1:1 SNR or better.
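
For completeness, here is the whole bookkeeping from this post in a few lines. It only reproduces the numbers and assumptions used above (total light fixed at 4 units, noise proportional to sensor size, and noise scaled linearly when resampling to a common print size); it is not a physical noise model.

```python
# Reproduce the ratio bookkeeping used in this post. All units are arbitrary.

TOTAL_LIGHT = 4
M43_AREA, FF_AREA, PRINT_SIZE = 8, 32, 16

def sensor_level(area, noise_per_8_units=1):
    """Signal and noise at the sensor, with noise proportional to sensor size."""
    return TOTAL_LIGHT, noise_per_8_units * area / 8

def at_print_size(signal, noise, area):
    """Scale noise linearly with the resampling factor to the common print size."""
    return signal, noise * PRINT_SIZE / area

for name, area in [("m43", M43_AREA), ("FF ", FF_AREA)]:
    s, n = sensor_level(area)
    ps, pn = at_print_size(s, n, area)
    print(f"{name}: sensor SNR {s}:{n:g} -> print SNR {ps}:{pn:g}")

# m43: sensor SNR 4:1 -> print SNR 4:2
# FF : sensor SNR 4:4 -> print SNR 4:2
```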

This is why, when you start comparing two full frame cameras with different resolutions, you will notice variations in SNR results at different print sizes (Megapixel Hallucinations). If the SNR changes even when the sensor size is held constant then obviously sensor size does not matter. Therefore total light, being proportional to sensor size, does not by itself tell the whole story. What matters is the RATIO of total light to sensor size, otherwise known as EXPOSURE. For SNR to be the same, exposure must be the same for sensors with the same properties (i.e. sensel pitch, efficiency, etc.). Size does not matter.

Equivalence debunked…again!

Olympus: Oops! They Did It Again

I don’t want to be the bearer of bad news but once again Olympus has released a broken camera — the E-M5 II. I discovered from my own testing that the sensor has the same long exposure noise issue that made me return the flagship E-M1. Doesn’t Olympus read the internet? Do they even listen to their customers?

Looks like the original OMD E-M5 is still the m43 camera to beat if you are into landscape photography. And with that, I have cancelled my order for the 25mm f/1.8 lens. I can’t see myself investing in Olympus equipment anymore if they keep on releasing broken cameras.

Be warned. 

Full Frame Mirrorless Don’t Make Sense … Yet

I’m not saying they are bad or useless. They just don’t make any sense yet. Here’s why …

The biggest, if not the only, reason for going mirrorless is size reduction. Everything else, good or bad, about mirrorless is just a consequence of size reduction.

Let’s talk about the good stuff first. 

Although a lot of DSLR shooters hate electronic viewfinders (EVFs), they are actually very useful tools. EVFs allow you to see what’s hitting the sensor before you hit the shutter button and that is a very good thing. No more chimping after every shot. You also get a horizon level indicator, histogram, focus peaking and automatic brightness boost among other goodies. I like EVFs. In fact, I feel that I have become a slave to the EVF. When I shoot with my DSLR I always have to double check whether my camera is giving me the correct exposure values. Not so with EVFs. What I see is what I get. EVFs are a necessary “evil” for going mirrorless. There’s no other way around it unless you want a rangefinder like the Leica.

Lens adaptability is another good consequence of going mirrorless. Getting rid of the mirror means lenses can be mounted much closer to the sensor, resulting in smaller lenses. It also means that with a cheap adapter you could mount just about any full frame lens regardless of brand. Of course this also means that camera bodies can be made thinner and lighter. That’s size reduction in action.

Without the flapping mirror, the camera is quieter. This is essential when you are into wildlife photography or when you need to be discreet at weddings or funerals or even out in the streets.

Now on to the disadvantages of going mirrorless.

Mediocre battery life is first on my list. The EVF and the sensor, among other electronics, need to be running all the time, otherwise you can’t see anything. This reduces battery life considerably. And since the camera body is much smaller, batteries also need to be smaller, which doesn’t really help with the problem.

EVFs aren’t there yet in terms of speed. When you are shooting sports, the lag can be irritating and/or disastrous. 

Ask a DSLR fanboi and he can tell you more about why going mirrorless is bad. 

The bottom line is that you probably do not want to go mirrorless for its disadvantages, but every advantage you get is just a direct consequence of size reduction. To reduce the size of camera bodies they needed to remove the mirror and use an EVF. To reduce the size of the body and lenses they needed to bring the lens mount closer to the sensor. Size reduction is the whole point of going mirrorless.

So with all the pros and cons aside, why am I saying that full frame mirrorless cameras do not make any sense yet? Because they are still HUGE! Yes, the cameras are smaller but the large sensor requires large lenses which defeats the purpose of going smaller. You are better off buying a full frame DSLR instead because the size difference isn’t really that much and with a DSLR you get a more ergonomic grip that helps carry those hernia-inducing heavy lenses. 

So when is full frame mirrorless going to make sense? When manufacturers stop upgrading their DSLRs and you have no other option but to buy mirrorless. This is a big marketing problem especially for giants like Nikon and Canon. I can see Sony heading in that direction. When was the last time Sony upgraded a DSLR? It’s very risky but this is exactly what Olympus did. They totally stopped upgrading their DSLRs, went mirrorless and never looked back. Yes, they lost loyal customers but in return they gained new converts because mirrorless m43 makes total sense. They are small.

Again, full frame mirrorless do not make any sense. Get a full frame DSLR instead. If you really want to go small, buy m43 or APS-C mirrorless cameras. My personal recommendation would be the m43 format because the mount is standard which means you have more lens choices. And did I say they are small? That’s the whole point of going mirrorless — size reduction. 

The Many Faces of Lake Moogerah

I discovered Lake Moogerah by accident. I was driving towards Warwick, a city southwest of Brisbane, when I stumbled upon this magical place. Since then, I have been camping at and taking photos of the location. The spot never disappoints. I find something new every time I visit.

Over the past two weeks I have shot Lake Moogerah twice, and I could not help but notice how quickly it changes. There is no better way to show that than with sample shots of this beautiful place.

(Photo: Lake Moogerah at sunset)

That’s Lake Moogerah during sunset when the sun is just kissing the horizon. A few minutes later, the warm light is replaced by fiery clouds:

(Photo: Lake Moogerah under fiery clouds after sunset)

If you stay until it gets dark and wait for the moon to rise, you’ll get warm light again. This one was taken when the moon was just above the horizon:

(Photo: Lake Moogerah at moonrise)

At close to 11PM, when the moon is high in the sky, you get much cooler colours and it looks something like this:

(Photo: Lake Moogerah under a high moon, close to 11PM)

Notice the stars hiding behind the clouds. 🙂

And that’s Lake Moogerah in four shots.

Easiest Way to Get a Good Shot

(Photo: ship)

Here is a very simple tip if you want to capture nice photos: find ONE subject and isolate it from everything else. That’s it.

Why do you think that shallow depth-of-field portrait shots look nice? It’s not just because of the creamy/blurry background but because shallow DoF isolates the subject from any background distraction. If the background is simple and non-distracting you do not need shallow DoF to get a good portrait shot. Studio shots, where the photographer has full control of the environment, are normally shot at f/5.6 or f/8 or even f/16 because the subject is already isolated.


The main reason n00bish shots look crap is that beginners tend to cram everything into the frame. This one goes out especially to the n00b landscape photographers who would sell their kidneys just to get the widest lens possible. They want it ultra-mega-wide so they can include EVERYTHING in the frame. That’s the quickest way to get a crappy shot. STOP.

(Photo: boat)

Find a subject that you like and have a really good look at it, then ask yourself: what is it about this subject that I really like? Is it the entire subject or just some parts of it? Is it because the subject is in a particular environment? If you can’t answer those simple questions then your shot will look crap.

(Photo: jetty)

Once you find your subject, concentrate on it. Isolate it from everything. You may have to zoom in or get closer to your target. Do everything you can to single out the subject, then take the shot. Now check your LCD and assess whether you like your framing. If you think it’s too empty or too simple then find something that will complement the subject. Zoom out or try a different angle. Just make sure, when you do want to include more elements in the frame, that they enhance the subject and do NOT conflict with it.

(Photo: the Three Sisters)

So again, the quickest way to get a nice shot is to pick ONE subject and make sure that nothing else is in the frame. Go out and try it. You’ll thank me.