Category Archives: Equipment

Debunking the Myth of Full Frame Superiority (Again)

 

(Illustration: a full frame lens circle in BLUE projecting onto a full frame sensor in GREEN, with the central APS-C crop area in RED)

Equivalence does not prove the superiority of full frame (FF) over crop sensor (APS-C, m43) cameras. In fact, equivalence shows that equivalent photos are equivalent in terms of angle of view, depth of field, motion blur, brightness and so on, including QUALITY. Yes, equivalent photos are equivalent in quality.

Refer to the illustration above, where we have a full frame lens (BLUE) on a full frame sensor (GREEN). Some full frame cameras can now shoot in crop mode (APS-C), where only the area in RED is used. When a crop sensor LENS is used on a full frame sensor, only the area in RED is illuminated; the rest of the GREEN area is in complete darkness and therefore does not contribute to light gathering. The same is true when the full frame camera is forced to shoot in crop mode with a full frame lens: the camera automatically crops the image to the area in RED and the rest of the GREEN area is thrown away.

As per the illustration above, we can see that the central portion of the full frame sensor is really just an APS-C sensor. If a crop sensor were indeed inferior in terms of light gathering, then logic would tell us that the center of every full frame shot should be noisier than the rest of the frame. We know this is not true. The light coming in from the lens spreads evenly throughout the entire frame. Total light is spread over total area. As a matter of fact, the central portion is the cleanest part, because lenses are not perfect and become worse as you move away from the center.
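To put numbers on that, here's a quick Python sketch (my own toy figures, not measurements from any camera): the light density, and hence the noise character, is identical for the whole frame and for its central crop.

    # Toy model: light falls uniformly across the frame, so the
    # photon density (exposure) is identical everywhere.
    total_light = 4000.0      # arbitrary units hitting the full frame
    ff_area = 36 * 24         # full frame sensor area in mm^2
    crop_area = 24 * 16       # APS-C crop area in mm^2

    density = total_light / ff_area     # photons per mm^2, uniform
    crop_light = density * crop_area    # light falling on the RED area

    print(density)                  # same density everywhere in the frame
    print(crop_light / crop_area)   # identical: the center is not noisier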

Now suppose we have a full frame 50mm lens in front of a full frame sensor. Notice that the crop mode area (RED) does not capture the entire image projected by the 50mm lens. The angle of view is narrower than full frame (GREEN). There are two ways we can capture the entire 50mm view while staying in crop mode:

  1. move backward
  2. use a wider lens (approx. 35mm, i.e. 50mm divided by the 1.5x crop factor)

Both methods allow the RED area to capture more of the scene. A wider scene means more light is gathered. It means that if we force the RED area (APS-C) to capture exactly the same image as the GREEN area (FF) we will be forced to capture more light! More light means less noise! In equivalent images, APS-C is actually cleaner than full frame!!!

For example, if we go with option #2 using a wider lens, equivalent photos would be something like this:

RED (APS-C): 35mm, 1/125, f/5.6, ISO 100
GREEN (FF): 50mm, 1/125, f/8, ISO 200

This is exactly what equivalence theory proposes. The difference in f-stop ensures that they have the same depth of field at the same distance to subject. The faster f-stop for APS-C (f/5.6) guarantees that TWICE as much light per unit area is gathered. Notice that the full frame is now forced to shoot at a higher ISO to compensate for the reduced light coming through the narrower f/8 aperture. So if we use the same camera for both shots, say a Nikon D810 shooting in normal mode with a 50mm lens and in crop mode with a 35mm lens, the crop mode image will be noticeably better. In equivalent photos, crop mode comes out one stop ahead. In equivalent photos, the smaller sensor results in BETTER quality!!!
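Here's a minimal sketch of the arithmetic behind that example, treating the crop area as half the full frame area the way this post does. Exposure (light per unit area) scales with t/N², and total light is exposure times sensor area:

    from math import log2

    def light_per_area(t, N):
        """Photometric exposure is proportional to t / N^2."""
        return t / (N * N)

    apsc = light_per_area(1 / 125, 5.6)   # RED area:   35mm, f/5.6, ISO 100
    ff = light_per_area(1 / 125, 8.0)     # GREEN area: 50mm, f/8,   ISO 200

    print(log2(apsc / ff))   # ~1.0: crop mode gets one stop more light per area

    # Total light = exposure x sensor area (FF ~2x the crop area here):
    print((ff * 2) / apsc)   # ~1.0: both frames collect the same total light

The second print anticipates the next paragraph: the per-area advantage and the area disadvantage cancel out.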

The story does not end here though. The full frame shot has twice the area of the crop mode shot. If both images are printed at the same size, the crop mode shot will need to be enlarged more than the full frame shot. Enlargement results in loss of quality and the full frame image will have an advantage over the crop mode image. Whatever the crop mode shot gained by the increase in gathered light is lost by a proportional amount during enlargement. In the end, both full frame and crop mode shots result in exactly THE SAME print quality!!!

Bottom line: full frame will not give you cleaner images than crop sensors, assuming they use the same sensor technology (e.g. the D800, D7000, K5). They will produce equivalent print quality when forced to shoot equivalent images.

Full frame superiority busted!


Expose To The Right (ETTR) Is Obsolete


(Lake Moogerah — underexposed by two stops to save the highlights and exposure adjusted in Lightroom)

Expose to the right (ETTR) is a technique that became popular when digital photography started to pick up. I will not discuss the details of this technique but I’ll try to cover the basics. Before you continue make sure that you understand the concept of exposure. If you are a bit rusty on this topic then consider reading my previous article on understanding exposure.

The goal of ETTR is to maximise your sensor’s capacity to capture data. We know that every stop of exposure is equivalent to doubling the amount of captured light. So imagine a glass that is half full of water; increasing the amount of water by a “stop” would mean filling the glass to the brim. If we translate this into photography using the zone system, zone IX is practically half of the entire capacity of your sensel, zone VIII is a quarter, zone VII is an eighth, and so on. That’s basically how camera sensors work. You want to maximise the capacity of your sensels by forcing them to fill up with photons. It means you always want to have a zone IX, otherwise you are wasting half of your data.
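For the sake of illustration, here's a tiny sketch of that argument, assuming an idealised linear sensel and the zone assignments above:

    # Idealised linear sensel: each zone down from the top
    # represents half of the remaining well capacity.
    full_well = 1.0  # normalised sensel capacity

    for steps_down, zone in enumerate(["IX", "VIII", "VII", "VI", "V"], start=1):
        fraction = full_well / 2 ** steps_down
        print(f"zone {zone}: {fraction:.4f} of capacity")

Run it and you get 0.5 for zone IX, 0.25 for zone VIII and so on: the top stop really does hold half the data.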

So why am I saying that this technique is obsolete? After all, digital capture is still digital capture. Sensels still respond linearly to incoming photons. What has changed?

Digital photography has advanced tremendously in the past five to eight years. In the early days, shooting beyond ISO 400 was a nightmare. I remember shooting with my Canon G10: I would never dare shoot at ISO 400 unless I really had to, because my images at ISO 400 were just too noisy and almost unusable. At present, point and shoot cameras can easily shoot at ISO 6400 with very acceptable results.

What does this mean? Recall that ISO has nothing to do with exposure. Bumping up the ISO does not increase the amount of captured photons. In fact, bumping up the ISO forces your camera to underexpose. For example, if your camera has a base ISO of 100 and you are shooting in broad daylight, your exposure would be something like ISO 100, f/16, 1/125s (the basic sunny 16 rule). If you increase your ISO to 200 then the exposure becomes f/16 at 1/250s. At ISO 400 you get f/16 at 1/500s. Every time you bump your ISO you are forcing underexposure. That means your sensels receive half the number of photons for every stop of increment in ISO. What I’m trying to say is that the fact that you can shoot at ISO 6400 is testament to the amazing ability of modern sensors to handle extreme underexposure. If any of this does not make sense, please go back to the link I provided in the first paragraph and read up on the basic concepts of photographic exposure.
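A quick sketch of that sunny 16 arithmetic: every doubling of ISO halves the shutter time and, with it, the photons actually reaching each sensel. The brightening happens after capture:

    # Sunny 16: at f/16, shutter ~ 1/ISO (1/125s is the usual rounding
    # for ISO 100). Doubling ISO halves the light actually captured.
    base_iso, base_shutter = 100, 1 / 125

    for stops in range(4):
        iso = base_iso * 2 ** stops
        shutter = base_shutter / 2 ** stops
        relative_light = 1 / 2 ** stops
        print(f"ISO {iso}: f/16 at 1/{round(1 / shutter)}s -> {relative_light:.3f}x light")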

Again, every time you increase your ISO beyond the base ISO, you are forcing your camera to underexpose. Bumping up the ISO is the exact opposite of ETTR. It follows that ETTR only ever makes sense when shooting at base ISO. Performing ETTR at higher ISOs is stupid.

Let me explain that previous paragraph with examples and (stupid) counterexamples. Let’s consider shooting during an overcast day. A typical exposure at base ISO of 100 might go f/5.6 at 1/125s. Performing a stop of ETTR would mean shooting at f/5.6 at 1/60s or you can choose to maintain your shutter speed at 1/125s but shoot at f/4 instead. Look what happens when you bump the ISO to 200: the exposure would now read f/5.6 at 1/250s. If you perform a stop of ETTR at ISO 200 you get f/5.6 at 1/125s which is basically the original aperture and shutter speed combo at ISO 100. Your image might be brighter because of the increase in ISO but the truth is that you have NOT performed ETTR at all! It’s the same exposure of f/5.6 at 1/125s. If you want real ETTR at ISO 200 then you would have to shoot at f/5.6 at 1/60s (same as ISO 100 ETTR) but because of your ISO bump your final image loses dynamic range in the highlights! ETTR plus ISO bump is like taking a step forward and two steps backward. It’s stupid.
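If it helps, here's a small sketch that compares settings purely in terms of photometric exposure, EV = log2(N²/t), with ISO deliberately left out, which is exactly how the paragraph above reasons:

    from math import log2

    def ev(N, t):
        """Exposure value from aperture N and shutter time t; ISO plays no part."""
        return log2(N * N / t)

    base = ev(5.6, 1 / 125)        # metered exposure at base ISO 100
    fake_ettr = ev(5.6, 1 / 125)   # ISO 200 "ETTR": same settings, same light
    real_ettr = ev(5.6, 1 / 60)    # genuine ETTR: a stop more light on the sensor

    print(base - fake_ettr)   # 0.0 -> no extra light was captured at all
    print(base - real_ettr)   # ~1.06 -> about one stop more light (1/60 is rounded)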

Again, with cameras capable of shooting natively at ISO 6400 and some of them even going as high as ISO 746123550123656128561249571243865 (looking at you Sony A7S) we know that modern sensors are now very very good at handling FORCED underexposure. But then the other side of the story is that modern sensors are still VERY BAD at handling OVERexposure. Once you clip your highlights there is no way you can recover that data. FACT!

Losing data is not the only problem with overexposure. When you overexpose by force, it is very difficult to judge tones and colours just by looking at your LCD. When you ETTR, your blue skies look bright grey, you lose the sunset colours, your shadows become dull. Of course you might be able to “fix it later in the computer”, but you have practically deprived yourself of the ability to properly judge how your image might look and to make decisions (i.e. adjust exposure) while you still can.

Again, let’s consider the facts:

  1. Cameras can shoot natively at high ISOs which means they can handle extreme underexposure.
  2. Cameras are very bad at handling overexposure.

Is ETTR really worth it? Shouldn’t you give your camera the best fighting chance by utilising its strengths instead of gambling with its weaknesses?

The ETTR ship has sailed. Move on.

Debunking Equivalence Part 2

In my previous post Debunking Equivalence I covered in detail the major flaws of this concept called “equivalence”. Mind you, not everything in equivalence is wrong. Equivalence in field of view and depth of field make total sense. What does not make sense is the equivalence in exposure. This “exposure equivalence” is what full frame fanbois sell to unsuspecting gear heads. It is supposed to prove that full frame is superior to APS-C, m43 and smaller sensor cameras. 

In this post, I will use basic math to debunk the myth. Just enough math that I learned when I was in first grade — seriously.

Recall the equivalence comparison between m43 and full frame:

m43: 25mm, f/5.6, 1/125s, ISO 100

FF: 50mm, f/11, 1/125s, ISO 400

Ignore the ISO settings for now and concentrate on the f-stop and shutter speed settings. The reason, they say, that f/5.6 at 1/125s is equivalent to f/11 at 1/125s is that both gather the same total amount of light by virtue of the difference in focal length: the longer 50mm lens has the same entrance pupil diameter as the 25mm lens. The difference in focal length is of course proportional to the sensor size.
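That claim is easy to check. The entrance pupil diameter is simply focal length divided by f-number, and the two combinations come out essentially equal (quick sketch):

    def entrance_pupil_mm(focal_length_mm, f_number):
        """Entrance pupil diameter = focal length / f-number."""
        return focal_length_mm / f_number

    print(entrance_pupil_mm(25, 5.6))   # m43: ~4.46 mm
    print(entrance_pupil_mm(50, 11))    # FF:  ~4.55 mm (f/11 is the rounded stop)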

Now let us use arbitrary units of measure and consider the ratio X/Y, where X is the total amount of light and Y is the sensor size. Supposing that for m43 we have the ratio 4/8, a full frame sensor (4x area), according to equivalence, would have a ratio 4/32. Again:

m43 = 4/8 vs FF = 4/32

So total light is constant at 4 units, and the denominators are the sensor sizes: 8 and 32 units for m43 and full frame respectively. Still with me? Obviously, they are not the same. Not in any known universe. This is why, for the same amount of light, the full frame will come out two stops underexposed. And this is why equivalence fanbois will insist that an increase in ISO is necessary; the full frame shot is very dark! Now we know that bumping the ISO does not increase the amount of light but will make the image brighter. I’m not sure how to represent that in numbers because nothing has really changed in terms of light. The ISO bump is fake. It’s a post-capture operation that does not change the exposure or the captured light. Furthermore, an ISO bump introduces noise, and that is why equivalence forces the comparison to be performed at the same print size. This method of cheating does miracles for the fanbois. Let’s see how it works:
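In these toy units the gap is easy to quantify; exposure is the light-to-area ratio, and the full frame figure comes out two stops lower:

    from math import log2

    m43_exposure = 4 / 8    # total light / sensor area, toy units
    ff_exposure = 4 / 32

    print(log2(m43_exposure / ff_exposure))   # 2.0 -> FF is two stops underexposed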

If we agree to compare at the same print size of 16 units, we now have 

m43: 4/8 upsampled to 4/16

FF: 4/32 downsampled to 4/16

Magic!!! They are now the same! They are equivalent! This is true for any print size therefore equivalence is correct!!! Therefore full frame is superior because at the same f-stop it will have gathered more light! 

Well, not so fast! The amount of light did not change. It was constant at 4 units. The apparent change in signal performance was not due to light but due to resampling. Do not equate resampling to total light. They are not the same and are completely independent of each other. Resampling is like changing your viewing distance. I can make my crappy shot look cleaner simply by viewing it from farther away. Did I change total light by moving away from the image? Stupid, isn’t it?

That is the very naive reasoning behind equivalence. Not only is the conclusion stupid, but the assumption here is that there is absolutely NO NOISE! Noise is ever present; it is proportional to the incoming light and is also a property of the sensor itself. Let’s see what happens when we introduce noise.

Supposing that noise is 1 unit for every 8 units of sensor area (so the bigger sensor accumulates proportionally more noise), we now have:

m43: signal = 4/8, noise = 1/8

FF: signal = 4/32, noise = 4/32

Therefore the signal to noise ratio (SNR) are as follows:

m43 = 4:1

FF = 4:4 or 1:1

The full frame is obviously inferior! It makes sense because it was underexposed by two stops (4x)!!! If you boost the signal by increasing the ISO you are boosting noise as well. In low light situations where noise is more pronounced, a 4:2 SNR for m43 will be 4:8 for full frame. There is more noise than signal in the full frame image! At 4:4 SNR for m43, full frame is at 4:32. There is nothing but noise in the full frame. You just can’t underexpose indefinitely and bump the ISO! That doesn’t work in all situations. This is why images at higher ISOs look bad. There is more noise than signal in low light situations. Yet, equivalence fanbois will try to convince you that ISO 6400 on m43 is the same as two stops underexposure plus ISO 25600 in full frame. It’s not. 
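Here is the same toy model in code. Noise is taken to be proportional to sensor area, one noise unit per eight area units, which simply replicates the post's arithmetic above; it is not a physical sensor model:

    def snr(total_light, area):
        """SNR in the post's toy units: noise = 1 unit per 8 units of area."""
        noise = area / 8
        return total_light / noise

    print(snr(4, 8))    # m43: 4.0, i.e. 4:1
    print(snr(4, 32))   # FF:  1.0, i.e. 4:4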

So again, the equivalence fanbois could not accept this fact. At the sensor level, equivalence has made the full frame look really bad. What can they do? Cheat again! Force the comparison to use the same print size. At a print size of 16 units, noise is increased or decreased in proportion to how much you upsample or downsample. We have:

m43: signal = 4/16, noise = 2/16

FF: signal = 4/16, noise = 2/16

So now the SNR for both are equal at 4:2! Can you see how they manipulate the numbers? They are using image size (number of pixels) to circumvent noise and stupidly equate this to light gathering. The total amount of light has not changed. How could anyone possibly attribute the changes in SNR due to resampling to light? It does not make any sense at all! Look closely though, because this SNR is for the entire image. Most of the signal will be concentrated in the brighter areas; in the darker areas noise will show its teeth. In instances where full frame is severely underexposed (SNR 4:32) there is no saving it. It will look like crap. m43, on the other hand, will happily chug along with 1:1 SNR or better.

This is why, when you start comparing two full frame cameras with different resolutions, you will notice variations in SNR results at different print sizes (Megapixel Hallucinations). If the SNR changes even when the sensor size is held constant, then obviously sensor size does not matter. Total light, being proportional to sensor size, does not by itself tell the whole story. What matters is the RATIO of total light to sensor size, otherwise known as EXPOSURE. For SNR to be the same, exposure must be the same for sensors with the same properties (i.e. sensel pitch, efficiency, etc.). Size does not matter.

Equivalence debunked…again!

Olympus: Oops! They Did It Again

I don’t want to be the bearer of bad news but once again Olympus has released a broken camera — the E-M5 II. I discovered from my own testing that the sensor has the same long exposure noise issue that made me return the flagship E-M1. Don’t Olympus read the internet? Do they even listen to their customers?

Looks like the original OMD E-M5 is still the m43 camera to beat if you are into landscape photography. And with that I have canceled my order for the 25mm f/1.8 lens. I can’t see myself investing in Olympus equipment anymore if they keep on releasing broken cameras. 

Be warned. 

Full Frame Mirrorless Don’t Make Sense … Yet

I’m not saying they are bad or useless. They just don’t make any sense yet. Here’s why …

The biggest, if not the only, reason for going mirrorless is size reduction. Everything else, good or bad, about mirrorless is just a consequence of size reduction.

Let’s talk about the good stuff first. 

Although a lot of DSLR shooters hate electronic viewfinders (EVFs), they are actually very useful tools. An EVF lets you see what’s hitting the sensor before you press the shutter button, and that is a very good thing. No more chimping after every shot. You also get a horizon level indicator, histogram, focus peaking and automatic brightness boost, among other goodies. I like EVFs. In fact, I feel that I have become a slave to the EVF. When I shoot with my DSLR I always have to double check whether the camera is giving me the correct exposure values. Not so with EVFs: what I see is what I get. EVFs are a necessary “evil” for going mirrorless. There’s no other way around it unless you want a rangefinder like the Leica.

Lens adaptability is another good consequence of going mirrorless. With the mirror gone, lenses can be mounted much closer to the sensor, resulting in smaller lenses. It also means that, with a cheap adapter, you can mount just about any full frame lens regardless of brand. And of course, camera bodies can be made thinner and lighter. That’s size reduction in action.

Without the flapping mirror, the camera is quieter. This is essential when you are into wildlife photography, when you need to be discreet during weddings or funerals, or even when out in the streets.

Now on to the disadvantages of going mirrorless.

Mediocre battery life is first on my list. The EVF and the sensor, among other electronics, need to be running all the time otherwise you can’t see anything. This reduces battery life considerably. And since the camera body is much smaller, batteries also need to be smaller which doesn’t really help with the problem. 

EVFs aren’t there yet in terms of speed. When you are shooting sports, the lag can be irritating and/or disastrous. 

Ask a DSLR fanboi and he can tell you more about why going mirrorless is bad. 

The bottom line is that you probably do not want to go mirrorless for its disadvantages, and every advantage you get is just a direct consequence of size reduction. To reduce the size of camera bodies, manufacturers needed to remove the mirror and use an EVF. To reduce the size of the body and lenses, they needed to bring the lens mount closer to the sensor. Size reduction is the whole point of going mirrorless.

So with all the pros and cons aside, why am I saying that full frame mirrorless cameras do not make any sense yet? Because they are still HUGE! Yes, the cameras are smaller but the large sensor requires large lenses which defeats the purpose of going smaller. You are better off buying a full frame DSLR instead because the size difference isn’t really that much and with a DSLR you get a more ergonomic grip that helps carry those hernia-inducing heavy lenses. 

So when is full frame mirrorless going to make sense? When manufacturers stop upgrading their DSLRs and you have no other option but to buy mirrorless. This is a big marketing problem, especially for giants like Nikon and Canon. I can see Sony heading in that direction. When was the last time Sony upgraded a DSLR? It’s very risky, but this is exactly what Olympus did. They totally stopped upgrading their DSLRs, went mirrorless and never looked back. Yes, they lost loyal customers, but in return they gained new converts, because mirrorless m43 makes total sense. They are small.

Again, full frame mirrorless does not make any sense yet. Get a full frame DSLR instead. If you really want to go small, buy m43 or APS-C mirrorless cameras. My personal recommendation would be the m43 format because the mount is standard, which means you have more lens choices. And did I say they are small? That’s the whole point of going mirrorless — size reduction.

Canon 5DS: Why I think it’s crazy

Fifty megapixels! Fifty! On a measly 35mm sensor!

Let me be very blunt about this. It’s a stupid idea.

How much resolution do you really need? Here’s a hint: how close do you have to sit in front of your 60″ full HD TV before you start noticing the individual dots? Have you EVER printed any of your shots as big as a 60″ screen? Do you know that a full HD TV is only 2Mp? And if you are not satisfied with full HD, how about a 60″ 4K TV with an effective resolution of only 8Mp?

Let me ask you: have you ever been to a drive-in movie? They project a standard definition movie onto a very, very big screen, and yet we do not really notice the pixelation. That’s an image smaller than 2Mp “printed” as wide as an entire street block! I have never heard of anyone complaining about drive-in movie resolution.
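The viewing-distance point can be made with standard visual acuity arithmetic: with 20/20 vision you resolve roughly one arcminute, so pixels vanish once they subtend less than that. A quick sketch for the 60-inch full HD TV example:

    from math import atan, degrees

    # 60" 16:9 full HD panel: width ~52.3", 1920 pixels across.
    panel_width_in = 52.3
    pixels_across = 1920
    pixel_size_in = panel_width_in / pixels_across

    # Angle each pixel subtends at a few couch distances:
    for distance_ft in (3, 6, 9):
        arcmin = degrees(atan(pixel_size_in / (distance_ft * 12))) * 60
        print(f"{distance_ft} ft: each pixel subtends {arcmin:.2f} arcmin")

    # Past roughly 8 ft the pixels fall below 1 arcmin and blend together.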

What does 50Mp imply? It means that you will need more storage and more computer processing power. It also means that you can no longer just shoot without a tripod at the usual shutter speeds because even very minor movements become very obvious in your photos. Not only that. It also means that you need to be shooting with premium lenses. At 50Mp, you need a lens that peaks at f/5.6 at the minimum. In case you are unaware, you also need to shoot at f/5.6 or wider to fully utilise the entire 50Mp because shooting narrower than that results in a massive drop in effective resolution. So, if you can’t shoot at f/8 and beyond then the 5DS is really not meant for landscape photography. Some say that the 5DS is meant for studio work where you shoot portraiture and fashion. Who prints their portrait shots that big? If you do want to print at billboard sizes then you only need 2Mp (as in our drive-in movie example). 
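The f/5.6 claim can be sanity-checked with the usual Airy disk approximation (diameter ≈ 2.44 × wavelength × f-number). The 5DS packs 8688 pixels across a 36mm-wide sensor, a pitch of roughly 4.1µm, and diffraction blur grows past a couple of pixel widths not far beyond f/5.6. A rough sketch:

    # Airy disk diameter ~ 2.44 * wavelength * f-number.
    wavelength_um = 0.55            # green light, ~550 nm
    pixel_pitch_um = 36000 / 8688   # ~4.14 um on the 5DS

    for f_number in (2.8, 4, 5.6, 8, 11, 16):
        airy_um = 2.44 * wavelength_um * f_number
        print(f"f/{f_number}: Airy disk ~{airy_um:.1f} um"
              f" (~{airy_um / pixel_pitch_um:.1f} pixel widths)")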

It seems to me that manufacturers struggle to improve sensor performance so that they can increase the megapixel count. In the case of Canon, why don’t they concentrate on improving their mediocre sensors instead of engaging in the megapixel slugfest? 

It doesn’t make any sense but I’m sure somebody will find some good use for such massive images. Go ahead and buy the 5DS if it scratches your itch.

Understanding the Effects of Diffraction (Part 2)

This article is a continuation of my previous post on understanding the effects of diffraction. That article caused a long-winded discussion because some people decided to dig deeper into diffraction without fully understanding some fundamental concepts. Add to that some bogus resolution graphs, and the discussion went from bad to shite.

In the interest of further learning, let’s go back to the very basic principles behind lenses and light.

LENS ABERRATION

The main purpose of a photographic lens is to focus light onto the camera’s sensor. Ideally, an incoming point light source is projected onto the sensor as a point. The reality is not quite that simple. Light rays near the center of the lens pass straight through the glass without any problems. However, light rays that do not pass through the center have to bend so as to meet the other light rays at the same focal point. The farther a light ray is from the center, the more sharply it has to bend. The problem is that lenses are not perfect. These imperfections, or aberrations, result in imprecise bending of light. Light rays near the edges of the glass don’t quite hit the focal point: some of them fall just before the sensor and some fall behind it. The point light source is then projected onto the sensor no longer as a point but as something much larger. Refer to the simple illustration below, where the red ray hits the focal point, the blue ray almost hits it, but the green ray, which is very near the edge, misses it completely.

(Illustration: red, blue and green rays bending through a lens towards the focal point)

There are ways to work around lens aberrations. The most common method is by closing down the pupil to eliminate light rays that are near the edges of the lens. In photography, this is what happens when you close down or “stop down” the aperture. In the illustration below, the narrow pupil has eliminated the out-of-focus green ray leaving only the red and blue rays that are more focused.

(Illustration: a narrower pupil blocking the stray green ray)

The result is a smaller projected point that is truer to the original point source. The overall image projected onto the sensor will look sharper. The lens’s performance has therefore improved, because closing down the pupil means only the center of the glass is used. The downside is that since the pupil has eliminated some light rays, the resulting image will also look darker. The bottom line is that you trade brightness for sharpness.

DIFFRACTION

As discussed above, closing down the pupil improves the performance of the lens. As far as aberrations are concerned, you can make the pupil as narrow as you want and the lens performance will keep improving.

There is a problem, though, that is not the fault of the lens itself. It is a property of light: light changes direction when it hits edges or passes through holes. This change of direction is called diffraction. Diffraction is ever present as long as something is blocking light. So although a narrower pupil improves lens performance, light goes out of control when it passes through a narrow opening. The narrower the pupil, the more the light changes direction uncontrollably. It’s like squeezing a hose with running water: the tighter you squeeze, the wider the water sprays. In the end, light rays will still miss the focal point and we are back to the same dilemma, where our point light source is projected at a much bigger size on the sensor.

DIFFRACTION-LIMITED LENS

We are now ready to understand what a diffraction-limited lens means.

Recall that depending on the size of the pupil, light rays that are farther away from the center of the lens will miss the focal point thus causing a point light source to be projected much larger on the sensor. Let’s assume for now that this point source is projected with a much larger diameter, X, on the sensor.

Now forget for a moment that the given lens has problems and assume it is perfect, with no aberrations whatsoever. Recall that at the same pupil size, light diffracts (spreads) in a way that causes some of the light rays to miss the focal point, again resulting in a larger projected point, this time of diameter Y.

So now we have two different sizes of the projected point: size X caused by lens aberrations and size Y caused by diffraction (assuming that the lens was perfect).

If X is smaller than Y then the lens is said to be diffraction-limited at that pupil size or aperture. This means that the main contributor to image softness is diffraction instead of lens imperfections. The optimum performance of the lens is the widest aperture in which X remains smaller than Y. Simple.

If X is larger than Y, the problem becomes a bit more complicated. It means that lens imperfections are more dominant than diffraction, so you can choose to make the aperture narrower to improve lens performance. Stopping down will of course decrease X but will increase Y. It becomes a delicate balancing act between lens imperfection and diffraction. This is a common problem with cheap kit lenses. At larger apertures, kit lenses have aberrations so bad that the images they produce look soft. So you stop down to f/8 or f/11, and by then diffraction kicks in, causing the image to soften again. It’s a lose-lose situation. That is why premium lenses are expensive: they are sharp wide open, where diffraction is negligible.

A lens that is diffraction-limited at f/5.6 is considered very good. A lens that is diffraction-limited at f/4 is rare. A lens that is diffraction-limited at f/2.8 is probably impossible.
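To tie X and Y together, here's a toy sketch. Y uses the standard Airy approximation; X is a made-up aberration blur that shrinks as you stop down (the k/N falloff and the constant are purely illustrative, not measured from any real lens). The lens counts as diffraction-limited once Y exceeds X:

    # Y: diffraction blur from the Airy approximation (550nm light).
    # X: hypothetical aberration blur that improves as the pupil narrows.
    def diffraction_blur_um(N, wavelength_um=0.55):
        return 2.44 * wavelength_um * N

    def aberration_blur_um(N, k=40.0):   # k is an invented lens-quality constant
        return k / N

    for N in (2.8, 4, 5.6, 8, 11, 16):
        X, Y = aberration_blur_um(N), diffraction_blur_um(N)
        tag = "diffraction-limited" if Y > X else "aberration-limited"
        print(f"f/{N}: X={X:.1f} um, Y={Y:.1f} um -> {tag}")

With these made-up numbers the crossover lands around f/5.6, which is exactly what the paragraph above calls a very good lens.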

Let’s summarise the discussion:

1. Lenses are not perfect. Aberrations will cause the light rays to miss the focal point thus resulting in loss of sharpness.
2. Lens performance improves as you stop down the aperture.
3. Diffraction is a property of light that forces it to change direction when passing through holes. This causes light rays to miss the focal point thus resulting in loss of sharpness.
4. Diffraction is always present and worsens as you stop down the aperture.
5. A lens is diffraction-limited at a given aperture if the effects of aberrations are less pronounced compared to the effects of diffraction at that aperture.

That’s it for now. In the next article, we will discuss the effects of lens aberrations and diffraction on sensors.