To understand what the excitement is all about, we first have to understand a little bit about bits.
Within each digital camera is a chip called a sensor. Despite its small size, this sensor is the most expensive and complicated part of the entire digital camera. The sensor is the device that collects and processes the light that is used to create an image; it takes the place of the film that is used in traditional cameras. Each sensor is composed of an array (rectangle) of tiny pixels (photodiodes). Each pixel is composed of a light-sensitive semiconductor material.
Light, in the form of photons (tiny packets of light), arrives at each pixel. The light, from an area slightly larger than the active part of the pixel, is focused by a microlens. The light then passes through a color filter array (also known as a Bayer filter). Finally, the light enters the pixel. At this point, the light interacts with the semiconductor material of the pixel to create an electrical charge.
The pixel now has an electrical charge. Of course, the same thing was happening with all of the other pixels in the sensor. For example, in the case of a ten megapixel camera, there would be approximately ten million pixels each with its own electrical charge waiting to be processed into a beautiful image.
Now that the pixels have all those electrical charges, the work of processing those charges into meaningful information that can be used to create an image begins. Figure 1 shows a simplified flowchart of the raw process and subsequent processing.
Let's go over the steps in Figure 1 that create a raw file.
Currently, the raw files of many digital cameras are twelve bits. That means that each pixel can register 2^12 = 4,096 levels of light intensity (after conversion by the ADC). In other words, each pixel can render 4,096 shades. Traditionally, 0 represents pure black and 4,095 represents pure white. As you go from 0 to 4,095, the shades go from dark to light. Previously, it was mentioned that some pixels measured red light, some green, and some blue. Therefore, there are 4,096 possible shades of red, 4,096 of green, and 4,096 of blue. When the Bayer interpolation does its magic to calculate a color for each pixel, it uses the color information for each pixel and its neighboring pixels. Since the interpolation is using information from all three colors, there are 4,096^3 = 68,719,476,736 possible colors with a twelve bit raw file.
The raw files of some of the newer cameras are fourteen bit. That means that each pixel can register 2^14 = 16,384 levels of light intensity. Now, when the Bayer interpolation does its magic, there are 16,384^3 = 4,398,046,511,104 possible colors. That is sixty-four times more colors than the twelve bit file!
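The arithmetic above is easy to check with a few lines of Python. This is just a sketch of the calculation in the text; the function names are my own:

```python
# Shades per channel and possible interpolated colors for a given raw bit depth.
def shades(bits):
    """Intensity levels a pixel can register after conversion by the ADC."""
    return 2 ** bits

def possible_colors(bits):
    """Colors available once Bayer interpolation combines red, green, and blue."""
    return shades(bits) ** 3

print(shades(12))            # 4,096 shades per channel
print(possible_colors(12))   # 68,719,476,736 colors
print(shades(14))            # 16,384 shades per channel
print(possible_colors(14))   # 4,398,046,511,104 colors
print(possible_colors(14) // possible_colors(12))  # 64 times as many colors
```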
It turns out that the human eye can only see about 16,000,000 colors. In other words, the human eye cannot tell the difference between many of the extra colors that can be produced from a fourteen bit file. So, if we cannot see all those extra colors, what's the big deal about having fourteen bit files? Well, it turns out that the sensor in the camera plays a dirty little trick on you.
The little trick is that most digital camera sensors are linear devices. What that means is that when the amount of light that reaches a sensor is doubled, the output of the sensor is doubled. The problem starts to reveal itself when we look at bits in conjunction with the dynamic range of the sensor. Dynamic range is a measure of the span of tonal values over which a device (in this case a sensor) can hold detail. In other words, it is the tonal distance from the darkest point at which the device holds detail to the lightest point. Dynamic range is measured in stops of light. When light is increased by one stop, the amount of light is doubled (going in the other direction, it is cut in half). For instance, a photographer may say that he doubled his exposure by opening up the lens by one stop.
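The relationship between stops and light can be expressed in a short sketch, assuming a perfectly linear sensor:

```python
# One stop up doubles the light; one stop down halves it.
# For a linear sensor, the recorded signal scales the same way.
def light_ratio(stops):
    """How much the light changes for a given change in stops."""
    return 2 ** stops

print(light_ratio(1))   # opening up one stop doubles the light
print(light_ratio(-1))  # closing down one stop halves it (0.5)
print(light_ratio(9))   # a nine-stop range spans a 512:1 light ratio
```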
For our purposes, we will assume that you have a camera with a dynamic range of nine stops. The shades that the sensor can render in a file must be spread across those nine stops. The problem is that those shades are not spread evenly across the dynamic range of the camera.
Now, let's do a little analysis for a pixel that will output its data to a twelve bit file. For this analysis, it must be kept in mind that all of the numbers that are being generated represent the process prior to step 7 in Figure 1. In other words, these numbers represent the state of the information before any tonal curves (i.e., gamma or transfer function) have been applied. They do not represent the final file.
Suppose that a sensor was exposed until the pixels that received the most light could accept no more light. In the case of our nine stop dynamic range camera, the sensor would have received nine stops of light. That is to say that the brightest pixels in the sensor would have received nine stops of light. The brightest pixels would be full; these pixels would have reached their full well capacity. Image A in Figure 2 shows such a sensor with the brightest pixels at full well capacity. Now, with a twelve bit ADC, this sensor is capable of rendering 4,096 shades as covered above.
As we move on to analyze the situation shown in Figure 2, the key to understanding what is happening is to remember that, when the exposure is reduced by one stop, the light is reduced by half. Since sensors are linear devices, when the light is reduced by half, the sensor will only be able to render half as many shades.
Now, not all of the pixels received a full nine stops of light. If we ignore the brightest pixels (the ones that received nine stops of light) and look at the pixels that are left, we have the situation shown in Image B in Figure 2. The brightest pixels that are left received eight stops of light (half as much light as the pixels that received nine stops of light). Since sensors are linear and the brightest pixels in Image B received only half as much light as the brightest pixels in Image A, the pixels in Image B would be able to render only half as many shades. Thus, the pixels in Image B rendered only 2,048 shades. Since the pixels in Image A (with nine stops of exposure) rendered 4,096 shades and the pixels in Image B (with eight stops of exposure) rendered only 2,048 shades, the ninth stop of light was responsible for rendering the other 2,048 shades. In other words, the brightest stop of dynamic range (the ninth stop) used up half of all the available shades.
The procedure repeats itself. If we ignore the brightest pixels in Image B (the ones that received eight stops of light) and look at the pixels that are left, we have the situation shown in Image C. The brightest pixels that are left received seven stops of light. Since the brightest pixels in Image C received only half as much light as the brightest pixels in Image B, the pixels in Image C would be able to render only half as many shades. Accordingly, the pixels in Image C rendered 1,024 shades. Since the pixels in Image B (with eight stops of exposure) rendered 2,048 shades and the pixels in Image C (with seven stops of exposure) rendered 1,024 shades, the eighth stop of light was responsible for rendering the other 1,024 shades. In other words, the second brightest stop of dynamic range (the eighth stop) used up one fourth of all the available shades.
At this point, we can see that the two brightest stops render 75% of all the shades the camera is capable of producing. The rest of the images in Figure 2 show that, as we continue to work down the dynamic range, the camera is capable of rendering fewer and fewer shades. Eventually, one stop of light is reached. This last stop of light is capable of rendering only 16 shades.
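The halving argument above can be checked with a short sketch. It assumes the hypothetical nine-stop camera used in the text:

```python
# How a linear sensor allocates shades across the dynamic range:
# each stop down the range gets half the shades of the stop above it.
def shades_per_stop(bits, stops=9):
    """Return (stop, shades) pairs from the brightest stop to the darkest."""
    allocation = []
    remaining = 2 ** bits
    for stop in range(stops, 1, -1):   # brightest stop down to the 2nd
        half = remaining // 2          # this stop takes half of what is left
        allocation.append((stop, half))
        remaining = half
    allocation.append((1, remaining))  # the darkest stop keeps the rest
    return allocation

for stop, n in shades_per_stop(12):
    print(f"stop {stop}: {n} shades")
```

Running this for twelve bits reproduces the numbers in the text: 2,048 shades for the ninth stop, 1,024 for the eighth, and only 16 for the darkest stop.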
The exact same process can be carried out for a fourteen bit sensor. The difference is that the sensor starts off with 16,384 shades. Table 1 summarizes the distribution of the shades across the dynamic range for both twelve and fourteen bit files.
|LIGHT LEVEL|12 Bits|14 Bits|
|9th stop (brightest)|2,048|8,192|
|8th stop|1,024|4,096|
|7th stop|512|2,048|
|6th stop|256|1,024|
|5th stop|128|512|
|4th stop|64|256|
|3rd stop|32|128|
|2nd stop|16|64|
|1st stop (darkest)|16|64|
As can be seen from the chart, the shades are not evenly distributed over the nine stops of dynamic range. More of the shades are allocated to the brightest areas, and far fewer shades are allocated to the darker areas. This causes problems for the shadows -- there are not many shades to render the shadows. This results in less detail in the shadows than in other areas of the image that received more light.
This problem now gets compounded by the human visual system. While the sensor may be a linear device, the human visual system is not. The human visual system is more sensitive to some amounts of light than others. In particular, the human visual system is more sensitive to shadows than highlights. What this means is that increasing the amount of light in a shadow area will have a larger impact on the visual system than increasing the amount of light, by the same percentage, in the highlights. We now have a situation where we have the least amount of data in the area where the visual system is the most sensitive. This is where the fourteen bit file has an advantage. A fourteen bit file has more shades in the shadows than a twelve bit file. In fact, it has four times as many shades (for each of the three colors). This allows a fourteen bit file to render more shadow detail than a twelve bit file.
Now, it is also true that a fourteen bit file has four times as many shades (for each of the three colors) in the highlights as a twelve bit file. However, even a twelve bit file has so many shades in the highlights that the additional shades of a fourteen bit file do not noticeably improve the highlight detail.
In short, one of the biggest advantages of a fourteen bit file is the additional detail in the shadows.
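The shadow advantage can be quantified with the same halving logic; the nine-stop dynamic range is again the text's hypothetical example:

```python
# Shades left for the darkest stop after each brighter stop takes half.
def darkest_stop_shades(bits, stops=9):
    """Shades available in the darkest stop of the dynamic range."""
    return (2 ** bits) // (2 ** (stops - 1))

print(darkest_stop_shades(12))  # 16 shades in a twelve bit file
print(darkest_stop_shades(14))  # 64 shades in a fourteen bit file: four times as many
```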
Generally, colors in an image blend gradually from one color to the next. It is impossible for the human eye to tell where one color stops and the next one picks up. However, in some cases, the transition from one color to the next can actually be seen. This usually occurs in areas of little detail. The result is an image with bands that appear to run across it. This problem is known as posterization (also known as banding). Posterization is highly undesirable, and it is particularly a problem in the shadows.
In essence, posterization occurs when image editing causes too few tones to be spread too far apart. A typical example is when Curves is used to lighten the shadows in an image. Curves takes the original tones and runs the numerical values of the tones through a formula to create the numerical values for the new tones. Figure 3 shows an example of what happens for one set of shadow tones when modified by a particular Curves adjustment. It can easily be seen that the original tones increase only one unit from any tone to the adjacent tone. However, after Curves, the tones are spread farther apart. For example, with the original tones, going from a tone of one to a tone of two gave a one unit increase in tone. However, after editing, the tonal value of one became a value of seven, and the tonal value of two became a value of eleven. Now, the difference between these two tones has become four units. A stronger adjustment would have spread the tones even farther apart. When editing spreads the tones far enough apart, the transition between tones can be seen and posterization occurs.
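Since the exact formula behind Figure 3 is not given, the sketch below uses a simple gamma curve (an assumption standing in for the Curves adjustment) to show how lightening spreads adjacent shadow tones apart:

```python
# Posterization sketch: a lightening curve spreads adjacent shadow tones apart.
# A gamma adjustment stands in for the (unspecified) Curves formula.
def lighten(tone, gamma=0.5, max_tone=255):
    """Map an 8-bit tone through a gamma curve that lifts the shadows."""
    return round(max_tone * (tone / max_tone) ** gamma)

original = list(range(0, 9))               # adjacent shadow tones, one unit apart
edited = [lighten(t) for t in original]    # the same tones after lightening
gaps = [b - a for a, b in zip(edited, edited[1:])]
print(edited)  # the lifted tones
print(gaps)    # every gap is now wider than the original one-unit spacing
```

A stronger adjustment (a smaller gamma here) would spread the tones even farther apart, making the banding more likely to be visible.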
Since a fourteen bit file has more tones than a twelve bit file, the tones in a fourteen bit file are spaced closer together. Thus, editing is less likely to produce posterization in a fourteen bit file than in a twelve bit file. This is particularly important in the shadows, where there are few tonal values to start with.