In the previous article, I discussed the posterization that can occur when tonal values are stretched out (i.e., pulled farther apart). This posterization often occurs when lightening the shadows. However, posterization is not the only problem that degrades an image when its tones are edited. Whenever some tonal values are stretched out, other tonal values must be compressed (i.e., pushed closer together). This is always the case unless other techniques, such as masking or the Blend If sliders, are used to mitigate the compression. For instance, when Curves is used to lighten the shadows by stretching out the tones, either the midtones or the highlights will be compressed, and this compression degrades the image. An example is shown in Figures 1 and 2. Figure 1 shows a curve used to lighten an image. This curve stretches out the shadow tonal values, but it also compresses the highlight values. Figure 2 gives the numeric values of the lightest twenty-six tones (230 -- 255) both before and after editing with a curve like the one in Figure 1. For example, the original value of 230 has been increased to 238, and the original value of 245 has been increased to 248.
An examination of Figure 2 reveals a disquieting problem. What used to be separate tones before editing have been compressed into the same tone after editing. The original tones 247 and 248 have both become 250 after editing. Similarly, the original tones 250 and 251 have both become 252 after editing. This compression of tones is called quantization error and results in a loss of tones and image detail. Compression of the tones into fewer tonal spaces can be seen in Figures 3 and 4. Figure 3 shows a histogram before any image editing. Figure 4 shows the same image after the editing has been performed. The right side of this histogram shows where quantization error has caused some of the pixels to "pile up" in the remaining tonal levels resulting in upward spikes in the histogram.
Quantization error is due to the digital nature of digital images. When image editing is performed, Photoshop runs the digital numbers (e.g., tones) through formulas to determine the new numbers. However, the new numbers have to be rounded off to the nearest digital number (e.g., a new tone of 157.43 would be rounded to 157). Consequently, two or more tones can be rounded off to the same tonal number. The information that is rounded off is thrown away forever. Thus, information is lost in the rounding process and quantization error occurs.
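This rounding can be sketched in a few lines of code. The function below applies a gamma-style lightening curve; the gamma value of 0.7 is a hypothetical choice for illustration, not the actual curve from Figure 1, so the exact numbers differ from Figure 2, but the same collapsing of tones occurs:

```python
def lighten(tone, gamma=0.7):
    """Apply a hypothetical gamma-style lightening curve to an 8-bit tone."""
    # The formula produces a fractional result that must be rounded
    # back to a whole 8-bit tone -- this is where quantization error occurs.
    return round(255 * (tone / 255) ** gamma)

# Group the lightest input tones by the output tone they map to.
results = {}
for tone in range(245, 256):
    results.setdefault(lighten(tone), []).append(tone)
for new, old in sorted(results.items()):
    if len(old) > 1:
        print(f"original tones {old} all become {new}")
```

With this particular curve, the separate tones 246 and 247 both become 249, and 249 and 250 both become 251: distinct tones before editing, identical tones after.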
Now, the loss of tones in this example is not such a big deal since the compression occurred in the highlights. Since the highlights have a large number of tones (as covered in the previous article), the loss of a few tones can be tolerated in either twelve or fourteen bit images (the loss in eight bit JPEG images might be more noticeable). However, if a curve like the one shown in Figure 5 is used, tonal compression can be a much bigger issue. This curve was used to darken an image. The problem is that it compressed the shadows, resulting in a loss of shadow tones. Unfortunately, the shadows have few tones to begin with. Reducing the number of shadow tones through quantization error could result in a noticeable loss of image quality in the shadows.
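The same kind of sketch shows how much harsher the collapse is in the shadows. Again, the gamma value below is a hypothetical darkening adjustment chosen for illustration, not the actual curve from Figure 5:

```python
def darken(tone, gamma=1.4):
    """Apply a hypothetical gamma-style darkening curve to an 8-bit tone."""
    return round(255 * (tone / 255) ** gamma)

# The seven darkest non-black tones are squeezed into just three levels.
print([darken(t) for t in range(1, 8)])  # [0, 0, 1, 1, 1, 1, 2]
```

Because the shadows start with so few tones, losing more than half of them to rounding is far more likely to show up as visible banding than the same loss would be in the highlights.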
The use of fourteen bit images does not stop quantization error. However, it does make it less noticeable. Since there are many more tones in a fourteen bit image than in a twelve bit image, the loss of the tones due to quantization error will be less noticeable in the higher bit image.
A twelve bit color image has three color channels, and each channel has 4,096 shades. Since a color is defined by the values of all three channels, a twelve bit color image has 4,096 × 4,096 × 4,096 = 68,719,476,736 possible colors. However, if a twelve bit image is converted to black and white, the image will have only 4,096 shades of gray. This is because different colors can have the same gray value, as shown in Figures 6 and 7. Figure 6 shows two colors. The first color has the RGB values of 200, 100, 100, and the second has the values of 100, 100, 200. These are clearly distinct colors. However, Figure 7 shows what happens when the colors are converted to grayscale: they have the same tonal value.
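This collapse is easy to reproduce. The conversion below uses a simple channel average for illustration; Photoshop offers several grayscale conversion methods, and a weighted conversion could keep these two particular colors distinct:

```python
def to_gray(r, g, b):
    """Convert an RGB color to a gray tone by averaging the three channels."""
    return round((r + g + b) / 3)

print(to_gray(200, 100, 100))  # 133
print(to_gray(100, 100, 200))  # 133 -- a different color, same gray tone
```

Since many distinct colors land on each gray value, the sixty-eight billion colors of the original file collapse back down to the 4,096 shades of a single channel.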
Since a fourteen bit file has four times as many gray tones as a twelve bit file, the fourteen bit file has an advantage when editing black and white images (i.e., the fourteen bit file will have fewer problems with posterization and quantization error).
Bit depth also plays a role in color space selection. The bits of a digital file must be spread across the entire color space that is used with an image. When wide color spaces are used, the bits must be spread farther apart to cover all of the colors. This increases the possibility of posterization (especially in the shadows). Since a fourteen bit file places the colors closer together, due to the increased number of tones, a photographer can use a larger color space with less risk of posterization.
Sometimes, people try to equate bit depth (e.g., eight bit, twelve bit, or fourteen bit files) with dynamic range. I have heard comments that JPEG files have a smaller dynamic range than raw files because the JPEG files are only eight bit while raw files are usually twelve or fourteen bit. This is incorrect. While it is true that some JPEG files have less dynamic range than files that were converted from raw, the difference has nothing to do with bit depth. Rather, the dynamic range of a camera is determined primarily by the characteristics of the sensor (e.g., the size of the sensor and its microlenses).
What the bit depth does determine is tonal range. Tonal range is the number of tones from the darkest to the lightest tones. A twelve bit file has a tonal range of 4,096 tones, and a fourteen bit file has a tonal range of 16,384 tones. In other words, as covered previously, the larger bit depth files have the tones more closely spaced than lower bit files.
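The arithmetic behind those figures is simple: an n-bit file has 2 to the power of n tones per channel.

```python
# Tonal range grows exponentially with bit depth: 2**bits levels.
for bits in (8, 12, 14):
    print(f"{bits}-bit: {2 ** bits:,} tones")
# 8-bit: 256 tones
# 12-bit: 4,096 tones
# 14-bit: 16,384 tones
```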
The reason that cameras with large dynamic ranges often have large bit depths has to do with posterization. As the dynamic range becomes larger, the tones become spaced farther apart. When the image is edited, the tones will likely become spread even farther apart. If the tones become spread too far apart, posterization occurs. To reduce the possibility of posterization, large dynamic range cameras usually have large bit depths so that the tones are spaced closely together. This produces smooth tonal transitions.
The advantages of fourteen bit files are lost on those who shoot JPEG. JPEG images are converted to eight bit before any image processing is carried out, so they do not benefit from the additional bits.
So, is it worth upgrading to a new camera to get the extra two bits? Each person will have to answer this question for herself. However, if you are not experiencing any significant problems with posterization, quantization error, the use of large color spaces, or a lack of detail in your shadows, the additional bits may not be worth the cost of the upgrade. As for myself, I may buy the next generation of my camera model because I expect a number of upgrades, including a fourteen bit sensor. However, I would not buy the new camera if the move to fourteen bits was the only major improvement.