Optimizing Dynamic Range in Photoshop -- Part I

Article and Photography by Ron Bigelow

www.ronbigelow.com

Dynamic range is a very important aspect of photography. Basically, the dynamic range of a camera or film determines how much of the tonal range of a scene can be captured in an image. Cameras with a large dynamic range capture more of the tonal range of a scene. Conversely, cameras with smaller dynamic ranges are able to capture less of the tonal range.

In general, photographers love dynamic range; however, digital cameras have a problem in this area. Digital cameras often do not output all of the dynamic range that they capture to their image files. In other words, digital cameras (or the computers that convert the images, for those that shoot raw) often throw away part of the dynamic range when the information from the sensor is being processed (why they do this will be covered a bit later). So, we pay a lot of money for our new digital cameras only to have them throw away part of the dynamic range (and the detail that goes with it). That is rather like buying an expensive sports car and then having the auto dealer disconnect three of the engine's cylinders before you drive away.

Luckily, for those that shoot raw, that lost dynamic range can be recaptured during the raw conversion process (sorry, those that shoot JPEG cannot recapture it). This series of articles focuses on two goals: 1) laying a foundation for understanding dynamic range and 2) determining how to optimize the raw conversion process in order to maximize the dynamic range of the resulting file.

What is Dynamic Range

From a practical perspective, dynamic range is the range from the darkest to the lightest tones that maintain accurate detail. This definition contains a very important point: the dynamic range only includes those tones that contain accurate detail. In general, digital cameras are able to produce tones that are not capable of maintaining accurate detail (more about this later). These tones are not much use to a photographer, and they should not be considered part of the dynamic range.

Dynamic range is often measured in stops of light. When light is increased by one stop, the amount of light is doubled (going in the other direction, it is cut in half). For instance, a photographer may say that he doubled his exposure by opening up the lens by one stop. Since each stop equates to a doubling of light, the dynamic range in stops can easily be converted to a light ratio. Table 1 shows a comparison of dynamic range expressed in stops and in light ratios. As can be seen in the table, a camera with a dynamic range of six stops can capture tones such that the lightest tones are sixty-four times brighter than the darkest tones, while a camera with a dynamic range of ten stops can capture tones such that the lightest tones are 1,024 times brighter than the darkest tones.

Table 1: Dynamic Range

Stops    Light Ratio
  1      1:2
  2      1:4
  3      1:8
  4      1:16
  5      1:32
  6      1:64
  7      1:128
  8      1:256
  9      1:512
 10      1:1,024
 11      1:2,048
 12      1:4,096
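
Since each stop doubles the light, the light ratio for a range of n stops is simply 2^n. The short Python sketch below (the function name is my own, purely for illustration) reproduces Table 1:

```python
def stops_to_ratio(stops: int) -> str:
    """Convert a dynamic range in stops to a light ratio.

    Each stop doubles the amount of light, so n stops span a
    ratio of 1 : 2**n between the darkest and lightest tones.
    """
    return f"1:{2 ** stops:,}"

# Reproduce Table 1 for dynamic ranges of 1 through 12 stops.
for n in range(1, 13):
    print(f"{n:>2} stops -> {stops_to_ratio(n)}")
```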

Why Does Dynamic Range Matter

Figure 1: Image with Dynamic Range Issues

One might wonder why dynamic range matters at all. Basically, it is an issue of the human eye versus the camera. In nature, the dynamic range is often very large. This large tonal difference is not a problem for the human eye, which simply adjusts to changes in brightness. As light enters the eye, it passes through the pupil, whose job is to regulate the amount of light entering the eye. The pupil can change its size depending on the intensity of the light. It is able to expand and contract because it is surrounded by the iris, a tissue that connects the pupil to a set of muscles. These muscles cause the pupil to contract or expand in response to changing light intensity. The pupil, the iris, and the associated muscles are responsible for the incredible dynamic range of the human eye. In short, when a person looks at a dark area, the pupil expands and lets in more light. When a person looks at a bright area, the pupil contracts and cuts down the amount of light. Thus, a person can look at a shadow one second and a highlight the next and see detail in both.

On the other hand, the camera has no such ability to adjust. When this is combined with the fact that the dynamic range of many cameras is smaller than the dynamic range often found in nature, a problem occurs: the camera may not be able to capture all of the tones in the scene. This is shown in Figures 1 and 2. Figure 1 shows a desert plant with both very dark and very light tones. Figure 2 shows the image's histogram. The histogram clearly shows that both the darkest and the lightest tones have been clipped because the dynamic range of the camera was smaller than the dynamic range of the scene being photographed.

Figure 2: Clipped Histogram
So, the issue of dynamic range boils down to the fact that the dynamic range of the human eye is usually greater than that of the camera. Thus, the camera may not always be able to capture all of the tones that the human eye can see. When this is the case, some of the tones will be clipped. These tones are lost forever.
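
Clipping like that in Figure 2 is easy to detect programmatically: clipped tones pile up in the extreme bins of the histogram. Here is a minimal sketch (Python with NumPy, assuming an 8-bit grayscale image; the function name and the 1% threshold are my own):

```python
import numpy as np

def clipping_report(image: np.ndarray, threshold: float = 0.01) -> None:
    """Flag shadow and highlight clipping in an 8-bit grayscale image.

    A large share of pixels sitting at the extreme values (0 or 255)
    suggests the scene's dynamic range exceeded the camera's.
    """
    total = image.size
    shadows = np.count_nonzero(image == 0) / total
    highlights = np.count_nonzero(image == 255) / total
    if shadows > threshold:
        print(f"Shadow clipping: {shadows:.1%} of pixels at pure black")
    if highlights > threshold:
        print(f"Highlight clipping: {highlights:.1%} of pixels at pure white")
```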

What Determines Dynamic Range

Figure 3: Pixel Diagram

The pixel is a major factor in determining dynamic range. Figure 3 shows a greatly simplified diagram of a pixel. Light, in the form of photons (tiny packets of light), arrives at the pixel. The photons from an area slightly larger than the active part of the pixel are focused by a microlens. The photons then pass through a color filter array (also known as a Bayer filter). Finally, the photons enter the pixel. At this point, the photons interact with the semiconductor material of the pixel by transferring their energy to the electrons in the valence orbits of the semiconductor molecules. This gives the electrons enough energy to move to the conduction band. A voltage applied to the pixel creates a current that moves the electrons to a place where they are stored until their charge can be measured. The measurement of the electrical charge is used to calculate how much light reached the pixel.

The problem is that a pixel has a limited ability to capture photons and store the electrical charge generated by this process. In essence, a pixel can be compared to a bucket. When it rains, the bucket begins to fill up with raindrops. A person could come along at various points and measure how much water had been captured in the bucket. From this information, the person could determine how much rain had come down. This would be the case until the water level reached the top of the bucket. At that point, the bucket would have reached its maximum capacity and would not be able to collect any more water. Any additional raindrops would just overflow, and the bucket would no longer be able to indicate how much rain had fallen.

A pixel is similar. When light strikes a pixel, an electrical charge accumulates. This electrical charge can be measured, and from this measurement, the amount of light that reached the pixel can be calculated. This is the case until the electrical charge reaches the maximum capacity that can be stored. At that point, the pixel cannot collect any more electrical charge. Any additional photons, in a sense, overflow, and the pixel cannot determine how much additional light was received. When this occurs, the highlights are clipped, and detail is lost. Thus, a pixel that received 100% of the maximum amount of light that it could record, another that received 200%, and yet another that received even more would all give the same reading.
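
This "full bucket" behavior is nothing more than a clamp at the pixel's maximum capacity (its full-well capacity). A tiny sketch in Python, with a made-up capacity figure, makes the point:

```python
FULL_WELL = 50_000  # hypothetical full-well capacity, in electrons

def pixel_reading(photons_captured: int) -> int:
    """Return the stored charge, clamped at the full-well capacity.

    Once the 'bucket' is full, additional photons overflow and go
    unrecorded, so every over-exposed pixel gives the same reading.
    """
    return min(photons_captured, FULL_WELL)

print(pixel_reading(50_000))   # 50000 -- exactly full
print(pixel_reading(100_000))  # 50000 -- clipped; detail lost
print(pixel_reading(150_000))  # 50000 -- clipped; detail lost
```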

Back to the bucket example. Imagine that a very large storm front was moving into the area, and someone wanted to measure the amount of rain from the storm. However, the person was afraid that the bucket would overflow. What could the person do? That's easy; the person could get a taller bucket. The taller bucket could hold more water. Thus, it could measure larger amounts of rain.

Similarly, if a camera designer wants to measure larger amounts of light, she can design a larger pixel. Larger pixels can capture more photons and store larger electrical charges. Thus, increasing the size of a pixel raises the upper end of the dynamic range, which makes pixel size one of the major determinants of dynamic range. This is one of the reasons that medium format cameras often have such large dynamic ranges: their larger sensors allow for much larger pixels than those on DSLRs. Of course, increasing the size of the pixels on a sensor is not as easy as it sounds. The size of the sensor and the signal processing circuitry that is on the sensor limit how big the pixels can be.

The lower end of the dynamic range is determined primarily by noise. It was stated earlier that the dynamic range only includes those tones that contain accurate detail. However, maintaining accurate detail in the darker areas is an issue for two reasons. First, the signal-to-noise ratio (SNR) tends to degrade as the tones get darker. Second, the ability of the human visual system to distinguish detail decreases as the light entering the eye decreases. This is primarily a function of the rods (the cells that detect low light levels) in the human eye. As the light striking the rods decreases, the rods' performance decreases.

So, as the tones get darker, the noise increases in relation to the signal (the image detail), making it increasingly difficult to differentiate the detail from the noise while, at the same time, the eye's ability to distinguish detail is decreasing. At some point, it becomes impossible to differentiate the image detail from the noise. At this point, the lower end of the dynamic range has been reached. This does not mean that a camera cannot record even darker tones. In fact, a camera may be capable of recording several increasingly darker tones. However, these tones are not capable of containing detail that the human eye can distinguish. Thus, these tones are not part of the dynamic range.

So, pixel size and noise are two of the most important factors affecting dynamic range. There are other factors (such as the design of the microlens), but they are generally less significant.
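
These two limits can be combined into a common back-of-the-envelope estimate: the dynamic range in stops is roughly the base-2 logarithm of the full-well capacity divided by the noise floor. The sketch below uses hypothetical numbers purely for illustration:

```python
import math

def dynamic_range_stops(full_well_e: float, noise_floor_e: float) -> float:
    """Estimate dynamic range in stops.

    The full-well capacity (set largely by pixel size) fixes the upper
    end; the noise floor fixes the lower end. Both are in electrons.
    """
    return math.log2(full_well_e / noise_floor_e)

# Made-up values: a large DSLR-style pixel vs. a small compact-camera pixel.
print(f"Large pixel: {dynamic_range_stops(60_000, 15):.1f} stops")  # ~12.0
print(f"Small pixel: {dynamic_range_stops(8_000, 10):.1f} stops")   # ~9.6
```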

What Does Not Determine Dynamic Range

Sometimes, people try to equate bit depth (e.g., eight-bit, twelve-bit, or fourteen-bit files) with dynamic range. I have heard comments that JPEG files have a smaller dynamic range than raw files because JPEG files are only eight-bit while raw files are usually twelve- or fourteen-bit. This is incorrect. While it is true that some JPEG files have less dynamic range than files that were converted from raw, it has nothing to do with bit depth (the reasons for the smaller dynamic range will be covered shortly). In fact, an eight-bit file can hold any dynamic range.

What the bit depth does determine is tonal range. Tonal range is the number of tones from the darkest to the lightest. An eight-bit file has a tonal range of only 256 tones, while a twelve-bit file has a tonal range of 4,096 tones and a fourteen-bit file has a tonal range of 16,384 tones. In other words, files with larger bit depths space their tones more closely than files with smaller bit depths.
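
The tone count is just two raised to the bit depth, so the relationship is easy to verify (a quick Python check; note that none of these figures says anything about how dark or how bright the endpoints are):

```python
# Tonal range grows as 2**bits; dynamic range is independent of it.
for bits in (8, 12, 14):
    print(f"{bits}-bit file: {2 ** bits:,} tones")
# 8-bit file: 256 tones
# 12-bit file: 4,096 tones
# 14-bit file: 16,384 tones
```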

Figure 4: Posterization

The reason that cameras with large dynamic ranges often have large bit depths has to do with posterization. As the dynamic range becomes larger, the tones become spaced farther apart. When the image is edited, the tones will likely become spread even farther apart. If the tones become spread too far apart, posterization occurs. When this happens, the colors do not transition smoothly from one to the next. Rather, an abrupt transition occurs that can be seen. This is shown in Figure 4.

To reduce the possibility of posterization, large dynamic range cameras usually have large bit depths so that the tones are spaced closely together. This produces smooth color transitions.
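
Posterization is easy to reproduce: quantize a smooth gradient to too few levels and the abrupt steps appear. A minimal sketch (Python with NumPy; the choice of eight levels is arbitrary):

```python
import numpy as np

# A smooth 8-bit gradient: 256 distinct, closely spaced tones.
gradient = np.linspace(0, 255, 256)

# Requantize to only 8 levels, simulating tones spread too far apart.
levels = 8
posterized = np.round(gradient / 255 * (levels - 1)) * (255 / (levels - 1))

# Only 8 distinct tones remain; transitions become abrupt, visible bands.
print(np.unique(posterized).size)  # 8
```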

Throwing Away Dynamic Range

Now, back to that original sad thought -- your camera (or the computer in the case of raw) throws away part of its dynamic range. At first, this may seem a bit absurd, but it is actually necessary. The problem has to do with the nature of the information that comes from the pixels. That information is not an image. As covered above, what each pixel produces is a voltage; in fact, it is not even digital information but an analogue voltage value. This analogue voltage information has to go through a significant amount of processing before it can produce an image. Figure 5 shows the steps that are required to produce a JPEG image.

A quick glance at Figure 5 reveals that there are many steps to producing an image. An image is not really created until step seven. In this step, the camera takes what is, at that point, just a collection of binary numbers and performs a Bayer interpolation. The Bayer interpolation creates the color information.
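
Real converters use sophisticated demosaicing algorithms, but the basic idea of Bayer interpolation can be sketched simply: each pixel has recorded only one color, and the two missing channels are estimated from neighboring pixels. The toy version below (Python with NumPy/SciPy, assuming an RGGB mosaic; the function is my own illustration, not any camera's actual algorithm) fills the gaps with a local average:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def naive_demosaic(raw: np.ndarray) -> np.ndarray:
    """Toy Bayer interpolation for an RGGB mosaic (illustration only)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    mask = np.zeros((h, w, 3))
    # Scatter each pixel's single reading into its own color channel.
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; mask[0::2, 0::2, 0] = 1  # red
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; mask[0::2, 1::2, 1] = 1  # green
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; mask[1::2, 0::2, 1] = 1  # green
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; mask[1::2, 1::2, 2] = 1  # blue
    # Estimate each missing sample from nearby real samples. The two
    # uniform_filter calls give local 3x3 means; their ratio recovers
    # the average of only the real samples (the zeros cancel out).
    for c in range(3):
        mean_vals = uniform_filter(rgb[:, :, c], size=3)
        mean_mask = uniform_filter(mask[:, :, c], size=3)
        average = mean_vals / np.maximum(mean_mask, 1e-9)
        rgb[:, :, c] = np.where(mask[:, :, c] == 1, rgb[:, :, c], average)
    return rgb

raw = np.random.rand(4, 6)        # a fake 4x6 sensor readout
print(naive_demosaic(raw).shape)  # (4, 6, 3) -- one RGB triple per pixel
```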

Figure 5: JPEG Process
Figure 6: Image before Application of Tonal Curve

While the Bayer interpolation creates the color in the image, it is not the color that is seen in the final image. Instead, the image at this point is very dark and has very low contrast and saturation. Figure 6 shows what an image looks like at this stage in the process. Clearly, this is not something that one would be proud to hang on a gallery wall. Further processing is required.

As far as dynamic range is concerned, the key is in step nine. In this step, a rather steep curve is used to lighten the image and increase the contrast. In order to successfully do this, the curve usually clips the tones in the highlights or the shadows (or both). When this happens, dynamic range is lost. Thus, the camera has thrown away part of the dynamic range in order to lighten the image and increase the contrast.
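
The effect of such a curve can be illustrated with a crude model: steepen the tones around a mid-tone pivot and lighten them, and the extreme tones get pushed past the ends of the range, where they are clamped. The gain, pivot, and lift values below are invented for the example:

```python
import numpy as np

def apply_steep_curve(linear: np.ndarray, gain: float = 1.6,
                      pivot: float = 0.45, lift: float = 0.1) -> np.ndarray:
    """Apply a crude lighten-and-add-contrast curve to tones in [0, 1].

    Steepening around the pivot and lifting the result brightens the
    image, but tones pushed past 0.0 or 1.0 are clamped -- clipped.
    """
    curved = (linear - pivot) * gain + pivot + lift
    return np.clip(curved, 0.0, 1.0)

tones = np.linspace(0.0, 1.0, 11)   # eleven evenly spaced scene tones
out = apply_steep_curve(tones)
print(np.count_nonzero(out == 0.0), "tones clipped to black")  # 2
print(np.count_nonzero(out == 1.0), "tones clipped to white")  # 3
```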

Now is the time for both good news and bad news. First, the good news. Photographers that shoot raw (and know what they are doing) can reclaim that lost dynamic range. This is because the information that the raw file contains has not yet had the curve applied. Thus, the raw file contains the entire dynamic range. Now, it was just stated that the photographers had to know "what they are doing" to reclaim the entire dynamic range. This is because, during the raw conversion, a curve will be applied, and the dynamic range will likely be reduced. However, those photographers that understand how to optimize the raw converter can access all of the dynamic range by adjusting the settings in the raw converter. The process for doing this is covered in the next three articles. Now, the bad news. Photographers that shoot JPEG lose the dynamic range that was clipped, and there is no way to get it back.


Factors Affecting Dynamic Range

There are a number of factors that affect dynamic range.

Pixel Size: As covered previously, pixel size affects dynamic range. Smaller pixels tend to have smaller dynamic ranges.

Camera Format: Different camera formats tend to have different dynamic ranges. For example, medium format digital cameras tend to have large dynamic ranges. This is largely due to the fact that their sensors are very large; thus, the pixels are also very large. On the other hand, point-and-shoot cameras tend to have smaller dynamic ranges. This is because the sensors, and therefore the pixels, are very small. Additionally, the noise is usually relatively high in these cameras.

ISO: The SNR decreases as the ISO increases. Thus, higher ISO images tend to have more noise than lower ISO images. This causes a dynamic range problem in the shadows of high ISO shots: the increased noise makes it harder to delineate the detail in the shadows. Since the dynamic range only includes those tones where accurate detail can be maintained, as detail is lost to the noise, the dynamic range is reduced in the shadows (see the sketch after this list).

JPEG: The curves that at least some cameras (often point-and-shoot cameras) use to lighten the image and increase the contrast when producing JPEG images can be very steep. Thus, they have a tendency to clip detail.

Post Processing: The use of image editing tools, such as Curves, can cause additional clipping. If clipping occurs, it will reduce the dynamic range of the image.
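
The ISO effect can be folded into the earlier dynamic-range estimate. The sketch below is a deliberately simplified model (the amplification applied for higher ISO is assumed to raise the effective noise floor in proportion; real sensors behave in more complicated ways, and all numbers are invented):

```python
import math

def dr_at_iso(full_well_e: float, read_noise_e: float, iso: int,
              base_iso: int = 100) -> float:
    """Rough estimate of dynamic range in stops at a given ISO.

    Simplified model: the gain applied for higher ISO amplifies the
    noise floor along with the signal, eating into the shadow end.
    """
    gain = iso / base_iso
    return math.log2(full_well_e / (read_noise_e * gain))

for iso in (100, 400, 1600):
    print(f"ISO {iso}: ~{dr_at_iso(60_000, 12, iso):.1f} stops")
# ISO 100: ~12.3 stops, ISO 400: ~10.3 stops, ISO 1600: ~8.3 stops
```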

Articles

Dynamic Range -- Part II