Up to this point, two components of color have been covered: light and the object that the light illuminates. It has been shown that the color reflected by an object depends not on the object alone but on both the object and the light that strikes it. However, from a human perspective, this is not the full story. When a human observes color, there is one more very important component in the process -- the human. The eye and brain change what we see when we observe color. As such, it is very important to understand how a human perceives color and how human biology and signal processing affect the color that is observed.
The human eye is an incredibly complex and wonderful device. Its job is to gather light, process it to a certain degree, and turn it into a signal that can be sent to the brain for further processing. Figure 1 shows a simple diagram of the human eye. The story of our ability to detect and process light starts at the front of the eye. The cornea gathers and focuses the light that enters the eye. The cornea provides about three quarters of the focusing power of the eye. However, the cornea cannot be adjusted to change the focus of the eye.
The light then passes through the pupil. The pupil's job is to regulate the amount of light entering the eye. The pupil can change its size (i.e., expand or contract) depending on the intensity of the light striking the eye. The pupil can expand and contract because it is surrounded by the iris, a tissue that connects the pupil to a set of muscles. These muscles cause the pupil to either expand or contract in response to changing light intensity. The pupil, iris, and associated muscles are responsible for the incredible dynamic range of the human eye. Any experienced photographer knows that the human eye can see a larger dynamic range than either film or a digital sensor. As photographers, we often use graduated neutral density filters or digital techniques (e.g., using two or more exposures to capture the entire dynamic range of a scene) to deal with the fact that our cameras cannot capture the entire range of tones in some scenes. Luckily, our eyes do not have that problem. When the light is very bright, the muscles and iris cause the pupil to contract. This reduces the amount of light entering the eye and allows us to see detail in the bright areas. When the light is dim, the muscles and iris cause the pupil to expand. This increases the amount of light entering the eye and allows us to see detail in the shadows.
After passing through the pupil, light enters the lens. The lens focuses the light. Actually, the lens has far less focusing power than the cornea. However, the lens can adjust its focus while the cornea cannot. This ability comes from a set of muscles that change the curvature of the lens. As the curvature of the lens changes, the focus changes. Thus, the lens plays an extremely important part in allowing the eye to produce a sharp image.
So, the front of the eye consists of the cornea, pupil, iris, and lens. These work together to gather, regulate, and focus the light.
As the light exits the front of the eye, it passes through the vitreous humor. The vitreous humor is a clear, thick liquid that is composed mostly of water. This liquid passes the light to the back of the eye.
From a photographer's point of view, the back of the eye is where all of the interesting stuff happens. For the back of the eye plays a large part in determining color.
The back of the eye is called the retina. The retina is a sheet of neural tissue. This tissue actually has several layers. The photoreceptors, which detect light, are at the back of the retina. Thus, before being detected by the photoreceptors, the light must first pass through several neural layers. Fortunately, these layers are transparent. However, at the center of the retina is a special spot called the fovea. At the fovea, the neural layers covering the photoreceptors are pulled back so that the light strikes the photoreceptors directly without first passing through the other neural layers. This produces a sharper image than at the other parts of the retina. Thus, human vision is sharpest at the center of the eye due to the fovea.
Also located in the retina is the optic disk. The optic disk is the point where the optic nerve connects with the eye. The optic nerve carries the signals from the eye to the brain for further processing. The optic disk has no photoreceptors; thus, the optic disk is a blind spot.
The human eye has two types of photoreceptors: cones and rods. The cones are less sensitive to light. Thus, they are used primarily in bright light conditions (e.g., daylight). The rods are far more sensitive to light. Thus, they are used in dim light conditions (e.g., dusk). However, the cones have one big advantage. There are three types of cones. Each type detects a different range of wavelengths. Thus, the cones are used to determine color. On the other hand, there is only one type of rod, and it cannot be used to determine color. This explains why humans see color during the day and shades of gray at night. During the day, when there is plenty of light, the cones are active and are processing color information. At night, the less sensitive cones do not function very well in the dim light. The much more sensitive rods remain active, but the rods' inability to handle color information prevents us from seeing in color in the dim light.
Each of the three types of cones has a different pigment that responds to a different range of wavelengths. This is illustrated in Figure 2 which shows the response of each of the three types of cones to the wavelength of light.
The cone represented by the curve on the right is often referred to as the red cone due to its sensitivity to the longer, red wavelengths. The cone represented by the middle curve is referred to as the green cone due to its sensitivity to the green wavelengths. The cone represented by the curve on the left is referred to as the blue cone because it is sensitive to the shorter, blue wavelengths.
The first thing that should become obvious from looking at Figure 2 is that the cones are not equally sensitive to all colors. For instance, the cones are more sensitive to the greens, but they are less sensitive to the blues and the longer wavelengths of the reds. In practical terms, that means that we do not see all colors as being equally bright. However, the sensitivity of the human eye to color is further complicated by the fact that the eye does not have an equal number of each of the three types of cones. About two thirds of all of the cones are red cones. Most of the rest are green cones. Only a few of the cones are blue cones. Consequently, we are sensitive to red because we have a lot of red cones. We are sensitive to green because the pigments in the green and red cones are sensitive to green. On the other hand, we are not very sensitive to blue light because we don't have very many blue cones and, as seen in Figure 2, the pigments in the cones are just not very sensitive to much of the blue spectrum.
In short, our eyes will see certain colors as being brighter than others even when there is an equal amount of each color in the light. This is demonstrated in Figures 3 and 4. Which color is brighter in Figure 3? Most people will say that the green is brighter than the blue. However, Figure 4 shows the true story. Once the image has been desaturated, it can be seen that the green and the blue are of equal brightness.
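To put rough numbers on this, the sketch below weights the red, green, and blue channels with the Rec. 709 luma coefficients, one standard model of the eye's unequal sensitivity to the three channels. The coefficients are an assumption on my part; the article itself gives no formula.

```python
# Rough sketch: why pure green looks brighter than pure blue even at
# equal intensity. The Rec. 709 luma weights (0.2126, 0.7152, 0.0722)
# are one standard model of the eye's unequal R/G/B sensitivity; they
# are an assumption here, not values taken from this article.

def perceived_brightness(r, g, b):
    """Approximate relative luminance of an RGB color (channels 0.0-1.0)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

pure_green = perceived_brightness(0.0, 1.0, 0.0)
pure_blue  = perceived_brightness(0.0, 0.0, 1.0)

# Same physical intensity, yet green reads roughly ten times brighter:
print(pure_green, pure_blue)
```

This is essentially what image desaturation (as in Figure 4) computes: a weighted average of the channels rather than a plain one.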
So, what is the moral of the story so far? Our eyes do not exactly see the colors that are in front of us. In a manner, our eyes "lie" to us about the colors in our surrounding environment. Due to the differing sensitivities of our eyes to the different wavelengths of light and the different numbers of the three types of cones, we see a somewhat different color spectrum than that of the color that enters our eyes. In essence, we are biologically programmed to be more sensitive to certain colors than to others.
From this point on, the process that the eye and brain follow to determine color is not as straightforward as one might at first suspect. A problem arises because the response of the cones depends on both the intensity and the wavelength of light. A glance back at Figure 2 shows why. Let's consider the response of the green cones when blue light enters a person's eye. As seen in Figure 2, the green cones can detect blue light, but they are not very sensitive to it. So, the green cones would produce a small signal. Now, if the amount of blue light was increased, the signal from the green cones would naturally increase. However, if the intensity of the light was kept the same, but the color of the light was changed to green, the response of the green cones would also increase because they are more sensitive to green light. In other words, a change in the response of the photoreceptors can be due to a change in the amount of light or to a change in the wavelength of the light.
The brain resolves this dilemma by comparing the response of the three types of cones to a change in light. Figures 5 -- 7 illustrate a simple example using just two types of cones: the blue cones and the green cones. Figure 5 shows the original signal from the two types of cones in response to a blue light being shined on them. Both the green and the blue cones can detect blue light. However, the blue cones are somewhat more sensitive to blue light, so they have a larger signal. Figure 6 shows what happens if the intensity of the light is increased. Both the blue and the green cones increase their response, and they do so proportionally. In other words, while both types of cones increased in signal, the ratio of the signals does not change. On the other hand, Figure 7 shows what happens if the light intensity is kept the same but the wavelength of the light is changed from blue to green. The blue cones are not very responsive to green wavelengths, so the response of the blue cones drops to almost nothing. The green cones are more sensitive to green wavelengths than to blue, so the signal of the green cones increases sharply. Consequently, a change in the wavelength of the light caused a change in the ratio of the signals from the blue and green cones.
Thus, the brain is able to distinguish between changes in light intensity and changes in wavelength by comparing the responses of the various cones to each other.
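The ratio trick can be sketched with a toy model. The sensitivity numbers below are made up for illustration (the real curves are the ones in Figure 2); the point is only that scaling the intensity leaves the blue-to-green signal ratio unchanged, while changing the wavelength does not.

```python
# Toy model of the ratio comparison: response = intensity * sensitivity.
# Sensitivity values are invented for illustration, not measured data.

SENSITIVITY = {
    # cone: {wavelength_nm: relative sensitivity}
    "blue":  {450: 0.9, 530: 0.1},
    "green": {450: 0.3, 530: 0.9},
}

def responses(intensity, wavelength):
    return {cone: intensity * s[wavelength] for cone, s in SENSITIVITY.items()}

dim_blue    = responses(1.0, 450)
bright_blue = responses(2.0, 450)   # intensity doubled, wavelength same
same_green  = responses(1.0, 530)   # intensity same, wavelength changed

ratio = lambda r: r["blue"] / r["green"]

# Intensity change: both signals scale together, so the ratio is constant.
# Wavelength change: the ratio shifts sharply.
print(ratio(dim_blue), ratio(bright_blue), ratio(same_green))
```

The brain's version of this comparison is far more elaborate, but the invariant is the same: ratios encode wavelength, absolute levels encode intensity.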
So far, we have dealt with only the three colors to which the cones are most sensitive: red, green, and blue. However, the world around us is filled with a huge number of colors. If the brain only gets signals about red, green, and blue, how is it that we see so many other colors? Again, the brain resorts to comparing the signals from the different types of cones. By analyzing the signals from adjacent cones, the brain can perceive any color visible to the human eye.
As an example, Figure 8 shows what happens when certain amounts of red signal, green signal, and blue signal from the cones are received by the brain. The result is brown. Thus, the brain has detected the color brown even though the eyes have no brown cones.
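A rough illustration of that mixture: the triplet below is a typical "brown" expressed as 8-bit red, green, and blue values. The specific values are illustrative, not taken from the article; the point is only that brown is nothing more than a dim, red-heavy combination of the three signals.

```python
# "Brown" as a mix of cone-like signals: strong red, moderate green,
# almost no blue. The triplet is an illustrative stand-in, not data
# from the article -- there is no "brown cone" to measure.

brown = (150, 75, 0)

r, g, b = brown
assert r > g > b   # the signature of brown: a dim, red-heavy mixture

# The same mixture written as a hex color code:
print(f"#{r:02x}{g:02x}{b:02x}")  # prints #964b00
```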
The brain's ability to mix signals from the three types of cones to detect all of the colors that we see creates an interesting phenomenon -- the brain is not able to differentiate between pure colors and mixed colors. A pure color is a color that consists of light of primarily a single wavelength. Figure 9 shows a pure color that is composed of a narrow band of wavelengths that are centered on yellow. The brain will clearly see this light as yellow.
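This inability to tell pure colors from mixtures can be sketched numerically. The cone sensitivities below are invented for illustration; the sketch solves a small linear system to find a red-plus-green mixture that drives the cones exactly like a pure yellow band, so the two lights are indistinguishable to the brain.

```python
# Toy demonstration that a pure color and a mixture can produce
# identical cone signals. Sensitivities are illustrative, not measured.

SENS = {
    # cone: {band: relative sensitivity}
    "red":   {"yellow": 0.80, "red": 0.90, "green": 0.60},
    "green": {"yellow": 0.70, "red": 0.20, "green": 0.90},
}

# Cone responses to a unit-intensity pure yellow band:
target = {cone: s["yellow"] for cone, s in SENS.items()}

# Find amounts a (red band) and b (green band) so the mixture drives
# both cones exactly like the pure yellow (2x2 system, Cramer's rule):
sr, sg = SENS["red"], SENS["green"]
det = sr["red"] * sg["green"] - sr["green"] * sg["red"]
a = (target["red"] * sg["green"] - sr["green"] * target["green"]) / det
b = (sr["red"] * target["green"] - target["red"] * sg["red"]) / det

for cone in SENS:
    mix = a * SENS[cone]["red"] + b * SENS[cone]["green"]
    assert abs(mix - target[cone]) < 1e-9  # identical cone signals
```

Since the brain sees only the cone signals, it has no way to know whether the light was one narrow yellow band or this red-plus-green mixture.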
We look at an apple and think, "The apple is red". In essence, we treat the color of the apple as a characteristic of the apple. However, this view is only partly correct. The color of the apple is only partly determined by the apple (to be specific, by the reflectance properties of the apple). The color of the apple is also determined by the color of the light that shines on it and by the biology of the eye that sees it. This is demonstrated in Figure 12. This figure shows the spectral curve of the light source (the illuminant), the spectral reflectance curve of the object, and the cone response of the human eye. Only where all three of these come together does one see color. In other words, an apple is red only when the light shining on the apple contains red wavelengths, the apple reflects red, and the eye detects red.
Let's examine this a bit deeper. What happens if you shine a very saturated, blue light on a red apple? Do you get a red apple? No, you will get a dark gray or black apple, depending on how pure the blue of the light source is and how pure the red of the apple is. What happens if you shine a white light on the apple, but the person looking at the apple is color blind and is missing red cones? The person will see an apple, but it will not be a red apple. It will be some other color. What happens if we look at the apple in dim light? We will not see a dim red apple. We will see a gray apple because the cones do not function in low light situations.
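These thought experiments can be sketched as a product of three curves. The coarse two-band "spectra" below are invented for illustration; the idea is only that the cone signal is roughly the illuminant times the reflectance times the cone sensitivity, summed over wavelengths, so if any factor is zero, the signal is zero.

```python
# Sketch of the "three curves must overlap" idea from Figure 12.
# All numbers are illustrative, not measured spectra.

def cone_signal(illuminant, reflectance, sensitivity):
    # signal ~ sum over wavelengths of light * reflectance * sensitivity
    return sum(illuminant[w] * reflectance[w] * sensitivity[w]
               for w in illuminant)

# Two coarse "spectra" sampled at a blue (450 nm) and a red (650 nm) band:
white_light = {450: 1.0, 650: 1.0}
blue_light  = {450: 1.0, 650: 0.0}    # saturated blue: no red component
red_apple   = {450: 0.05, 650: 0.90}  # reflects red, absorbs most blue
red_cone    = {450: 0.00, 650: 0.80}  # responds only to long wavelengths

white_sig = cone_signal(white_light, red_apple, red_cone)  # about 0.72
blue_sig  = cone_signal(blue_light, red_apple, red_cone)   # exactly 0.0

print(white_sig, blue_sig)  # apple looks red in white light, dark in blue
```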
These examples show that the color of an object is not just a function of the object. An object's color is also determined by the light and the visual system of the observer. So, the bottom line is this: the colors of the world are not necessarily as they appear to us. They change when the light changes, and our eyes change what we see.
Okay, so now I have gone and told you that you cannot trust your eyes to give you exact color information. It gets worse. Once the brain gets information from the eyes, it monkeys with the data and changes what we see. It turns out that the human brain does not passively receive or passively process sensory information. Instead, the brain does a significant amount of data altering before it allows its owner to perceive anything. Quite often, the altering that the brain does changes the reality that the human perceives.
Want proof? Consider this. At the beginning of this article, it was mentioned that the optic nerve connects to the eye at the optic disk. There are no photoreceptors at the optic disk. In other words, you have a blind spot in each eye. I'll bet you have spent your whole life without ever noticing that you have two blind spots. Why didn't you notice? Because the brain interpolated the data that the eyes sent to it and filled in the blind spots. Try this little experiment. Place a chair a few feet in front of a wall. Place a small mark on the wall that will be at eye level once you sit down. After sitting in the chair, cover one eye. With the other eye, stare at the mark on the wall without moving your open eye. Now, take a pencil that has an eraser on it. Hold the pencil at arm's length in front of the open eye. Start the pencil at the top of your vision. Slowly move the pencil from the left to the right. When the pencil gets to the edge of your vision, lower it a small amount and move it in the other direction. Keep moving the pencil back and forth across your vision, slowly lowering the pencil at the end of each pass. During this time, it is very important that you do not focus on the pencil. Keep your eye locked on the spot on the wall. You will be able to see the pencil as it moves across your vision, but it will be a bit out of focus. If you do this correctly and keep the open eye focused on the spot, eventually you will see the eraser on the pencil disappear. This is because the eraser is now in front of the optic disk. The interesting thing is what you will see instead. You will not see a black hole in your vision where the eraser is located. Instead, you will see the wall. This is the important part. You cannot see the eraser because it is in front of the optic disk where there are no photoreceptors. You shouldn't be able to see the wall behind the eraser either. After all, the wall behind the eraser is also in the blind spot.
What has happened is that the brain has taken information from the rest of the eye and used it to "fill in" the blind spot. In other words, the brain is showing you a patch of wall that the eyes did not see. The brain created the data.
So, the brain can and does significantly alter the data from the eyes. This is important to a photographer because, among other things, the brain sometimes alters the colors that we see. I discovered this years ago when I used to snow ski. I wore a pair of ski goggles that had a very bright yellow lens. When I would put the goggles on, all of the snow-covered countryside would turn bright yellow. However, after skiing a short time, my brain adjusted the yellow color out so that everything was white again. In short, I was seeing through a yellow filter, but the brain was adjusting the color so that everything appeared normal.
The yellow goggles are an example of one of the most important color principles that the brain follows: color constancy. The brain expects the colors of objects to remain fairly constant. If the colors of objects change during the day because the light that is illuminating them changes, the brain tends to filter out at least part of that color change (as with the yellow goggles). Thus, we may be seeing an apple as red simply because the brain thinks it should be red -- even though the light reflecting off the apple has less red than normal because we are indoors under fluorescent lighting.
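One simple model of this kind of correction is von Kries-style adaptation (my assumption; the article does not name a mechanism): divide each cone signal by that cone's response to the prevailing illuminant, which removes a uniform color cast like the one from the yellow goggles.

```python
# Minimal sketch of color constancy in the von Kries style. This is an
# assumed model, not a mechanism described in the article: each cone
# signal is divided by that cone's response to the prevailing light.

def adapt(signal, illuminant):
    return {cone: signal[cone] / illuminant[cone] for cone in signal}

# Snow seen through yellow goggles: the blue signal is suppressed.
snow_seen   = {"red": 0.9, "green": 0.9, "blue": 0.3}
goggle_cast = {"red": 1.0, "green": 1.0, "blue": 1.0 / 3.0}

corrected = adapt(snow_seen, goggle_cast)
print(corrected)  # all three signals roughly equal: the snow looks white again
```

The same idea underlies a camera's white balance setting, which is why uncorrected shots under the "wrong" light look tinted to us even though our eyes saw the scene as normal.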
Thus, the colors that we see are actually determined by four things: the light illuminating the object, the properties of the object, the visual system of the observer, and the brain of the observer.