This wonderful photo was taken by Steve Tuttle. It is of an emission nebula designated IC 2177, and sometimes called the Seagull Nebula. It is actually a portion of a much larger region of excited gas. This is a region of space where stars are forming. The most massive stars form quickest. But, they also emit a copious amount of ultraviolet light. This UV light ionizes the gas in the vicinity of these new stars, and that gas then begins to glow as the atoms reacquire their electrons.
But, notice that Steve’s image looks far different from the other image that I linked to. That image of the larger region of space shows far more reddish colors, and even some blue right near the nebula. The colors in Steve’s image are quite different. Steve also has another image on his site of the same nebula, and I have reproduced it here, as well:
These are clearly images of the same object, but they look markedly different. So, what gives?
The answer is in how the images are processed. To understand this, we need to understand how images are made. There are multiple processes at work in this part of space producing light. The dominant mechanism may be the electron capture and de-excitation of hydrogen, but this part of space contains other elements besides hydrogen. So, these other atoms are doing similar things to produce light. But, each element, when it produces light by de-excitation, produces only certain colors of light. These colors mix together to give a visual appearance to the nebula.
But, now we need to understand how we see light. Your eye perceives color using specialized cells called cones. There are three types of cones that detect light. Each type detects a different range of colors. One type of cone detects mostly bluish and purple light. Another type is typically designated as seeing green light, but it really detects light from blue to reddish-orange. The third type, generally designated as seeing red light, sees from red to green. The red and green cones overlap significantly, and it is a problem with one or the other that can give rise to red-green color blindness. Note that most red-green color blind people can see plenty of colors, but they see red and green light using the same cones and so those colors would look pretty much alike to them.
Imaging systems work in a very similar way. Color film uses three emulsions sensitive to three different ranges of light that cover the visual spectrum. These emulsions, though, typically don’t record all colors of light with the same efficiency, or with the same detection bands, as the cones in the eye, so color pictures often don’t look as if they accurately portray colors. Some colors are more vivid and some are less so than seen with the human eye. Professional photographers know techniques to adjust for this effect. With digital images, the detector is a CCD (Charge Coupled Device) rather than film. The problem with CCDs, though, is that they are not color sensitive. So, how do you get color pictures with CCDs?
There are basically three options for getting color CCD images. The first method used was to simply take three images using three different color filters. You could use, for example, a red filter, a green filter, and a blue filter. The images would then be what the object looks like in these colors. The computer can then process the images by stacking these colors onto one another, giving a color image. There have been a lot of discussions about this process, and I won’t go into them all. Most of the discussion has been about whether the color filters used should be those that give the most scientific data or whether they should match the color ranges of the cones. The real problem, though, is how the computer displays these colors, because light does not really come in just three colors. A color with a slightly longer or shorter wavelength than the central peak of the filter will be dimmed by the filter. The CCD and computer don’t know whether an image is dimmer because the object is not so bright, or because its color is off of the filter’s peak. Both cases look the same to the CCD through the filter. The color is then portrayed as simply the color near the peak of the filter. So, displaying images in that way inherently alters their appearance. Still, the color representation is close enough to what you see for most people to be happy. And, of course, we are used to this way of doing things. That is how television sets display colors.
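As a rough sketch of that stacking step, here is how three monochrome filtered exposures might be combined into one color image. The arrays here are hypothetical flat frames, not real data; the point is that each exposure is just a grid of intensities until the computer assigns it to a display channel:

```python
import numpy as np

# Hypothetical example: three monochrome CCD exposures taken through
# red, green, and blue filters. Each is just a 2-D array of intensities;
# the chip itself records no color information at all.
height, width = 4, 4
red_exposure = np.full((height, width), 0.8)
green_exposure = np.full((height, width), 0.5)
blue_exposure = np.full((height, width), 0.2)

# Stacking assigns each exposure to one display channel. Whatever
# wavelength actually made it through the filter is simply shown
# as the nominal color of that channel.
rgb = np.dstack([red_exposure, green_exposure, blue_exposure])

print(rgb.shape)  # (4, 4, 3): one color image from three mono frames
```

Note that nothing in the code ties a given exposure to a given channel; that freedom is exactly what makes the false-color palettes discussed below possible.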
But, for digital cameras, it is tough to get three images of the subject with three color filters. So, digital cameras have to use a different technique. The second method is to split the image into three colors using a prism. Then, three CCD chips are used, one for each color. This is a very expensive way of doing things, because it involves precise optics and three CCD chips. It also requires quite bright images, since the light is being split. Consequently, this method is seldom used. The third method is to simply put tiny microdot color filters on the CCD chip. That makes the chip more expensive to make, but cheaper than three chips. Only a fraction of the pixels see through each color filter (in the common Bayer pattern, half of the pixels are filtered green and a quarter each red and blue). The electronics then combine the pixels to produce a single color photograph, in much the same way that images are made from three images taken through three different color filters. The major disadvantage of this method is that it effectively reduces the pixel resolution of the CCD chip, since each color is sampled by only some of the pixels. However, you can take a single picture and get a color image. This is how most color CCDs work.
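A toy sketch of that on-chip filtering might look like the following. Real cameras use proper demosaicing algorithms; here a crude per-channel average stands in for interpolation, and the flat "scene" values are purely illustrative:

```python
import numpy as np

# Toy scene: what a perfect sensor would record in each color.
h, w = 4, 4
scene_r = np.full((h, w), 0.9)
scene_g = np.full((h, w), 0.6)
scene_b = np.full((h, w), 0.3)

# Bayer-style filter masks: a repeating 2x2 tile of R G / G B,
# so half the pixels are green, a quarter red, a quarter blue.
rows, cols = np.indices((h, w))
r_mask = (rows % 2 == 0) & (cols % 2 == 0)
b_mask = (rows % 2 == 1) & (cols % 2 == 1)
g_mask = ~(r_mask | b_mask)

# The chip records one intensity per pixel -- whichever filter sits on it.
mosaic = np.where(r_mask, scene_r, np.where(b_mask, scene_b, scene_g))

# "Demosaicing": estimate the missing colors at each pixel. A crude
# average of each channel's sampled pixels stands in for the real
# interpolation a camera would do.
r_est = np.where(r_mask, mosaic, mosaic[r_mask].mean())
g_est = np.where(g_mask, mosaic, mosaic[g_mask].mean())
b_est = np.where(b_mask, mosaic, mosaic[b_mask].mean())
rgb = np.dstack([r_est, g_est, b_est])
print(rgb.shape)  # one full-color frame from a single mosaic exposure
```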
Astronomical objects seldom change much between exposures. And, since resolution is generally important, we usually simply take three images with three filters. That has the advantage, also, of having three separate images that you can then do all sorts of things with electronically when you add them, enhancing certain colors and bringing out extra detail in certain parts of an image. But, of course, that comes at the expense of “true color” images.
But, do you really need to make the images true color? There is often information that can be gained by making one or another part of the image dominate. For example, if you look at a nebula in one wavelength, say that given off by hydrogen as it deionizes, then you can see where the ionized hydrogen is in the nebula. And, you can even use different spectral lines to represent different colors. That is often done to produce the amazing images that you see from professional observatories or space-based telescopes. It is important to note that these are virtually always false color images. A common color scheme is the Hubble palette (since many HST images are released using it). The Hubble palette is what Steve Tuttle used in the image at the top of this posting. The Hubble palette uses images from three narrow line filters, S II, Hα, and O III, to be the red, green, and blue parts of the image.
S II is an ionized sulfur line. The S II filter allows only light near a wavelength of 672.4 nm to pass (1 nm = 10⁻⁹ meters). This is a deep red color. The image taken with this filter is the red part of the Hubble palette.
The Hα spectral line, though, is also red. It is centered at 656.3 nm. That is most definitely red in color. However, the Hubble palette assigns this image to be the green part of the final image. Remember, the CCD does not really detect different colors, only different intensities. So, it doesn’t know whether the light making the image is red or green. And, the computer doesn’t care. So, it can display this second image as any color that it wants.
O III is doubly ionized oxygen (two electrons removed). O III filters generally pass two nearby spectral lines at 495.9 nm and 500.7 nm. Light of this color is green. So, O III images come from green light. However, in the Hubble palette, these images are displayed as blue.
When you put all of these colors together, the image of IC 2177 looks like it does in the top image. But, the Hubble palette is most definitely false color. Two slightly different red images make up the red and green parts of the image, and a green image makes up the blue part of the image (thanks to the wonders of computer processing). So, effectively, about half of the visual spectrum is stretched in color to cover the whole spectrum. But, the images produced in this manner are quite spectacular and very pretty. For this reason, they have been widely circulated, and so many people are familiar with how many celestial objects appear in this color scheme. But, many astrophotographers don’t really like the Hubble palette, because the images don’t look anything at all like the old images taken with film cameras. Those images had Hα as red and O III as green. A lot of people have seen these objects using just those filters, so the Hubble palette just looks weird.
But, the Hubble palette is not the only choice for displaying colors. Another color mapping scheme is the CFHT (Canada-France-Hawaii Telescope) system. CFHT uses Hα as the red part of the image and O III as the green part. This produces images that more closely match how people have previously seen these objects through just Hα and O III filters in their natural colors. The second photo above uses the CFHT palette. But, the CFHT system also uses S II. The S II image is the blue part of the final image. But, S II is actually also red light. So, the final image still doesn’t look the way the object would if you could actually see it.
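The two palettes can be sketched as nothing more than two different channel orderings of the same three exposures. The flat arrays here are hypothetical stand-ins for calibrated narrowband frames:

```python
import numpy as np

# Hypothetical flat frames standing in for calibrated narrowband images.
h, w = 4, 4
s2 = np.full((h, w), 0.3)       # S II, ~672 nm (red light)
h_alpha = np.full((h, w), 0.9)  # H-alpha, ~656 nm (also red light)
o3 = np.full((h, w), 0.5)       # O III, ~500 nm (green light)

# Hubble palette: S II -> red, H-alpha -> green, O III -> blue.
hubble = np.dstack([s2, h_alpha, o3])

# CFHT palette: H-alpha -> red, O III -> green, S II -> blue.
cfht = np.dstack([h_alpha, o3, s2])

# Same three exposures, two very different-looking color images.
print(hubble[0, 0], cfht[0, 0])
```

Since the mapping is arbitrary, the choice between palettes is purely one of convention and aesthetics, which is exactly why the same nebula can look so different in the two photos above.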
So, you might ask, why not use a filter for blue light? Well, one advantage of both of these systems is that they use narrow band images. That means that they look at light from only one wavelength, or very near that one wavelength. That means that you reject other wavelengths of light, including a lot of stray light produced artificially (light pollution) or naturally (moonlight). Thus, you can take cleaner images. Also, these wavelengths are from materials naturally found in interstellar space, so they are common in emission nebulae. These images also reject reflected light from other stars in the vicinity of the nebula, so you can see just the nebular structures.

But, you might press the matter and ask why not use a narrow band filter for something emitting blue light. Well, the problem there is that there simply are not as many elements common in the interstellar medium that emit much blue light. Hydrogen can, and does, emit spectral lines in a very deep blue (almost purple) color, but the red emission by far dominates. And, an advantage of using different elements is that they can often dominate in different parts of the nebula. So, the blue hydrogen image would likely look just like a dimmer version of the red hydrogen image and would not produce any difference in the image that you couldn’t get by just adjusting the hue of the Hα image, whatever color it may be. Mercury has spectral lines in the deep blue and purple, but mercury vapor street lights also emit those same spectral lines, so the sky is awash in those colors. Whatever you are photographing would simply be washed out. Besides, there is not very much mercury in the interstellar medium. Another candidate might be molecular nitrogen, N₂. But, that is also a problem, since the nitrogen in our own atmosphere would absorb that color. And, you would not expect to find molecular nitrogen anywhere in the vicinity of ionized hydrogen or sulfur.
So, I guess that these false color images are probably more interesting scientifically.
Anyway, even if the images are not true color, they are very beautiful, so enjoy them.
Images used by permission of Steve Tuttle’s Astrophotography