Astrophotography, Pixel-by-Pixel: Part 1
A successful astro-photograph is an image that you are happy with. Reaching that point can be as straightforward as snapping a picture with your phone through an eyepiece, or as involved and complicated as you want to make the process. One aspect that all types of digital astrophotography share is the relationship between the light from the celestial object you are imaging and the pixels on your camera. Below is part one of a series on how to think about what is happening on a pixel-by-pixel basis during imaging, and how what happens at each pixel affects your final image.
BASICS OF PIXEL FUNCTION
Every digital photography sensor is composed of an array of pixels that gather incoming light and convert that information into an image. To get a detailed understanding of what happens during an image capture, it is useful to zoom way, way in and look at an individual pixel. It can also be useful to think of a pixel like a bucket (often called a “well” in many other resources and discussions), but instead of gathering sand, water, or Legos, a pixel gathers electrons. Imagine you left a bucket outside overnight during a light rain; over time the bucket would gather individual raindrops and start to fill up. In our case the raindrops are photons, and the buckets are the pixels on our camera. This analogy can carry us quite far in understanding what is going on with your pixels in astrophotography. So, here is your bucket:
(Figure 1: Base-line Pixel)
You might have noticed a switch in terms above: we are concerned with gathering photons in astrophotography, yet the pixel at its base is collecting electrons. What’s going on here? On your camera’s sensor, incoming raindrop photons (individual packets of light) from the object you are imaging hit the photo-receptive photo-site and get converted to electrons via the photoelectric effect. As more photons hit the photo-site, your bucket starts to fill with electrons.
(Figure 2: Photon to Electron Conversion)
After a given amount of time, which is your camera’s shutter speed for the frame you are taking, your camera reads the number of electrons in each bucket and drains the bucket so it is ready for the next frame. The number of electrons drained determines the intensity, or brightness value, of that pixel: the more electrons drained, the brighter the value. The gathered electrons create a voltage differential in the bucket, which is an analog signal. Once the camera drains the bucket, this signal gets changed into a digital one through the on-board analog-to-digital converter (“ADC”; more details on this converter and imaging in a later part of this series).
(Figure 3: Intensity Levels)
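Before moving on, here is a minimal sketch of that cycle for a single pixel. The gain value is a made-up placeholder rather than the spec of any real camera, and we assume each captured photon yields exactly one electron:

```python
# Minimal sketch of one pixel's exposure-and-readout cycle.
# GAIN_E_PER_ADU is a made-up placeholder, not the spec of any real camera.

GAIN_E_PER_ADU = 4.0  # hypothetical: electrons represented by one output count (ADU)

def read_pixel(photons_captured: int) -> int:
    """Turn the photons a pixel caught during one frame into a brightness value."""
    electrons = photons_captured              # photoelectric effect: one electron per captured photon
    adu = round(electrons / GAIN_E_PER_ADU)   # the ADC converts the analog charge signal to a number
    return adu

print(read_pixel(2_000))    # a dim pixel    -> 500
print(read_pixel(40_000))   # a bright pixel -> 10000
```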
Once all of the buckets on the sensor are drained and their values are recorded as a digital signal, the result is a single frame. This process, and the general theory behind it, is exactly the same for every digital camera in use for astrophotography, be it a DSLR, dedicated CMOS, or specialty CCD. Now we can start to dig into the effects of different cameras having different specifications for their pixel-buckets.
WELL-DEPTH OF PIXELS
An easy way to distinguish between different buckets is their physical size. A common difference is the depth of the bucket: even if the opening at the top is the same, buckets can differ in how tall they are. This affects the amount of rainwater or sand a bucket can hold; a deeper bucket can hold more before it is full. The same is true of pixels, and this capacity is commonly called the “well-depth” of the pixel.
(Figure 4: Pixel Height Differences)
Once a pixel-bucket is full, it cannot hold any more electrons, so any further incoming photons are rejected. (There is a nuance here based on sensor type. On older-model CCDs, the extra charge would actually bleed over into adjacent pixels, called blooming. Many modern CCDs have anti-blooming gates to help prevent or reduce this. On CMOS-style sensors, the pixels do not transfer charge to one another, so additional incoming photons beyond a full well are simply lost.) This is commonly called a “saturated” or “blown-out” pixel and is something to avoid in astrophotography. Every photon is precious, and if our bucket is full and rejecting them, then we are losing data and failing to record fine differences in brightness. Imagine that the full-well depth of our tall pixel-bucket is 100 electrons and the full-well depth of our short pixel-bucket is 50 electrons:
(Figure 5: Well Depth Capacities)
Now assume that everything else remains constant in this example (noise amounts, Gain/ISO settings, exposure length, focal length and aperture, our buckets capturing every photon, and so on), and that over a particular amount of time 75 photons hit each of our pixel-buckets. The result would be that our tall bucket captured all of them, while the short bucket captured 50, was then full, and the remaining 25 were rejected.
(Figure 6: Effects of Saturation)
Oh noes! We lost data, have a saturated/blown-out pixel, and should have used a shorter exposure. The practical result of differences in pixel-bucket well depth (all other aspects of our setup being equal) is that deeper wells allow us to gather more photons in each frame, which means we can use longer exposures, and longer exposures improve our signal-to-noise ratio.
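To make that worked example concrete, here is a tiny sketch in code of the same numbers: 75 incoming photons, full-well depths of 100 e and 50 e, and every photon assumed captured:

```python
# Sketch of the saturation example above: 75 photons arriving at two buckets
# with different full-well depths (assuming every photon is captured).

def fill_bucket(photons: int, full_well: int) -> tuple[int, int]:
    """Return (electrons captured, photons rejected) for one exposure."""
    captured = min(photons, full_well)   # the bucket cannot hold more than its full-well depth
    rejected = photons - captured
    return captured, rejected

for full_well in (100, 50):
    captured, rejected = fill_bucket(75, full_well)
    print(f"full well {full_well:3d} e: captured {captured}, rejected {rejected}")

# full well 100 e: captured 75, rejected 0
# full well  50 e: captured 50, rejected 25
```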
DIFFERENCES IN PIXEL SIZES
Ok, the next physical feature that can change on a bucket is the size of its opening. This has two different effects for the purposes of astrophotography. If we have a bucket with the same depth but a larger opening, the first result is that it will fill up faster; more raindrops will make it into the bucket in the same amount of time. (A slight difference here between our pixels and the bucket: a larger bucket has a higher volume capacity, while a pixel that is larger in area can have the same well-depth, and so hold the same quantity of electrons even though more photons hit it. This is what allows a larger pixel to fill up faster.) The other result is that a larger bucket gives a lower-resolution map of how much rain is falling across a given area.
(Figure 7: Larger Pixel Size Effects)
Imagine it this way: if the bucket has a larger opening, then in exchange for gathering more total water, the fine differences in the amount of rainfall across the area of the bucket itself are lost. If you use smaller and smaller bucket openings, that resolution is regained, and you can measure differences in rainfall on smaller and smaller scales across the lawn. If a bucket has a 12-inch-wide opening, then you will be able to measure the amount of rainfall that fell over each 12-inch patch of the lawn. If a bucket has a 3-inch-wide opening, then the total volume of water gathered in each bucket will be less, but you will be able to measure the differences in rainfall over each 3-inch patch of the lawn. This would also let you measure differences in rainfall within that 12-inch bucket that would otherwise all get gathered into a single measurement.
(Figure 8: Smaller Pixel Size Effects)
All of this applies in exactly the same way to your pixel size and incoming photons: the larger the pixel, the faster it gathers light, at the cost of fine detail and the ability to pick out small variations in the light intensity of your object. The smaller the pixel, the more slowly it gathers light, but you gain fine detail and can pick out smaller variations in the light intensity of your object.
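Here is a small sketch of that trade-off. The 4×4 grid of photon counts below is invented purely for illustration; pixels with four times the area (covering a 2×2 block of the small ones) each collect more photons, but the fine structure in the pattern is lost:

```python
# Sketch of the pixel-size trade-off: the same pattern of incoming photons
# sampled by small pixels versus pixels covering a 2x2 block each.
import numpy as np

# Photons landing on a 4x4 grid of small pixels (invented numbers)
small_pixels = np.array([
    [10, 12, 30, 32],
    [11, 13, 31, 33],
    [50, 52, 70, 72],
    [51, 53, 71, 73],
])

# Each large pixel covers a 2x2 block: more photons per pixel, less spatial detail
large_pixels = small_pixels.reshape(2, 2, 2, 2).sum(axis=(1, 3))
print(large_pixels)
# [[ 46 126]
#  [206 286]]
```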
QUANTUM EFFICIENCY OF PIXELS
There is another important physical characteristic of our pixel-buckets that differs between various cameras and sensors: the efficiency with which the pixels gather incoming photons. This is called the quantum efficiency (QE) of the sensor. Don’t let that “Q” word throw you off. This is just the rate at which a pixel actually captures incoming photons, on average, expressed as a percentage. It is unfortunately not the case that our buckets catch every photon that hits them. Some are just lost, and that is the way of things.
The QE of a sensor is a fixed value that depends on the sensor manufacturer’s design choices and chip technology, all of which are FAR beyond the scope of this article and series. Back to our bucket in the rain analogy, imagine that for stability purposes, one bucket manufacturer chose to install a lattice over its opening. This would result in the bucket not gathering every raindrop that would fall over its opening: some raindrops would hit the lattice and get dispersed, bounce out, or otherwise just not get collected by the bucket.
(Figure 9: Quantum Efficiency Differences)
The quantum efficiency is averaged out over the entire sensor and is the chance that any given photon will be captured and converted to an electron. This also means that there is slight variation in the actual number of photons captured between different frames of the same object. Let’s suppose for simplicity that in a given image 100 photons hit a particular pixel-bucket. If we again keep everything else constant in our system and image the same target over two different frames, then in the first frame our pixel might capture 83 of the photons, and in the second frame it might capture 87, averaging out to 85% efficiency.
(Figure 10: Effects of Quantum Efficiency)
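As a quick illustration of that frame-to-frame variation, here is a minimal sketch that treats QE as a per-photon capture probability. The 85% figure and the 100 photons per frame come from the example above; the random seed is arbitrary:

```python
# Sketch of quantum efficiency as a per-photon capture probability:
# 100 photons hit the same pixel in several frames; with QE = 85%, the
# number actually captured varies a little from frame to frame.
import numpy as np

rng = np.random.default_rng(seed=1)   # arbitrary seed, just for repeatability
qe = 0.85
photons_per_frame = 100

captured = rng.binomial(photons_per_frame, qe, size=5)
print(captured)          # five frames' worth of captures, each close to 85
print(captured.mean())   # the average hovers around 85
```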
There is no way to increase (or decrease) the sensitivity of your camera sensor. Increasing the ISO or the Gain setting on your camera does not increase the sensitivity. I repeat: altering the ISO or the Gain setting does nothing to the sensitivity of your sensor. The sensitivity is nothing other than the QE value. The actual effect of these settings will be covered extensively in the next part of this series.
An excellent resource to check the well-depth (how tall the bucket is), pixel size (the size of the opening of the bucket), and quantum efficiency (how effective the bucket is at collecting raindrops) of many widely available DSLR cameras is www.photonstophotos.net. We also offer many dedicated astro-cams with a range of these characteristics.
TAKEAWAYS
1) Pixels gather incoming photons and convert them to electrons, which are then read out to form an image
2) Larger well-depth allows you to catch more photons in each image
3) There are trade-offs with larger or smaller pixel sizes
4) Aim for as much QE as you can find
There is plenty more ground to cover with our bucket and pixel analogy. In future installments, we will cover focal ratio effects, gain and ISO effects along with what is really happening with the ADC, how to handle things that you don't want in your bucket, and recommendations to match some cameras to your imaging setup.
Stay tuned for the remaining parts of this series. Good luck out there with your buckets!
You can continue reading the next part here.
This article really helped me understand basic pixel characteristics…thanks. One suggestion – in the Differences in Pixel Sizes section you use an example with both 3" of area and 12" of area. As I’m sure you know, inches are not a measure of area unless, of course, you specify that the 3" and 12" are the diameters of the buckets. You might want to make this clear.
Thanks for this – been doing astrophotography for a couple of years and couldn’t quite grasp the concept of full-well, etc. Learning this through your analogies has helped immeasurably.
In searching for information on astroimaging CCD and CMOS cameras (currently using an un-modified D5600 DSLR and ED100 OTA), I found your Pixel-by-Pixel blog articles… and reading them, I found them an excellent and comfortable way to absorb the concepts using your bucket and ping-pong ball analogies. Great stuff, and I thank you for making it easy to learn; pixel functions, ADC functions, and filters now make much more sense… and now I better understand why you indicate that narrowband pass filters are really for the CCD monochrome camera sensors.
Thank you very much,
Bob
This was … um… illuminating. Easy to follow and makes sense as I start my quest towards a good ap scope and camera combo. On to the next article!
Well depth vs. pixel size is confusing to many people. An easy way to understand is to convert specs to electrons / square micron. Reality is more complicated than a simple translation, but in general terms it is still useful. For example, the KAI29052 has 5.5 µm pixels at a well depth of 20,000 e. The KAI43140 has 4.5 µm pixels at a depth of 13,000 e. This looks like a huge difference in well depth, but if converted to depth per µm², the 29052 rates 661.2 while the 43140 is at 642.0. This means that the slimmer buckets of the 43140 won’t fill up very much faster than the fatter buckets of the 29052.
Equally important to the discussion of pixel size and well depth are the trade-offs between resolution (as mentioned above) and dynamic range. A well depth of 65,536 is effectively 16-bit and takes full advantage of those 16 bits. By contrast, a well depth of 16,384 needs only 14 bits, effectively sacrificing 2 bits’ worth of dynamic range.
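For anyone who wants to check those figures, here is a small sketch of the arithmetic from the two points above, using the sensor numbers quoted in the comment; it is only an illustration of the back-of-the-envelope comparison, not a full model of sensor behavior:

```python
# Back-of-the-envelope check of the figures above: full-well electrons per
# square micron of pixel area, and the ADC bits needed to cover a given well.
import math

sensors = {"KAI29052": (5.5, 20_000), "KAI43140": (4.5, 13_000)}  # (pixel size in um, full well in e)
for name, (pixel_um, full_well_e) in sensors.items():
    density = full_well_e / pixel_um**2
    print(f"{name}: {density:.1f} e per square micron")
# KAI29052: 661.2 e per square micron
# KAI43140: 642.0 e per square micron

for full_well in (65_536, 16_384):
    print(f"well depth {full_well}: {math.ceil(math.log2(full_well))} bits")
# well depth 65536: 16 bits
# well depth 16384: 14 bits
```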
I am loving this blog series, and I’m only getting started.