Astrophotography, Pixel by Pixel: Part 7
It is with great joy, triumph, and just a little bit of sadness that I bring to you the final (planned) installment of this blog series. Before we forge on ahead, let us take a moment to look back at the amazing journey we've taken together. And by that I mean take a peek at all of the previous parts, which you can find here:
Astrophotography, Pixel by Pixel: Part 1 - Well Depth, Pixel Size, and Quantum Efficiency
Astrophotography, Pixel by Pixel: Part 2 - Focal Ratio Effects
Astrophotography, Pixel by Pixel: Part 3 - Gain/ISO and Offset
Astrophotography, Pixel by Pixel: Part 4 - ADUs and You
Astrophotography, Pixel by Pixel: Part 5 - One Shot Color, Monochrome Sensors, and Filters
Astrophotography, Pixel by Pixel: Part 6 - Dirty Buckets and Calibration Frames
Each article has some great information, but I highly recommend that you read over at least Part 6, as a large part of what follows relies on and is a direct extension of the concepts covered there.
And so for the final time, let's dive back down into the tiny realm of our pixels catching photons and buckets catching rain. Here I'll discuss the final kind of calibration frame to help make your images sing: flat frames. These are slightly different in the sense that instead of improving our SNR, they make for a more uniform and appealing image. They can also take care of any pesky dust spots in your optical train.
Vignetting and Dust Spots
Even though we are still in the realm of pixels, the discussion that follows applies to our entire array of pixels rather than to what is happening on only one. Vignetting and dust spots can contribute to a less than spectacular final image. These issues, however, are not so much about unwanted electrons making their way into our pixel buckets; instead, they lead to an uneven rate of light capture across our bucket array.
Vignetting (or, more accurately, light fall-off) results in the edges and corners of our images being darker than the center. Some sort of obstruction is partially blocking the incoming light. Think of a ring of trees around the perimeter of the lawn where all your buckets are laid out: some rain still gets through, but some is blocked by the leaves and never makes it into your buckets to be recorded. The dimmer edges and uneven illumination generally make for an image that doesn't look as good as it could.
As an extended aside, there are actually three different types of light fall-off, all of which contribute to uneven illumination. A full discussion would be a bit beyond the scope of this blog entry, but do note that light fall-off is not purely due to physical obstruction. Flat frames will still help correct for all types. If you are interested in learning more about the different sources of vignetting, you can read about it here.
Dust spots, be they on your camera sensor or on your optics, behave in a similar way, but are localized to a particular area. They can show up as little blotches or rings that are darker than the surrounding area. The dust prevents some of the incoming light from hitting your pixels, and so there is a dark spot. This can definitely reduce the quality of your final image.
Let's look at the entire array at once and take a cross-section of what it captures in a frame. Vignetting and a dust spot might look something like this (exaggerated to make the effect easier to see):
Figure 1: Full sensor array light frame
Recall from Part 6 all of the different sources of electrons that make it into our pixels without coming from the object we care about. To cover that again here, the different layers are labelled below. In this case, we want to take care of the uneven illumination on the chip, as well as any "divots" from dust.
Figure 2: Full sensor array light frame annotated
Flat Frames
Taking flat frames is the way to mitigate these effects and to remove the impact of dust and vignetting from your final image. There are many methods for taking flat frames, but in general you want a completely, evenly illuminated field, and then you take a series of frames that fill your histogram (or pixel wells, or buckets) to roughly 1/3 to 1/2 of maximum.
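What "1/3 to 1/2 of maximum" works out to in ADU depends on the bit depth your capture software uses when reporting image statistics. As a rough worked sketch (not a hard rule; the exact target varies by camera and software):

```python
# Rough flat-frame target ranges, assuming the histogram should sit at
# about 1/3 to 1/2 of the full-scale value for a given readout bit depth.
for bits in (12, 14, 16):
    full_scale = 2**bits - 1              # e.g. 65535 for a 16-bit readout
    low, high = full_scale // 3, full_scale // 2
    print(f"{bits}-bit readout: aim for roughly {low} to {high} ADU")
```

So on a 16-bit scale that is roughly 22,000 to 33,000 ADU, while a camera reporting native 12-bit statistics would land closer to 1,400 to 2,000 ADU.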
One tool that can help you get very controllable, even illumination across your entire aperture is an Electroluminescent (EL) panel or something equivalent. You can check out some options sized specifically for your telescope after the article below.
Our flat frames will mimic the amount of light fall-off across our sensor due to vignetting, as well as map any dust spots. The flat frames will also have read noise since that is present in every single type of frame. They will also have some dark noise, though usually less than our light frames since their duration is shorter. Flat frames will also generally be devoid of any light pollution since we are taking them in controlled situations. Therefore, our flat calibration frames may look something like this:
Figure 3: Raw flat frame
The goal here is to map as accurately as possible the severity of the vignetting, as well as the location and density of any dust spots. Then we use these frames in pre-processing to get an evenly illuminated image by dividing each light frame by our flat frame. The light fall-off at the edges of the frame, as well as the local "divots" from dust spots, are raised up, and the overall level of illumination becomes close to uniform.
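As a minimal illustration (not the author's exact workflow), the division step might look like this in Python with NumPy; `light_frame` and `master_flat` are placeholder names for 2-D arrays:

```python
import numpy as np

def apply_flat(light_frame, master_flat):
    """Divide a light frame by a master flat to even out the illumination.

    Normalizing the flat by its mean keeps the light frame's overall
    signal level roughly unchanged, while the darker corners and the
    dust-spot "divots" (values below 1.0 in the normalized flat) get
    boosted back up toward the rest of the field.
    """
    flat_norm = master_flat / np.mean(master_flat)
    return light_frame / flat_norm
```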
Pop quiz, hot-shot:
Why don't we want to use the flat frame as it is above to immediately divide our light frames?
Answer:
Recall that every kind of frame has the bias signal present, and that includes our flat frames. If we divided as-is, we would also be dividing by the bias signal and read noise present in our flat frames. We would therefore have an inaccurate map of how the light falls on our sensor, and the result would be an unusable correction. This means we need to first calibrate our flat frames even before we use them to calibrate our light frames.
As indicated in the image above, some dark current can also accumulate when taking flat frames. However, with a cooled camera and short exposure times (on the order of a second or less), not enough dark current builds up to justify dark subtraction from your flat frames. In this case, you only need to calibrate your flat frames by performing bias subtraction.
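A rough sketch of that bias-only calibration in Python with NumPy, assuming `flat_frames` is a list of raw flats and `master_bias` is an averaged stack of bias frames (all placeholder names, arrays as floats):

```python
import numpy as np

def make_master_flat_with_bias(flat_frames, master_bias):
    # Remove the bias signal from each individual flat...
    calibrated = [flat - master_bias for flat in flat_frames]
    # ...then median-combine them into a single master flat.
    return np.median(np.stack(calibrated), axis=0)
```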
If your flat frames require longer exposure times, such that dark current does accumulate, then you will need to calibrate them with dark frames. There are two ways to do this. The first is with what are called flat-darks. These are frames just like standard dark frames (same ISO/gain, offset, and temperature as your flat frames), but with the same exposure duration as your flat frames. These can then be used to subtract the dark current and bias from your flat frames.
If you take these as well as your standard dark frames, then there is actually no need to take bias frames at all. The bias signal is present in your darks and flat-darks, so it is removed from your flats when you do flat-dark subtraction and from your lights when you do normal dark subtraction.
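For illustration, the flat-dark route is a straight subtraction, since a master flat-dark carries both the dark current and the bias (again, placeholder names):

```python
import numpy as np

def make_master_flat_with_flat_darks(flat_frames, master_flat_dark):
    # The master flat-dark matches the flats' gain, offset, temperature,
    # and exposure length, so subtracting it removes dark current and bias.
    calibrated = [flat - master_flat_dark for flat in flat_frames]
    return np.median(np.stack(calibrated), axis=0)
```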
If you do not have flat-dark frames, but your flat frames do have dark current that needs to be calibrated out, then there is a different method you can use called dark scaling. Since our dark frames are usually longer than our flat frames, and the dark current is steady at a given temperature, we can extrapolate what the dark current would have been if the darks were the same duration as our flat frames. This is a common method in PixInsight's Weighted Batch PreProcessing script.
For this procedure, our flat frames first have the bias signal subtracted out (since we do not want to scale it), and the dark frames themselves have the bias subtracted out as well. The dark frames are then scaled down to artificially match the duration of the flat frames, and the scaled dark frames are subtracted from the flats. Voila! Now your flat frames are ready to be used on the light frames.
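Here is a hedged sketch of that dark-scaling sequence for a single flat frame, assuming the two exposure times are known; all names are placeholders:

```python
import numpy as np

def scale_dark_for_flat(raw_flat, master_dark, master_bias,
                        flat_exposure_s, dark_exposure_s):
    # Remove the bias from the master dark, leaving only the thermal
    # (dark current) signal behind.
    thermal = master_dark - master_bias
    # Scale the dark current down to the flat's (shorter) exposure time,
    # e.g. a 1 s flat against a 300 s dark gives a factor of 1/300.
    scale = flat_exposure_s / dark_exposure_s
    # Subtract both the bias and the scaled dark current from the flat.
    return (raw_flat - master_bias) - thermal * scale
```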
In either case, the goal is basically to end up with a flat frame whose profile looks like the one below, so that it can be used accurately when calibrating your light frames.
Figure 4: Calibrated flat frame
Bringing all of Part 6 and Part 7 Together
The last series of Paint images below shows the effect of each calibration frame on the full sensor.
Figure 5: Raw light frame
Figure 6: Dark frame subtracted light frame
Figure 7: Flat frame divided light frame (fully calibrated)
Our SNR has drastically improved, our light levels have been smoothed out across the entire sensor, and the light loss from dust spots has been corrected. However, notice something still remaining here? That's right: the light pollution, whether from artificial lights or the moon. Calibration frames cannot help mitigate those sources of unwanted stuff in our buckets. This is why dark skies, or special light pollution or narrowband filters, are still so important for astrophotography.
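To pull the whole Part 6 and Part 7 pipeline into one place, here is a compact sketch of calibrating a single light frame with hypothetical master frames built as described above (a flat-dark is used for the flats here; a master bias works the same way for short flats):

```python
import numpy as np

def calibrate_light_frame(raw_light, master_dark, master_flat, master_flat_dark):
    """Sketch of a fully calibrated light frame (placeholder names).

    The master dark matches the lights (exposure, gain, temperature);
    the master flat-dark matches the flats.
    """
    light = raw_light.astype(np.float64) - master_dark        # dark current + bias removed
    flat = master_flat.astype(np.float64) - master_flat_dark  # calibrated flat
    flat /= np.mean(flat)                                      # normalize to ~1.0
    return light / flat                                        # fix vignetting and dust spots
```

Light pollution and moonlight are still in the result; as noted above, no calibration frame can remove them.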
Takeaways
- Flat frames map the light intensity across your entire sensor
- Flat frames need to be calibrated before their use
- A fully calibrated light frame uses dark frames, flat frames, and often bias frames
- WHY DOES THIS SERIES HAVE TO END?!!
- ... it doesn't, just follow this link for more great reads