Astrophotography, Pixel by Pixel: Part 6 - Dirty Buckets and Calibration Frames

Welcome one and all to another installment of this landmark blog series! Much of the discussion here once again builds upon concepts covered in the previous entries, which you can find here:

Astrophotography, Pixel by Pixel: Part 1 – Well Depth, Pixel Size, and Quantum Efficiency

Astrophotography, Pixel by Pixel: Part 2 – Focal Ratio Effects

Astrophotography, Pixel by Pixel: Part 3 – Gain/ISO and Offset

Astrophotography, Pixel by Pixel: Part 4 – ADUs and You

Astrophotography, Pixel by Pixel: Part 5 – One Shot Color, Monochrome Sensors, and Filters 

And so let’s dive on back down to the tiny world of individual pixels and our analogy of buckets catching rain. Without further adu, it’s time to talk about dirty buckets. 

What Actually Fills our Buckets? 

Throughout this entire series, we have assumed that our buckets and incoming light are absolutely pristine: everything our buckets gather has been light from the object we care about. Sadly, this is not the case in the real world. There are several ways our buckets get filled with things other than rain water or photons from our target. I am, of course, talking about notorious Noise. (Strictly speaking, there is a difference between unwanted signal and true noise, but for the purposes of this article I am loosely lumping the two together. What we are focusing on here is distinguishing between the photons, and resultant electrons, that come from the target we are imaging, and everything else that makes it into our pixel buckets.)

In our extended (to near the breaking point) analogy, noise partially fills our buckets with things besides the drops of water we care about. There are two primary sources of noise, plus a third source that behaves slightly differently and can drown out our hard-won data.

Unfortunately, our pixel buckets are very dumb (but we still love them anyway): they do not differentiate between sources of electrons/water that fill them. All that our pixels care about is filling up with electrons, draining them at the end of each exposure, and sending out the value to be recorded in our frames.

Figure 1: The dumb but lovable Pixel

We know better. By understanding the different sources of photons or electrons that fill our buckets but do not come from our target, we can take steps to reduce the effect of the noise, so that what remains in our pixels is mostly from our target.

Figure 2: What's actually inside our pixels

The comparison of the quantity of electrons that come from our intended target to the quantity that come from noise is called the Signal to Noise Ratio (SNR). A significant amount of the game of astrophotography is maximizing the SNR: the higher it is, the cleaner the image, and the more faint detail of our target we can show.
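
To make the ratio concrete, here is a tiny sketch in Python. Both electron counts are made up purely for illustration:

```python
# Hypothetical counts for one pixel: electrons from the target
# versus electrons from everything else.
target_electrons = 4500
noise_electrons = 1500

snr = target_electrons / noise_electrons
print(f"SNR: {snr:.1f}")  # SNR: 3.0
```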

What follows is a discussion of the different sources of noise in our images and the steps we can take to mitigate their effects on our final images, thus improving the SNR.

Dark Current or Thermal Noise and Dark Frames 

Dark current and thermal noise are, for our purposes, the same thing. (Specifically, dark current is the process by which this source of unwanted electrons is generated, and dark noise is the statistical variation in that accumulation, roughly the square root of the accumulated dark signal.) This noise is generated whenever our sensor is on and taking an image. Two variables affect this source of noise: temperature and frame duration. The amount of noise (recall this is a filling of the bucket that is not due to light incoming from our target) increases the hotter our sensor is, and it also increases the longer our exposure lasts.

In our bucket analogy, this is like the buckets slowly gathering extra water while they are out trying to capture and measure the rain water we care about. It is almost like dew gathering: more is collected over time, and the amount depends on the temperature. (Do not confuse this analogy with actual dew forming on our gear. That is something different, and something we want to prevent as much as possible.)

As the sensor heats up, that thermal energy frees electrons in the pixels. The more heat there is, the more electrons are generated: a hotter chip creates more electrons than a cooler chip. This is one of the reasons a cooled camera is so useful in astrophotography. Even on relatively warm nights, a cooled camera with temperature regulation can push the sensor down to about -15 °C or -20 °C, which dramatically reduces the dark current. This ultimately helps the SNR, giving us cleaner-looking images.

The creation of these electrons happens at a rate, such as 0.05 electrons per second. This is why duration also matters. Let's keep that rate of dark current generation and do some quick math for different exposure durations. If we shoot for 60 seconds, each pixel would on average have 3 electrons counted in the final output that were not from our target (0.05 × 60). If we shoot for 5 minutes, or 300 seconds, this increases to 15 electrons.

Now imagine a warmer sensor where the rate of electron creation is instead 0.13 electrons per second. With the same 60- and 300-second exposures, the electrons created would be 7.8 and 39 respectively. A huge difference! In addition to cooling the camera sensor, there is another technique that can help reduce the effects of dark current on our final images: dark frames.
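
Here is the same arithmetic as a short sketch, so you can plug in your own numbers (the rates are the illustrative ones from above):

```python
# Accumulated dark current is simply rate (electrons/second)
# multiplied by exposure duration (seconds), per pixel.
def dark_electrons(rate_e_per_s, exposure_s):
    return rate_e_per_s * exposure_s

for rate in (0.05, 0.13):        # cool vs. warm sensor, as in the text
    for exposure in (60, 300):   # seconds
        print(f"{rate} e/s over {exposure} s -> {dark_electrons(rate, exposure):.1f} e")
# 0.05 e/s over 60 s -> 3.0 e
# 0.05 e/s over 300 s -> 15.0 e
# 0.13 e/s over 60 s -> 7.8 e
# 0.13 e/s over 300 s -> 39.0 e
```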

These are taken by completely covering the objective of the telescope or putting the lens cap on the camera lens. In fact, you can even remove the camera from your lens or telescope entirely and cap it. We do not want anything besides the dark current to be collected by our sensor.

Then take some frames of the exact same duration, same ISO/Gain setting, and same temperature (as much as possible) as your light frames. The goal here is to match exactly the quantity of electrons created by the dark current and record that as a separate image. Since the quantity of electrons generated depends in part on the temperature of the sensor, a temperature-regulated camera helps ensure that we match the dark current as closely as possible.

Figure 3: Contents of a dark frame

Then we can take the dark frame and subtract it from our light frames. This improves our SNR, and so improves our final result.
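
In code, the subtraction really is that simple. Below is a minimal numpy sketch; the tiny synthetic arrays stand in for real frames, which you would normally load from FITS or RAW files:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: one light frame and a set of matching darks.
light = rng.poisson(100, size=(4, 4)).astype(float)
darks = [rng.poisson(15, size=(4, 4)).astype(float) for _ in range(25)]

# Combining many darks into a master dark beats down the random part.
master_dark = np.median(darks, axis=0)

# The calibration step itself: subtract the master dark from the light.
calibrated = light - master_dark
```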

Figure 4: Result of Dark Frame Subtraction

Notice that the read noise present in our dark frame was subtracted out of our light frame right along with the dark current. What's the deal with that? The exact description of that source of noise, and how we can help manage it, brings us right into:

Read Noise and Bias Frames 

Another source of noise, or electrons that make it into our images that are not part of the target we care about, is called Read Noise. This is generated by the process of measuring and draining the pixels. It is introduced in every single frame taken, no matter the type, be it our light frames, dark frames, or flat frames (to be discussed in depth in a future installment).

This source is not affected by exposure duration or sky conditions, though it is affected by sensor temperature. It is a flat amount of noise introduced every time an image is read out, which is why it appears in our light, dark, and flat frames alike.

Luckily, there is a way to help mitigate this source of noise, similar to reducing dark current noise: bias frames. To record the read noise, cover the objective of the telescope or put the lens cap on your camera exactly as for the dark frames (here too, you can remove your camera from your lens or telescope entirely and then cap it). Then, at the same Gain/ISO setting and temperature, take a series of images at the fastest possible exposure time. This creates frames that record only the read noise.
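
A master bias is built the same way a master dark is: stack many frames and combine them. A hedged sketch with synthetic frames follows; the 500 ADU offset and 8 ADU of read noise are made-up values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bias frames: a fixed offset plus random read noise.
offset_adu, read_noise_adu = 500.0, 8.0
bias_frames = [offset_adu + rng.normal(0.0, read_noise_adu, size=(4, 4))
               for _ in range(50)]

# Median-combining averages away the random read noise, leaving the
# fixed pattern; in pre-processing this master bias gets subtracted
# from the flat frames (more on flats in the next installment).
master_bias = np.median(bias_frames, axis=0)
```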

Figure 5: Contents of a Bias Frame

There is no (or minimal) dark current since the exposures are so short. The only thing recorded is the electron count that is present simply because the camera took a picture at all. Once you have recorded the bias frames, they are used in pre-processing to remove the effect of the read noise from your flat frames; in your light frames, the read noise is removed via dark frame subtraction. Both further improve your SNR and result in cleaner final images.

Bias frames are not applied directly to the light frames the way dark frames are. Can you see why this is the case, based on the discussion above?

Recall that the read noise is present in every type of frame taken, including the dark frames themselves. Therefore, when we take our dark frames and subtract them from our light frames, we are actually subtracting the dark current and the read noise at the same time. It's a two-for!
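
The bookkeeping behind that two-for is easy to verify with hypothetical numbers (every count below is invented for illustration):

```python
# Each frame type records a sum of contributions.
target, dark_current, bias_signal = 4500.0, 15.0, 500.0

light_frame = target + dark_current + bias_signal  # lights hold everything
dark_frame = dark_current + bias_signal            # darks hold dark current AND bias

print(light_frame - dark_frame)  # 4500.0 -- only the target remains
```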

Sky Glow and Light Pollution 

Recall that our pixel buckets do not make any sort of differentiation between types of incoming light that they capture; all of it is the same to them. A large source of photons that make it to our pixels that is not from our intended target is ambient light glow. This generally has two different sources: light pollution from electric lights and the moon. 

You may be unlucky with the amount of light pollution in your region, but luckily this effect can still be demonstrated in our persistent analogy. Recall that we are imagining our sensor as an array of buckets out on the lawn, each capturing rain to record how much fell directly on the area that particular bucket covers.

Light pollution from electric lights would be like a very inconsiderate neighbor leaving their sprinklers on while you have your buckets out on the lawn. The water from the sprinklers makes it to your buckets as well, and the quantity captured from one source or another is completely indistinguishable. The water that you care about gets mixed in with the water from the sprinkler. And there is no worse neighbor with leaky sprinklers than the full moon. 

There are two ways to mitigate this source of unwanted water. The first is to image from a dark sky location when the moon is down; the best way to improve your SNR in this regard is to get away from the source of the pollution! If all settings, including exposure duration, are exactly the same, a dark sky site will result in less sky glow reaching our pixels.

Figure 6: Different Amounts of Light Pollution
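
One way to see why the darker sky helps: in the common shot-noise approximation, noise grows as the square root of all electrons collected, so extra sky glow hurts even though its average level can be subtracted later. Every number in this sketch is illustrative, not measured:

```python
import math

def snr(target_e, sky_e):
    # Shot-noise approximation: total noise ~ sqrt(all electrons collected).
    return target_e / math.sqrt(target_e + sky_e)

target = 900.0                       # electrons from the object in one sub
print(f"{snr(target, 400.0):.1f}")   # dark site sky glow -> SNR ~ 25.0
print(f"{snr(target, 8100.0):.1f}")  # city sky glow      -> SNR ~ 9.5
```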

The other option is to use a filter that can distinguish between the different sources of water or light. This would be like setting a special screen over the top of all your buckets that only allows real rain water through instead of water from your neighbors. A more robust discussion of the operation and effects of filters can be found in part 5 of this series, which for extra convenience I’ve linked to again here (there is a link to all the other parts of this series at the top of the article). 

Putting it all Together 

So let’s go back to our very first image of a pixel and see how all of the concepts above play out. We have a pixel that is filled to a specific quantity of electrons. That is all the pixel knows. It has no idea if the quantity comes from the target we care about, dark current, read noise, or sky glow from the moon or light pollution. 

Since you are smarter than the pixel (I'm not too sure I can make that claim about myself…yet), you can very closely mimic the quantity of electrons created by these sources that are not the target we care about. We can take some dark frames and subtract those values from each image. This by itself removes both the dark current and the bias signal. We are left with far cleaner data, better SNR, and therefore a much improved final image.

These calibration frames are key to creating the best-looking images. Applying all of these steps is actually done very easily, and nearly automatically, with calibration and stacking software such as Deep Sky Stacker, Nebulosity, or PixInsight.

There is one more important kind of calibration frame: Flats. Our next installment in this series will dive into the effects of these frames, how they interact with dark and bias frames, and some techniques on how to take the flat frames. Stay tuned!


Takeaways 

  • We want to maximize Signal to Noise Ratio (SNR)
  • Signal comes from our target, Noise comes from many different sources
  • Dark and Bias frames help improve our SNR
  • Dark skies or filters (light pollution specific or narrowband) are your best friend

You can read the next, and final, installment of the blog series here.

6 comments

  • Al -
    Thanks for your question! Yes, you must take flats for each binning separately. This is because the geometry of the images must match. So, if you bin 2×2 for color and 1×1 for Lum, you should take your flats in those corresponding bin settings for those filters. It is worth noting, too, that even if you take images all binned 1×1, you should be taking a series of flats for each filter used.

    Matt Dahl
  • I use a monochrome CCD and LRGB and NB filters. Usually L and NB filtered subs are binned 1 × 1 and RGB subs are binned 2 × 2. Must I take flats and darks at both binnings for these filters or can all of my flats and all of my darks simply be taken at 1 × 1?

    Al Ryan
  • Hello Dana,

Thanks for your question! We are still getting everything back to a more normal flow after NEAF (Matt and Stephanie had a blast!), but Part 7 is on its way soon and covers much of exactly what you are asking. One thing to note here is that calibrating your data with dark (and bias) frames versus with flat frames uses different operations, which is why we need to remove the bias signal from our flat frames before we use them in calibration.

The short answer is that dark current and bias signal are being subtracted from our light frames, whereas our light frames are being divided by our flat frames. This fundamental difference is why our flat frames actually need to be calibrated themselves before we use them, with the bias signal removed. Otherwise we would be dividing our light frames by the bias signal as well! Not good! :) The full discussion of the operation and parts of a flat frame, with the usual style of figures and diagrams, is in the next part. Stay tuned!

    Jon Minnick
  • Jon,
I am jumping ahead here, but I want to talk about flats a bit. This article left the bias frames abandoned since the read bias is being removed at the same time the dark current noise is being removed. So, clearly (or it is now clear to me) the bias frames will be used to remove the read noise from the flat frames. In this manner, when the flat frames (less read noise) are removed from the target frame, we aren't subtracting the read noise twice, which is what would happen if both darks and flats were removed without pre-processing at least one of them.

    It seems that removing read bias from flats is an added step. What if the read noise values were subtracted from the darks in the production of your master dark frame? This would mean that the dark frame would remove only dark noise and then removing the flat (which includes the read noise) would remove the light pollution and the read noise at the same time.

    Your thoughts?
    Dana

    Dana Bowdish
  • Hello Dana, I’ll tackle the questions in the same order:

    1) You will want to take your dark frames to match the ISO/Gain, exposure duration, and temperature of your light frames. For each different exposure duration or ISO setting, you will want 25-30 dark frames to combine into a single master dark. For example, if I was imaging at 800 ISO for 90 seconds, I would want 25-30 dark frames at 800 ISO and 90 seconds. If I then imaged a different target at 1600 ISO for 120 seconds, I would want 25-30 dark frames at 1600 ISO and 120 seconds to use for calibrating those light frames.

2) If I understand you correctly, that is the case. You will want the bias frames to be of the same ISO, and the shortest exposure possible, and also ideally near the same temperature (though this is not quite as vital since the exposures are short enough that they will not build up dark current). Having a set of bias frames or the master bias for different ISOs will have you set up to use them for any exposure setting you take down the road.

    3) There are many options to make flats, using an artist’s tracing panel, sky-flats, or T-shirt style flats. You will want as uniform of an illuminated surface as you can get. Then set your DSLR to aperture priority, and tweak the exposure duration until the histogram is filled to about 25-33%.

    More specific details as to what the flat frames do will be in Part 7, which should be posted in the near future once we are all caught back up from NEAF.

    Jon Minnick
