Dark, flat & bias frames - why I don't use them

   

It worries me that some introductory texts put beginners off by jumping straight into the calibration of images using dark, flat and bias frames when in reality they are not important for obtaining spectacular photos. This page explains why they are not important - unless you wish to do photometry (measure brightnesses, eg of variable stars or comets). My advice is to get some basic experience without them first.

My deep-sky astrophotographs are always taken at the maximum standard ISO setting of my camera (the highest non-extended one: the extended ones amplify the signal in software after digitisation, which would reduce the dynamic range available). In dark skies, away from light-polluted towns and cities, with the camera at the prime focus of my 254mm Newtonian telescope, ISO 6400 enables exposures of the order of a minute or more before the background sky level becomes too high.

The drive of my HEQ5 equatorial mount is accurate enough that with the camera at the prime focus of the telescope I can take exposures of up to about 1 minute without the stars trailing by more than the blur diameter caused by typical atmospheric turbulence (the seeing). If I am using the camera with a telephoto lens to get a wider field of view than the telescope, I can expose for more than 2 minutes before drive inaccuracies would become noticeable.

 

 Noise

Images taken at the highest ISO sensitivity settings, which we need for faint deep-sky objects, are inevitably noisy (like grainy film). There are two main types of noise in a Digital Single-Lens Reflex (DSLR) camera: thermal noise and fixed-pattern noise (for more detail see Noise, Dynamic Range and Bit Depth in Digital SLRs, Emil Martinec, May 2008).

The stacking process (eg, in GRIP) deals with the two types of noise in different ways.

  1. Thermal noise, as the name suggests, is worse at higher temperatures. It is due to the random thermal motions of electrons. The good thing about it is that it is random, so it averages out in a long total exposure time. The most important point is that the averaging occurs regardless of whether that is one long exposure or the sum of many shorter ones. The total level of the noise rises with time but the stacking program deals with that by stacking into a deeper image in the computer's memory: instead of the camera's 14 or 16 bits per pixel per colour channel we use 32.

    DSLR manufacturers now claim to digitise 14 bits in each colour, giving a possible range of pixel values from 0 to 16,383 (that is, 2¹⁴ steps). By stacking into pixels with 2³² possible values (more than 260,000 times as many) we avoid hitting the upper limit as values are added (see the sketch after this list).

  2. Fixed-pattern noise is the same on every frame. It can be dealt with by making dark frames (see below) and subtracting them from the image. I will show that it can be dealt with in another way too.
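As a concrete illustration, here is a minimal Python/numpy sketch (not GRIP's actual code) of the deep-accumulator idea from point 1: because 2³² / 2¹⁴ = 2¹⁸, over 260,000 maximum-value 14-bit frames could be summed before a 32-bit pixel overflows.

```python
import numpy as np

# Stand-ins for 14-bit RAW frames (values 0..16383).
frames = [np.random.randint(0, 2**14, size=(4, 6), dtype=np.uint16)
          for _ in range(100)]

# The "deeper" accumulator image: 32 bits per pixel per channel.
accum = np.zeros((4, 6), dtype=np.uint32)
for f in frames:
    accum += f          # random thermal noise averages out in the sum

print(accum.max(), "is far below the 32-bit limit of", 2**32 - 1)
```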

I do not recommend using in-camera noise reduction. It is not designed for astrophotography but for "normal" photography in low light levels. It is therefore likely to smooth away the faint details we want to detect. At best it is a waste of precious observing time: our long total exposure time and the stacking process will do the job anyway.

I have traded up through newer Canon models over more than 10 years and I have seen the fixed-pattern noise becoming less and less noticeable, to the point that it can now often be ignored.

 

 Dark frames

Dark frames are exposures of the same length as the real ones but with the lens cap or body cap on the camera so that no light gets in. At least as many dark exposures must be taken as real ones and then averaged together; otherwise thermal noise would be reintroduced into the image. Ideally dark frames should be taken at the same temperature as the real exposures. The weather in this country is such that observing time is extremely precious, so it is preferable not to waste it on acquiring dark frames but to do that at another time.
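To illustrate why the averaging matters, here is a small numpy sketch with made-up numbers: the thermal noise of a master dark built from n frames is roughly 1/√n of a single frame's, so subtracting it adds back far less noise than subtracting a single dark would.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fake dark frames: a fixed pattern of 100 counts plus thermal noise (std 5).
darks = [100 + rng.normal(0, 5, size=(4, 6)) for _ in range(32)]

master_dark = np.mean(darks, axis=0)   # noise std shrinks roughly sqrt(32)-fold

print(np.std(darks[0] - 100), "vs", np.std(master_dark - 100))
```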

 

 Flat frames

Flat frames are exposures of uniform white sheets which record pixel-to-pixel variations due either to variability across the detector or to lens factors such as vignetting in the corners or spots and rings due to out-of-focus dust. Flat frames MUST be taken at observing time because the camera must not move in relation to the rest of the optics between taking real data frames and flat frames. More details about obtaining flat frames are given in a later section.

 

 Bias frames

Bias frames aim to correct for deliberate small offset voltages designed in by some camera manufacturers so that the zero-light level (no photons counted) is above zero volts; the noise then varies both above and below the bias level instead of being truncated at zero. By taking the shortest possible dark frame exposure (1/8000s in my camera) and looking at the modal values (RGB) of the histogram (eg, in GRIP) you can tell whether there is a bias. In my camera, at whole-stop ISO settings (800, 1600, 3200, etc*) the modal values are either powers of 2 or sums of a couple of such powers, which I think indicates deliberate design. So I deduce that Canon cameras do have bias offsets. It is rumoured that Nikon cameras do not. Camera manufacturers seem neither to confirm nor deny such details.
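For anyone wanting to try that check themselves, here is a hedged Python sketch (assuming the rawpy library and an RGGB Bayer layout; the filename is hypothetical) that prints the modal value of each Bayer channel of a shortest-exposure dark frame:

```python
import numpy as np
import rawpy

with rawpy.imread("dark_1_8000s.CR2") as raw:      # hypothetical file name
    bayer = raw.raw_image_visible.copy()           # copy before the file closes

# Split the mosaic into its four channels (RGGB layout assumed).
channels = {"R": bayer[0::2, 0::2], "G1": bayer[0::2, 1::2],
            "G2": bayer[1::2, 0::2], "B": bayer[1::2, 1::2]}

for name, ch in channels.items():
    mode = np.bincount(ch.ravel()).argmax()        # modal pixel value
    print(name, "mode =", mode)  # a power of 2 here suggests a designed-in bias
```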

 Why bias frames are not needed at all

Any bias is present in the dark frames as well as the real data frames and therefore is subtracted when the dark frames are subtracted: each real frame is signal + dark current + bias, each dark frame is dark current + bias, so the subtraction removes both unwanted terms at once. So there is no point in taking bias frames as part of the standard process. (CCD cameras are different, and it is from them that the term bias frame comes.)

* The reason I specify whole-stop ISO settings is that it is believed that software amplification after digitisation is used for half- and third-stop settings. That reduces dynamic range and so should be avoided. This is another detail that is hard to confirm.

 

 The standard stacking process

The textbook stacking process is as in the following diagram.

Having already suggested that beginners should not worry about many details of the process I will go on to demonstrate that certain aspects of the overall photography-plus-stacking procedure reduce the need for them anyway.

The diagram is simplified because strictly you need to average two sets of dark frames:

  1. darks matching the real data frames, and
  2. darks matching the flat frames.

The point is that the dark frames must be exposed with the same camera settings (and at the same temperature) as the frames they are correcting. Flat frames would be taken at much lower ISO (to reduce noise) and exposure duration (because there is ample light) than the real data frames.
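In code, the textbook process amounts to something like the following minimal numpy sketch (the function and variable names are mine, for illustration; GRIP's real pipeline is more elaborate):

```python
import numpy as np

def calibrate(data, dark, flat, flat_dark):
    """Textbook calibration: subtract matching darks, divide by the flat."""
    flat_corrected = flat - flat_dark         # flat with its own dark removed
    flat_norm = flat_corrected / flat_corrected.mean()  # unity-mean flat
    return (data - dark) / flat_norm          # calibrated data frame
```

Dividing by a unity-mean flat raises the vignetted corners and dust shadows back to the common level without changing the overall brightness.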

   

 RAW images

RAW images from a DSLR look like this when magnified so that the pixels become visible:

The pixels on the CMOS detector chip have coloured filters in front of them: Red, Green and Blue. To make a rectangular array there are twice as many Green pixels as Red or Blue ones. Reading a RAW image file involves interpolating across the pixels to fill in the gaps in each colour channel. To get the Red value at a pixel which really has a Green filter it is necessary to interpolate from neighbouring pixels that have Red filters.
(I am using capital initials here as a reminder of the abbreviation RGB.)
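Here is a toy Python sketch of that interpolation (illustrative only, assuming an RGGB layout; real demosaicing algorithms are cleverer): to estimate Red at a Green-filtered site, average the Red-filtered neighbours either side of it.

```python
import numpy as np

bayer = np.array([[10, 20, 12, 22],   # R G R G   (RGGB layout assumed)
                  [30, 40, 32, 42],   # G B G B
                  [11, 21, 13, 23],   # R G R G
                  [31, 41, 33, 43]])  # G B G B

# Red is only sampled at even rows and even columns; estimate Red at the
# Green site (0, 1) from the Red sites either side of it on the same row.
red_at_green = (bayer[0, 0] + bayer[0, 2]) / 2
print(red_at_green)                   # 11.0
```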

This arrangement of filters was invented by Bryce Bayer in 1974, before there were any digital cameras. Here is a short article about it.

 

 Hot pixels

You will probably have noticed in the RAW image above that there is an isolated bright red pixel towards the bottom right. This is a defective pixel that claims to have a full photon count when it almost certainly has not. This type of defect in the detector chip is commonly called a hot pixel. It is likely to be the same in every frame. Other pixels may be defective in less obvious ways.

Digital cameras typically have several defective pixels. Obviously manufacturers try to keep them to a minimum (and never mention them in user documentation) but if they rejected all but the absolutely perfect detector chips, assuming that is achievable at all, the cameras would be far too expensive - astronomically priced you might say.

 What really happens

Imagine putting the camera either on the eyepiece end of a telescope or, equipped with an ordinary photographic lens, fixing it directly on an equatorial mount. We switch the mount on, so it drives at sidereal (that is, star) rate to compensate for the Earth's rotation (assuming the mount has previously been aligned on the celestial pole). Having focussed and put our target near the middle of the field of view, we take a large number of half-minute exposures.

On examining the images we find a couple of problems.

Firstly the stars are not in the same position on every frame because our motor drive is not perfect. What are the options?

The essential job of the stacking software is to detect the stars in each image, match their patterns between frames, and shift (or better, particularly for wide-angle non-telescopic views, do a rubber-sheet warp) to superimpose the stars before adding the frames together in that extra-deep (32 bits per colour channel) accumulator image.
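Here is a hedged sketch of that align-and-accumulate idea. GRIP matches star patterns; this toy version, assuming greyscale frames and the scipy and scikit-image libraries, registers frames by phase correlation instead:

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def stack(frames):
    """Align every frame to the first, then sum into a deep accumulator."""
    ref = frames[0]
    accum = np.zeros(ref.shape, dtype=np.uint32)
    for f in frames:
        offset, _, _ = phase_cross_correlation(ref, f)  # (dy, dx) drive error
        aligned = shift(f, offset, order=1)             # move stars into register
        accum += np.clip(aligned, 0, None).astype(np.uint32)
    return accum
```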

Secondly we notice that there are a number of quite prominent red, green and blue single-pixel dots in the image. These are due to defective pixels. Send the camera back? No camera is perfect in this respect, as we have already discussed. As well as the bright dots (hot pixels) there will inevitably be some less obvious defective pixels scattered across the image too.

The textbook stacking process shown above is intended to deal with defective pixels because they will also be present in the dark frames and so will be subtracted from the real frames. But that may leave some black dots where we really do not want them: on stars, for example.

 Why dark frames are unnecessary

However, the combination of the two problems (inaccurate drive and imperfect pixels) can be exploited to our advantage. We have said that the stacking software will align the stars. In so doing it will move the bad pixels so they do not occur in the same place every time when the frames are added. In fact they will appear to wander around fairly randomly and thus be enormously watered down in the accumulated result (their brightness multiplied by the reciprocal of the number of frames).

It is worth noting that the same thing happens to the fixed-pattern noise of the detector chip, so it is smeared out and its variations are watered down.

The movement between frames also ensures that we sample the point spread function randomly rather than at whole-pixel spacings.
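A toy numeric illustration of the watering down (made-up numbers): if a saturated hot pixel lands somewhere different in each of 50 aligned frames, the averaged image contains only about a fiftieth of its value at any one place.

```python
import numpy as np

n_frames, hot_value = 50, 16383          # a saturated 14-bit hot pixel
rng = np.random.default_rng(1)

accum = np.zeros(100)                    # a 100-pixel strip of the accumulator
for _ in range(n_frames):
    accum[rng.integers(0, 100)] += hot_value   # lands wherever alignment puts it

average = accum / n_frames
print(average.max() / hot_value)         # a few per cent of full scale at worst
```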

 Why guiding systems can be more trouble than they are worth

If we had used a guiding system it would have been necessary to make deliberate position offsets between frames to ensure the same result. This has to be done for such advanced machines as the Hubble Space Telescope too.

 The stacking process in practice

So why is it better not to take a single long exposure? As we have seen, the sky background and drive inaccuracies limit a single exposure to a minute or two anyway, and the slight movement between many short frames is exactly what waters down the defective pixels and the fixed-pattern noise.

   

 Obtaining flat frames

We have seen that bias frames are absolutely not needed for DSLR work and dark frames are pretty much a waste of time unless you want to measure brightness from your images (photometry).

You may want to use flat frames, in two circumstances in particular:

  1. when photographing through a telescope, where vignetting may be quite severe in the corners;
  2. when the camera looks through an eyepiece, where partly focussed dust on the many optical surfaces causes blobs and rings.

A flat field (or frame) is a photo of a uniformly illuminated white area, completely filling the frame. It must be photographed with exactly the same optical arrangement as the images it is to be used to correct. Although it is an image of a flat field, the flat field image will not generally be flat (ie, of exactly uniform brightness) itself. The point is that it records variations of brightness due to the optical instrument and varying pixel effectiveness.

When photographing through a telescope there is likely to be vignetting, and that may be quite severe in the corners. In configurations where the camera is looking through a telescope eyepiece there will generally be all kinds of blobs of dust on numerous optical surfaces causing problems, because they are partly or sometimes wholly in focus.

As long as the flat field has been obtained with the same eyepiece and nothing has been rotated or refocussed, the real image may be corrected by dividing it by the flat field image. The division process brings all pixels up to the same background level, so the result is similar to that of background correction for non-telescopic photos. If using a high ISO sensitivity in the camera, the flat field image should be an average of several photos. However, the flat field image does not have to be photographed at the same ISO setting as the images to be corrected, so it may be possible to avoid averaging flat fields.

So how can we obtain flat field images? I use a cardboard box, roughly cubic in shape, slightly wider than the diameter of the telescope. A large circular hole in one side enables it to fit over the end of the telescope. Four torch bulbs in the corners, just inside the hole, are powered by a small battery. The inside of the box is covered with plain sheets of white paper. That is very easy to use and very effective when the resulting image is used in GRIP. It even corrects those pesky specks of dirt on the camera detector itself, a common problem with DSLR cameras.

Here is a picture of my flat field box. You can see one of the four lights inside it. The circular hole fits exactly over the end of my telescope.

Note that the inevitable lines in the corners of the box, or indeed any other small marks, are not a problem because they will be way out of focus when seen through the telescope.

Here is the box mounted over the front of my telescope.

In practice I rarely use flat frames and almost never use dark frames.

 The medianiser

In the description of the stacking process above we saw how the hot pixels are watered down: because they wander about from frame to frame their values are divided by the number of frames in the sequence. There is a process available in GRIP which can do even better, omitting the hot pixels altogether. Unfortunately it is a much slower process than the standard method of stacking.

The improved process can be performed as two distinct steps in GRIP, as follows.

  1. The first step is to align the images as if going to stack them but instead saving each warped/shifted image as a new file. This process is available on the Batch/Astro menu: "Astro warp/shift onto common basis". The resulting files have the stars aligned, so the hot pixels (and fixed noise pattern) will generally have moved around.
  2. Then, again on the Batch/Astro menu, use the option "Create median pixels image". For every pixel in the image this makes a list of its value (or values: red, green and blue separately) in every aligned image of the sequence. It sorts the list and takes the median value as the resulting value(s) for that pixel in a brand new image. This means that any extreme values, such as for the wandering hot pixels, are not merely watered down by averaging but are completely ignored. It is also a good way of eliminating noise. (A minimal sketch of this step follows the list.)
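Here is a minimal Python/numpy sketch of the medianising idea (not GRIP's implementation: as explained below, GRIP works in slices because a whole sequence rarely fits in memory, whereas this toy version assumes it does):

```python
import numpy as np

def medianise(aligned_frames):
    """Per-pixel median across a sequence of already-aligned images."""
    stack = np.stack(aligned_frames, axis=0)  # shape: (n, height, width[, 3])
    return np.median(stack, axis=0)           # extreme values are ignored entirely
```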

The second step (medianising) is quite slow to process because there is not likely to be enough memory in the computer to hold the whole sequence of images at once. Instead the process divides the images up into slices, according to the image size and the amount of memory available. The image files therefore have to be loaded multiple times. Time for coffee (or probably a leisurely lunch)!

Standard stacking is much faster because it is only necessary to hold images from the sequence one at a time. Two passes are used though: one to find the star positions; then, after the matching patterns have been analysed, a second pass does the alignment.
