Image Calibration Guide
Astrophotography can be far more complex and challenging than most other types of photography. To get the most out of your camera you may need to calibrate your images. This can seem daunting at first, so we’ve put together an article explaining some of the details involved in image calibration. It covers what image calibration is, what it aims to do, how calibration frames achieve this, and finally a few shortcuts you can use to make life easier.
Image Calibration is the term generally used to describe the use of dark, bias and flat frames to calibrate an image. Ideally, all the information in an astro image would come from the object we are taking a picture of; sadly that’s not the case. Images contain hot pixels, gradients, dust doughnuts and vignetting. All of this distracts from the object you are imaging and can leave the result looking unclear and unpolished. It’s the job of the calibration process to remove the unwanted signal, leaving just the astro image. Typically you will take a number of calibration frames before or after your imaging session, and these are then used in post-processing to perfect your image.
A typical image calibration can be expressed as:

Calibrated Image = (Image - Bias - Scaling Factor x (Dark - Bias)) / (Flat - Bias - Dark)

It looks complicated, so let’s take each frame one at a time.
Bias Frames:
A bias frame is a very short dark exposure, typically 1 ms, taken with a cap on the camera or the shutter closed to keep out light. The information in this frame includes variations in signal across an image due to voltage fluctuations during the read-out process, gradients down an image due to signal building up during read-out, and amp glow. The important property of the bias frame is that it contains the signal related to the read-out process, and this signal is present in every image.
Dark Frames:
A dark frame aims to capture information on the thermal signal. During a long exposure, electrons accumulate slowly in the pixels of the sensor, independent of the light falling on it. The amount of this signal depends on how “hot” a pixel is, the exposure duration and the temperature of the sensor. Due to small defects in the silicon the sensor is made from, some pixels pick up thermal signal more quickly than others. Those that saturate during an exposure are described as hot; those not saturated but with significantly more signal than average are described as warm. All pixels, however, contain slightly different levels of dark signal. Whatever the level, the value scales linearly with exposure time and exponentially with increasing temperature.
To calculate and correct this dark signal we take a dark frame. However, like all images, this contains a bias signal, so the first thing to do is subtract the bias frame from it to get the dark current:

Dark Current = Dark Frame - Bias Frame
We can then scale this to match the dark current in the image frame. If the dark is taken at the same exposure duration and temperature as the image then the factor is 1. If the dark frame is half the duration of the image then the factor is 2. If the sensor has a dark current doubling temperature of 8 degrees and the dark frame was taken 8 degrees cooler than the image, the factor is multiplied by a further 2.
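As a rough illustration, here is a minimal Python sketch of that scaling. The frame contents, exposure times and doubling temperature below are placeholders rather than measured values, and real master frames would be loaded from your captured files.

```python
import numpy as np

# Placeholder stand-ins for real master frames so the sketch runs on its own;
# in practice these would be loaded from your captured bias and dark frames.
master_bias = np.full((100, 100), 500.0)
master_dark = master_bias + 20.0

light_exposure = 300.0   # seconds, exposure of the image (light) frame
dark_exposure = 150.0    # seconds, exposure of the dark frame
temp_difference = 8.0    # degrees the dark was taken cooler than the image
doubling_temp = 8.0      # dark current doubles every this many degrees

# Subtract the bias to leave just the thermal (dark current) signal
dark_current = master_dark - master_bias

# Scale for the exposure ratio and the temperature difference
scale = (light_exposure / dark_exposure) * 2 ** (temp_difference / doubling_temp)
scaled_dark = dark_current * scale   # a factor of 2 x 2 = 4 in this example
```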
Flat fields:
Flat fields are images of an evenly illuminated target, for example a flat panel light (an evenly lit white t-shirt also works well). Under such illumination you might expect every pixel to have the same value, but this is not the case. Deviations occur on different scales. Pixel-to-pixel variations are due to the defects in the silicon the sensor is made from. Visible grid patterns can be due to the placement of the micro lenses on the surface of the sensor. Dust doughnuts and dust spots are caused by dust at different distances from the sensor. Brighter illumination at the centre of an image and a fall-off at the corners is due to vignetting in the telescope optics. All of this can be corrected with a good quality flat. As the flat contains both bias signal and thermal signal, it is first corrected for both; the image frame is then divided by the corrected flat field.
Using this method we can remove most of the unwanted signal from an image, leaving just the signal from the object being photographed.
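Putting the pieces together, a minimal Python/NumPy sketch of the whole calibration might look like the following. The function and array names are illustrative, and this is just one way of arranging the arithmetic described above.

```python
import numpy as np

def calibrate(light, master_bias, master_dark, master_flat, dark_scale=1.0):
    """Sketch of the calibration described above.

    All inputs are floating-point NumPy arrays of the same shape.
    dark_scale is the exposure/temperature scaling factor from the dark
    frame section (1.0 if the darks match the lights).
    """
    dark_current = master_dark - master_bias
    # The flat also contains bias and thermal signal, so correct it first,
    # then normalise it so dividing by it preserves the image brightness.
    flat_signal = master_flat - master_bias - dark_current
    flat_norm = flat_signal / np.mean(flat_signal)
    return (light - master_bias - dark_scale * dark_current) / flat_norm
```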
The need for repetition:
Just as taking a single exposure of a deep sky object isn’t enough, taking a single frame of each calibration type isn’t either. The three calibration frames (flat, dark and bias) all contain noise. It’s important that this noise is minimised, or it will be added to the image during calibration and reduce the overall clarity of your image.
The way this is done is to take a large number of calibration frames and average them to make a master dark, master flat and master bias. You should aim to have more sub exposures in each calibration master than you have in your image stack.
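As a sketch, building a master is just a pixel-by-pixel combination of the individual frames. Here is one way it might look in Python with NumPy; a median is used, though a mean or sigma-clipped mean is also common, and the frame lists are assumed to be loaded already.

```python
import numpy as np

def make_master(frames):
    """Combine a list of 2-D calibration frames into a single low-noise master."""
    return np.median(np.stack(frames), axis=0)

# For example (frame lists assumed to be loaded from your capture software):
# master_bias = make_master(bias_frames)
# master_dark = make_master(dark_frames)
# master_flat = make_master(flat_frames)
```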
When to calibrate your image:
The rule of thumb is to calibrate your individual sub exposures before alignment and stacking; it should be the first step in your image processing routine.
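In terms of the order of operations, a hedged sketch of the processing loop is shown below. It reuses the calibrate function from the earlier example; light_subs and the master frames are assumed to exist, and the alignment step is a placeholder, since real alignment is handled by your stacking software.

```python
import numpy as np

def align_to_reference(frame, reference):
    # Placeholder: real alignment (star matching) is done by your stacking
    # software; returning the frame unchanged keeps this sketch runnable.
    return frame

# light_subs is assumed to be a list of raw sub exposures (NumPy arrays).
# Calibrate first, then align, then stack.
calibrated = [calibrate(sub, master_bias, master_dark, master_flat) for sub in light_subs]
aligned = [align_to_reference(sub, calibrated[0]) for sub in calibrated]
stacked = np.mean(np.stack(aligned), axis=0)
```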
Shortcuts:
The full image calibration process is long winded, complicated, and prone to introducing new artefacts. Here are a few shortcuts if you’re ever pressed for time or just don’t want to go through the full process.
Sometimes you can get away without bias frames:
Take your dark frames at the same temperature and exposure duration as your image frames. The dark frame will then have the same dark current and hot pixels as your image frame. Even better, the dark frame also contains the bias signal. This means you can calibrate your image just by subtracting the dark frame and applying a flat field correction, removing the need for bias frames. This works well on all our cameras.
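As a sketch (using the same kind of illustrative NumPy arrays as the earlier examples), the calibration then collapses to a dark subtraction and a flat division:

```python
import numpy as np

def calibrate_matched_darks(light, master_dark, master_flat):
    """Shortcut calibration when darks match the lights in exposure and temperature.

    The master dark already contains the bias signal, so subtracting it
    removes both at once; only the flat-field division remains.
    """
    flat_norm = master_flat / np.mean(master_flat)
    return (light - master_dark) / flat_norm
```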
Dither your images:
Dithering moves the image around the sensor between exposures. During stacking the images are aligned, and small scale defects such as hot pixels and pixel-to-pixel variation can be corrected or averaged out. This can remove the need for dark frames on cameras that don’t require large scale corrections in their flats and darks. This is especially true for the Sony CCD cameras and also works well for the Kodak sensors. The CMOS sensors benefit less due to amp glow which, while small, can sometimes still be seen in a highly processed image.
Flat fields:
In the above example we corrected the flat field for both bias and dark signal. If the flat field has a short exposure and an average pixel value of about half the saturation point (i.e. it is bright compared with the dark and bias frames), then the dark and bias corrections are not really needed. Also, if your telescope optics evenly illuminate your sensor and you keep your optics free of dust, flat fields can be avoided altogether.
Image calibration is a complex subject. If you are using a relatively small, high quality Sony sensor and dither your exposures, you might find you don’t need to calibrate at all. On the other hand, if you are using a large format camera or a camera with a CMOS sensor, especially with less expensive telescopes, you are going to need to do some calibration to get the best out of your images. The best advice we can give is that practice makes perfect, and if you ever need any help our dedicated support team are always here.