The Differences between CCD and CMOS Sensors

By Jo

A few weeks ago we announced that we’re working on a new cooled CMOS camera for astrophotography. While we continue working towards a release date, we thought it a good time to start taking a look at some of the differences between CCD and CMOS sensors.

We’ll start with a more technical look at the differences between the two technologies and how they produce an image. We already have a guide on CCD sensors that provides an overview of how the sensors actually work. However, I’ll highlight and expand on a few points here.

CCD Sensors

The area on a CCD (Charge Coupled Device) sensor is divided into pixels using a series of channel stops and gates. While the sensor is exposing, photons that fall on a pixel are converted into electrons and stored as charge packets.

To read out the sensor after the exposure, we ‘clock’ the gates that form the pixels. This moves the charge packets down the image sensor and into the horizontal readout register. Once in this register, we can use a similar method of clocking to move the charge packets one by one to an amplifier that converts the number of electrons in each packet to a voltage.
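To picture what that clocking does, here’s a toy Python sketch of the shift-and-read pattern. It isn’t a model of any real sensor or driver: rows of charge packets are shifted down into the horizontal readout register, and each packet then passes through the same single output amplifier. The sensor size and gain figure are made up for illustration.

    import numpy as np

    # Toy CCD readout: every pixel's charge packet ends up going through
    # one output amplifier. Sensor size and gain are illustrative only.
    rng = np.random.default_rng(0)
    sensor = rng.poisson(lam=500, size=(4, 6)).astype(float)  # electrons per pixel
    GAIN_UV_PER_E = 2.0  # hypothetical amplifier gain, microvolts per electron

    output_voltages = []
    for _ in range(sensor.shape[0]):
        readout_register = sensor[-1].copy()  # bottom row enters the horizontal register
        sensor[1:] = sensor[:-1].copy()       # vertical clock: every row shifts down one
        sensor[0] = 0.0                       # top row is now empty
        for charge in readout_register:       # horizontal clocks: one packet at a time
            output_voltages.append(charge * GAIN_UV_PER_E)

    print(len(output_voltages), "pixels read through a single amplifier")  # 24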

Because the conversion of this voltage to a digital value happens outside the sensor, a high quality 16 bit analogue to digital converter (ADC) can be used. Every pixel is also converted through the same amplifier and ADC, which gives excellent pixel to pixel reproducibility and results in sensors with very good linearity.
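To put those bit depths into numbers, a 16 bit converter resolves 2^16 = 65,536 levels, against 4,096 for the 12 bit converters we’ll meet in the CMOS section. The quick Python calculation below uses a made-up 20,000 electron full well purely to show how bit depth sets the size of each digital step.

    # Illustrative only: the 20,000 e- full well is a made-up figure.
    FULL_WELL_E = 20_000

    for bits in (16, 12):
        levels = 2 ** bits                   # discrete output values the ADC can report
        e_per_adu = FULL_WELL_E / levels     # electrons represented by one digital step
        print(f"{bits} bit: {levels} levels, ~{e_per_adu:.2f} e-/ADU")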

CCD sensors are often referred to as ‘dumb’ sensors and require a lot of external circuitry. However, this also means there is very little other circuitry on the CCD itself adding unwanted signal to an image. It also gives us, as camera designers, a great deal of control when optimising our cameras for low light imaging.

This kind of readout structure is also what allows us to dynamically modify pixel size using binning, adding flexibility to our astrophotography cameras.

However, all this clocking makes the sensors slow to read out. Traditionally, this isn’t a problem in astronomy, where we prioritise quality over speed.

CCD sensors are also relatively expensive, particularly when you begin looking at large sensors, like the one in the Atik 16200.

CMOS Sensors

CMOS (or Complementary Metal-Oxide-Semiconductor) sensors are often referred to as ‘systems on a chip’. There are a number of different types of CMOS sensor, but for now we’ll concentrate on the Panasonic sensor we’re using in our new CMOS camera.

Each pixel is a discrete element hardwired to its own readout circuit. This means there is no true on-chip binning, although it can still be emulated in software to a certain extent. Typically, each column has its own ADC. These tend to be lower quality 12 bit ADCs compared to the high quality external 16 bit ADCs we can use with CCDs.
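As a rough illustration of what that software emulation looks like, here’s a minimal NumPy sketch that sums 2x2 blocks of pixels after readout; the data is made up. Because the combination happens after each pixel has already picked up its own read noise, it isn’t fully equivalent to true on-chip binning.

    import numpy as np

    def software_bin_2x2(image: np.ndarray) -> np.ndarray:
        """Sum 2x2 blocks of pixels after readout (software binning)."""
        h, w = image.shape
        h -= h % 2                        # trim odd edges so the image divides into 2x2 blocks
        w -= w % 2
        return image[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    frame = np.random.default_rng(1).poisson(lam=100, size=(6, 8)).astype(float)
    binned = software_bin_2x2(frame)      # shape (3, 4); each output pixel sums a 2x2 block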

However, because the pixels can be read out in parallel through a large number of ADCs, the read speed of the sensor is greatly increased. Having the ADCs on the same silicon die as the image sensor can also give very low read noise. It is important to note, though, that read noise on a CMOS sensor is linked to full well depth, and using the sensor at its lowest read noise settings usually comes at the expense of well depth. At the full well depths we tend to use for deep sky imaging, there’s actually little difference between CMOS and Sony CCD sensors like those in our 4-Series.

The classic analogy for full well depth is to think of each pixel like a bucket. The deeper your bucket, the more photons you can collect in it before it overflows.
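Putting rough numbers on that trade-off: dynamic range is often quoted as full well depth divided by read noise. The figures below are invented, ballpark values chosen only to illustrate the gain-setting trade-off, not measurements of any particular sensor.

    import math

    def dynamic_range_stops(full_well_e: float, read_noise_e: float) -> float:
        """Dynamic range expressed in stops (doublings of signal)."""
        return math.log2(full_well_e / read_noise_e)

    # Invented figures for illustration only
    print(dynamic_range_stops(full_well_e=4_000, read_noise_e=1.5))    # high gain: ~11.4 stops
    print(dynamic_range_stops(full_well_e=20_000, read_noise_e=3.5))   # low gain:  ~12.5 stops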

Amp Glow

Having more circuits on the same die also causes the ‘amp’ glow that CMOS sensors are known for in astronomy. There are ways of controlling and minimising the effects of this, both on the sensor and through image calibration. We’ll take a closer look at amp glow at a future date. Another drawback is that using different circuits for each pixel can lead to slight variations in linearity and sensitivity between pixels.

The pixel size in CMOS sensors tends to be relatively small due to their mass market applications. For example, the pixels in the Panasonic sensor we’re using are 3.8μm. This makes the camera a great match for shorter focal length telescopes, which also give wide fields of view. This isn’t exclusive to CMOS, with cameras like our Atik 490EX boasting similar pixel sizes. However, it does make the camera less flexible across a variety of setups, particularly as the pixels can’t be binned on-chip.
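For a feel of what 3.8μm pixels mean in practice, the usual plate scale formula is arcsec per pixel = 206.265 × pixel size (μm) ÷ focal length (mm). The focal length and pixel counts in the sketch below are example values for illustration, not a specification of the new camera.

    # Example values only: focal length and pixel counts are illustrative.
    PIXEL_UM = 3.8
    FOCAL_LENGTH_MM = 400                    # a short focal length refractor, for example
    WIDTH_PX, HEIGHT_PX = 4656, 3520         # example sensor resolution

    arcsec_per_px = 206.265 * PIXEL_UM / FOCAL_LENGTH_MM
    fov_w_deg = arcsec_per_px * WIDTH_PX / 3600
    fov_h_deg = arcsec_per_px * HEIGHT_PX / 3600
    print(f"{arcsec_per_px:.2f} arcsec/pixel")               # ~1.96 arcsec/pixel
    print(f"{fov_w_deg:.2f} x {fov_h_deg:.2f} degrees")      # ~2.53 x 1.92 degrees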

CMOS sensors are the preferred technology in a wide range of consumer products, such as DSLR cameras and mobile phones. This means we benefit from the economies of scale that consumer markets create, and consequently, CMOS sensors tend to be much less expensive than their CCD counterparts.

The Pros and Cons – A Summary

CCD

Pros
  • Very little signal added by other circuits on the CCD
  • Binning to modify pixel size
  • Pixel to pixel reproducibility
  • High quality ADC

Cons
  • Expensive
  • Slow to read out

CMOS

Pros
  • Cost
  • Read speed
  • Low read noise at high gain settings

Cons
  • Amp glow
  • 12 bit ADC can limit image quality
  • Variations in linearity and sensitivity between pixels
  • No on-chip binning

This is just an introductory overview of the key differences between the two sensor technologies, and the pros and cons these give them. Overall, CMOS sensors offer a great alternative to CCDs when used with short focal length telescopes, but the real benefit is for astrophotographers looking for large, multi-megapixel sensors at reasonable prices. As we move forward with our development, we’ll be putting out more information about how using a CMOS sensor affects astrophotography in more practical terms.
