Q choi

Image Sensor CCD vs CMOS


Hi, all

 

I read an article published by Pelco in "Security Sales & Integration" magazine. It says the following:

 

"during the past five years much has changed with both technologies, and the current situation seems to favor the CMOS sensor"

 

"The main reason for the success of the CCD over the CMOS in the early stage of surveillance was the lower noise, video smearing, and blooming compared with CMOS devices. However, with the increased performance of CMOS devices, and the fact that CMOS output is truly digital and requires less power to operate, CMOS chips have become the choice for IP and megapixel cameras."

 

Noise: moderate (CCD & CMOS)

Processing: sensor (CCD & CMOS)

Dynamic Range: wide (CCD & CMOS)

Sensor Output: analog (CCD) / digital (CMOS)

 

"A key difference between CCD and CMOS chips is that the latter outputs its information in a digital format. While CCD has dominated the video surveillance sector, IP and megapixel cameras are helping CMOS encroach on the marketplace."

 

Main advantages of CMOS

 

• Much lower power
• System-on-a-chip integration allows smaller cameras
• Lower cost of the sensor chip and fewer components in the camera
• Easy digital interface for faster camera design
• Fewer image artifacts (no blooming or smear) with the same sensitivity
• Higher dynamic range for security applications
• Direct addressing of pixels allows electronic PTZ

 

So it seems that CMOS is now better than CCD for IP megapixel cameras? Does anybody have any ideas or information about these two? I found a couple of documents on the web, but they made me more confused. I thought CCD was always better than CMOS.

 

Any input would be appreciated.


Does a CCD result in superior image quality? Just because a digital camera has a CCD doesn't mean the camera itself will produce a superb image.

 

The image quality produced by a digital camera is the result of the entire camera system including the optics, analog to digital conversion, image processing, image sensor, and all the other camera components and processes. Further, the way these components work together is an important factor in determining final image quality.

 

 

 

CCD (charge coupled device) and CMOS (complementary metal oxide semiconductor) image sensors are two different technologies for capturing images digitally. Each has unique strengths and weaknesses giving advantages in different applications. Neither is categorically superior to the other, although vendors selling only one technology have usually claimed otherwise. In the last five years much has changed with both technologies, and many projections regarding the demise or ascendancy of either have been proved false. The current situation and outlook for both technologies are vibrant, but a new framework exists for considering the relative strengths and opportunities of CCD and CMOS imagers.

 

Both types of imagers convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel's charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel can be devoted to light capture, and the output's uniformity (a key factor in image quality) is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These other functions increase the design complexity and reduce the area available for light capture. With each pixel doing its own conversion, uniformity is lower. But the chip can be built to require less off-chip circuitry for basic operation.
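The uniformity difference described above can be sketched in a few lines of code. This is a toy model, not a real sensor simulation: the gain values, noise spread, and array size are invented for illustration.

```python
import random

random.seed(0)
scene = [[0.5] * 4 for _ in range(4)]      # identical light on every pixel

# CCD-style: every charge packet passes through one shared output amplifier,
# so all pixels see the same gain and a flat scene reads out perfectly uniform.
ccd_counts = [round(p * 1.02 * 255) for row in scene for p in row]

# CMOS-style: each pixel has its own converter, each with a slightly different
# gain, so the same flat scene shows pixel-to-pixel variation (fixed pattern
# noise) -- but the values leave the chip already digital.
cmos_counts = [round(p * random.gauss(1.0, 0.02) * 255)
               for row in scene for p in row]

print(len(set(ccd_counts)) == 1)   # True: one output node, uniform response
print(len(set(cmos_counts)) > 1)   # True: per-pixel gains differ
```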

 

 

 

CCDs and CMOS imagers were both invented in the late 1960s and 1970s.

 

CCDs became dominant, primarily because they gave far superior images with the fabrication technology available. CMOS image sensors required more uniformity and smaller features than silicon wafer foundries could deliver at the time. Not until the 1990s did lithography develop to the point that designers could begin making a case for CMOS imagers again. Renewed interest in CMOS was based on expectations of lowered power consumption, camera-on-a-chip integration, and lowered fabrication costs from the reuse of mainstream logic and memory device fabrication. While all of these benefits are possible in theory, achieving them in practice while simultaneously delivering high image quality has taken far more time, money, and process adaptation than original projections suggested.

 

Both CCDs and CMOS imagers can offer excellent imaging performance when designed properly. CCDs have traditionally provided the performance benchmarks in the photographic, scientific, and industrial applications that demand the highest image quality (as measured in quantum efficiency and noise) at the expense of system size. CMOS imagers offer more integration (more functions on the chip), lower power dissipation (at the chip level), and the possibility of smaller system size, but they have often required tradeoffs between image quality and device cost.

 

Today there is no clear line dividing the types of applications each can serve. CMOS designers have devoted intense effort to achieving high image quality, while CCD designers have lowered their power requirements and pixel sizes. As a result, you can find CCDs in low-cost low-power cellphone cameras and CMOS sensors in high-performance professional and industrial cameras, directly contradicting the early stereotypes. It is worth noting that the producers succeeding with "crossovers" have almost always been established players with years of deep experience in both technologies.

 

Costs are similar at the chip level. Early CMOS proponents claimed CMOS imagers would be much cheaper because they could be produced on the same high-volume wafer processing lines as mainstream logic or memory chips. This has not been the case. The accommodations required for good imaging performance have required CMOS designers to iteratively develop specialized, optimized, lower-volume mixed-signal fabrication processes--very much like those used for CCDs. Proving out these processes at successively smaller lithography nodes (0.35um, 0.25um, 0.18um...) has been slow and expensive; those with a captive foundry have an advantage because they can better maintain the attention of the process engineers.

 

CMOS cameras may require fewer components and less power, but they still generally require companion chips to optimize image quality, increasing cost and reducing the advantage they gain from lower power consumption. CCD devices are less complex than CMOS, so they cost less to design. CCD fabrication processes also tend to be more mature and optimized; in general, it will cost less (in both design and fabrication) to yield a CCD than a CMOS imager for a specific high-performance application. However, wafer size can be a dominating influence on device cost; the larger the wafer, the more devices it can yield, and the lower the cost per device. 200mm is fairly common for third-party CMOS foundries while third-party CCD foundries tend to offer 150mm. Captive foundries use 150mm, 200mm, and 300mm production for both CCD and CMOS.

 

The larger issue around pricing is sustainability. Since many CMOS start-ups pursued high-volume, commodity applications from a small base of business, they priced below costs to win business. For some, the risk paid off and their volumes provided enough margin for viability. But others had to raise their prices, while still others went out of business entirely. High-risk startups can be interesting to venture capitalists, but imager customers require long-term stability and support.

 

While cost advantages have been difficult to realize and on-chip integration has been slow to arrive, speed is one area where CMOS imagers can demonstrate considerable strength because of the relative ease of parallel output structures. This gives them great potential in industrial applications.

 

CCDs and CMOS will remain complementary. The choice continues to depend on the application and the vendor more than the technology.

 

 

 

CCDs are so named for the way they transfer charges between pixel wells, and ultimately out of the sensor. The charges are shifted from one horizontal row of pixels to the next horizontal row from top to bottom of the array. This is a parallel (or vertical) shift register architecture, with multiple vertical shift registers used to transport charges vertically down the rows. The charges are "coupled" to each other (thus the term charge-coupled device) so that as one row of charge is moved vertically, the next row of charge (which is coupled to it) shifts into the pixels thus vacated.

 

With the charges shifted down the parallel array row by row, you might wonder what happens to the charges in the last row of the sensor device. Using a serial shift register architecture, the last row is actually a horizontal shift register. Charges in that row are serially transferred out of the sensor using the charge-coupling technique, making room for the next row to be shifted out, and the next, and so on. This serial transfer of charge out of the CCD is often described as a "bucket brigade," referring to its similarity to the old-fashioned fire department's bucket brigade.
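The bucket-brigade readout can be sketched as a toy routine. This is a plain-Python illustration, with list pops standing in for the coupled shifts; a real CCD moves charge, not numbers.

```python
def ccd_readout(pixels):
    """Read a 2-D charge array the way a CCD does: the bottom row drops
    into the horizontal register and is clocked out serially, then every
    remaining row shifts down one and the process repeats."""
    array = [row[:] for row in pixels]       # don't disturb the caller's data
    out = []
    while array:
        horizontal_register = array.pop()    # last row becomes the serial register
        while horizontal_register:
            out.append(horizontal_register.pop())   # one packet leaves the chip
    return out

# A 2x3 "sensor": charges leave one at a time, bottom row first.
print(ccd_readout([[0, 1, 2],
                   [3, 4, 5]]))              # -> [5, 4, 3, 2, 1, 0]
```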

 

Before being transferred out of the CCD serially, each pixel's charge is amplified resulting in an analog output signal of varying voltage. This signal is sent to a separate off-chip analog to digital converter (ADC) and the resultant digital data is converted into the bytes that comprise the raw representation of the image as captured by the sensor, prior to any post-processing. Unlike computer RAM that represents a 1 or 0 by either storing a charge or not, the charge on a CCD remains in analog form until the ADC stage late in the process.

 

Because the CCD transfers a pure electric charge over the entire sensor via the charge-coupling process with little resistance or interference from other electronic components, it tends to produce a cleaner, less noisy signal than CMOS sensors (which have much more circuitry than CCDs). The transfer, however, is never 100 percent efficient; some electrons will inevitably be lost somewhere between the pixel well and the sensor readout. A sensor's charge transfer efficiency (CTE) is a defining specification provided by manufacturers.
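Imperfect transfer compounds over the many shifts a packet makes, so the fraction of charge that survives is CTE raised to the number of transfers. A quick back-of-the-envelope check (the CTE figure here is illustrative, not from any datasheet):

```python
# Fraction of a pixel's charge that survives N coupled transfers.
def surviving_fraction(cte, transfers):
    return cte ** transfers

# A corner pixel of a 1024x1024 CCD makes roughly 2048 transfers
# (1024 vertical shifts plus 1024 serial shifts along the last row).
print(round(surviving_fraction(0.99999, 2048), 4))   # -> 0.9797
```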

 

 

The Gatekeepers

 

Electrodes act as gatekeepers to the entire process. Electrodes are conductors that permit current to flow in or out of an electronic device and can act as electronic gates. They are also called by other names in CCDs, according to their function in the sensor design (i.e. transfer gates, exposure control gates, and overflow gates). In the case of transfer gates, the electrodes receive clock pulses of varying voltage that enable the transfer of charge from one pixel well to the next. This includes transfer of pixel charges from row to row down the array, and the final serial readout of the last row. The electronic shutter on a sensor involves using voltage controls and electrodes to limit the integration time (exactly how long a pixel will accept photons and generate electrons), performing an exposure control function. And overflow gates are used to keep electrons from spilling and contaminating adjacent pixel charges.

 

The most common electrodes are made of polysilicon, though Kodak has introduced another type of electrode made from indium tin oxide (ITO). This can improve the process of capturing electrons in the pixel wells, because ITO is optically more transparent than polysilicon. An unfortunate side effect of polysilicon electrodes is that they can reflect or absorb incoming photons at certain wavelengths.

 

 

 

CMOS (complementary metal oxide semiconductor) electrodes function differently than those on CCDs because of the inherent differences in the way the two kinds of sensors transfer the charge. In other words, CMOS doesn't use the CCD's charge-coupled transfer process. Therefore, CMOS doesn't use electrodes the way CCD does for that process. However, electrodes are used on CMOS to reduce noise and for transfer gates to the offload transistors.

 

CMOS APS

 

CMOS sensors were developed in the early 1980s. Passive pixel sensor (PPS) image sensors were the first products in this family to come to market. The large feature sizes available in existing CMOS technology allowed only a single transistor and three interconnecting lines for each pixel. The speed and signal-to-noise ratio of PPS was significantly lower than that of CCD sensors.

 

 

In the 1990s, APS technology added an amplifier to each pixel. This increased sensor speed and improved the signal-to-noise-ratio, providing a big advantage over PPS sensors.

 

When deep sub-micron CMOS technologies and micro-lenses appeared, APS became the alternative sensor technology. Its low power consumption and near-standard manufacturing process made it a competitor to CCD sensors for certain applications.

However, APS technology has inherent problems. Process variations create nonuniformities in the column-level ADCs and in-pixel amplifiers, producing large fixed pattern noise (FPN); at high resolutions this typically limits sensitivity below what many applications, including security and the film industry, require. Human eyes are particularly sensitive to image edges, and the column-level ADCs amplify this noise.

 

 

 

 

As mentioned, a key function of the electrodes is to act as transfer gates to control the charge transfer in CCDs. To delve a bit deeper in understanding how this process works, let's look at a "four-phase CCD," which has four electrodes per pixel. (Most CCDs are multi-phase devices and the number of the phases/electrodes varies by sensor model.)

 

The first phase of each pixel has the same voltage applied, as do the second, third, and fourth phases. If an electrode receives a high voltage, a potential well is formed beneath the electrode in the silicon substrate, and if it receives a low voltage, a potential barrier is formed, which helps keep the captured electrons (the pixel data) in the potential well. Then by varying the voltages applied to adjacent electrodes in a properly timed sequence, the potential wells can actually be shuttled across the pixel and ultimately into the next pixel, enabling the bucket brigade effect as described above.

 

Simple but Complex

The four-phase operation is a simple process, though a bit complex to describe in words. We'll try here.

 

The process starts by first turning off phase one and phase two electrode (gate) voltages in the first clock period, while turning on phase three and phase four electrode voltages in that period. During the second clock period, phase one is turned on and phase three is turned off. Then phase two is turned on and phase four is turned off in the third clock period. Finally phase three is turned on and phase one of the next pixel is turned off during the fourth clock period. This process is repeated to move the charge along the sensor.
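The clock sequence above can be simulated with a toy model. Gates are numbered along the register (gate g belongs to phase g % 4 + 1), a high gate forms a well, and the packet follows whichever adjacent gates are high. The eight-gate register and the contiguity rule are simplifications for illustration, not a device physics model.

```python
N_GATES = 8                                  # two pixels, four gates each

def gate_voltages(period):
    """1 (high, well) or 0 (low, barrier) for each gate in a clock period."""
    # Period 0: phases 3 and 4 high; each later period turns on the next
    # phase and drops the trailing one, as in the sequence described above.
    high = {(period + 2) % 4, (period + 3) % 4}      # 0-based phase numbers
    return [1 if g % 4 in high else 0 for g in range(N_GATES)]

def step(packet, voltages):
    """Charge moves to the high gates among or adjacent to its current ones."""
    return {n for g in packet for n in (g - 1, g, g + 1)
            if 0 <= n < N_GATES and voltages[n]}

packet = {2, 3}                              # starts under phases 3,4 of pixel 1
for period in range(1, 5):
    packet = step(packet, gate_voltages(period))
print(sorted(packet))                        # -> [6, 7]: one pixel per 4 periods
```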

 

Four-phase CCD technology is a popular sensor architecture because it can be created using two layers of material. In addition, according to Philips, which uses a four-phase design, it allows at least 50 percent of the pixel well to be used for storage and offers the highest charge capacity among competing designs. A three-phase CCD provides only 33 percent of the pixel well for storage.

 

 

 

DPS Image Capture and Processing

 

DPS technology converts the quantity of light striking each picture element (pixel) to a digital value at the earliest possible point: at the pixel itself. An analog-to-digital converter (ADC) is designed into each pixel, and is operated simultaneously with all other ADCs in every pixel of the sensor. This pixel-level ADC architecture permits the use of many highly parallel low-speed circuits, operating close to where the photodiode signals are generated. This is key to optimizing the signal-to-noise ratio (SNR) for each pixel.

The DPS system uses the individual ADCs in each pixel to perform nondestructive correlated double sampling (CDS) at each pixel. DPS uses this capability to sample the growing light intensity at each pixel many times during each image capture period. This allows the exposure level of each pixel to be determined by the rate of change of charge collected rather than only its absolute magnitude. Each pixel is also provided with an adjustable offset cancellation gain amplifier to assure uniform response throughout the sensor array.

These innovations greatly reduce noticeable fixed pattern noise problems commonly associated with the column-level ADC used on APS sensors.

 

Because DPS sensors are digital, pixel readout is much faster and more accurate. Each sample of the digital image is captured in on-chip RAM, and the high bandwidth provided by tightly coupled local memory is what enables the superior dynamic range. This approach is not practical for CCD or APS sensors because of their reliance on analog readout circuitry.

 

 

Dynamic range is the ratio of the brightest image that can be captured by the imaging system to the darkest image that can be captured. Light intensity greater than the brightest possible image will cause the sensor to saturate, while light intensity less than the darkest possible image will not register on the sensor. Both of these conditions distort the image, hiding potentially vital information that lies outside the dynamic range of the sensor.
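That ratio is usually quoted in decibels. A quick example (the full-well and noise-floor figures are invented for illustration, not from any datasheet):

```python
import math

# Dynamic range in dB: 20*log10(brightest capturable / darkest capturable).
def dynamic_range_db(full_well_electrons, noise_floor_electrons):
    return 20 * math.log10(full_well_electrons / noise_floor_electrons)

# A sensor with a 20,000 e- full well and a 10 e- noise floor spans 2000:1.
print(round(dynamic_range_db(20000, 10), 1))   # -> 66.0
```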

When an exposure begins, each pixel is charged at a rate that is proportional to the intensity of the light that strikes it. A stronger light source will charge a pixel more quickly than a weaker light source. Existing technology typically uses a single exposure time for all pixels. At the end of the exposure, the camera will sense the total charge accumulated in each pixel. But that means some pixels (the brighter ones) may be overexposed while others (the darker ones) may be underexposed. DPS overcomes this limitation as follows: with DPS, the light striking each pixel is sampled multiple times during the exposure period. DPS analyzes how quickly each pixel is being charged by the light striking it. This way, DPS measures light intensity by a combination of the rate at which the charge grows as well as the total charge accumulated during an entire exposure.

 

Specifically, the DPS system records the length of time required to nearly saturate each pixel. Pixels exposed to bright illumination will tend to saturate more quickly than other pixels. DPS determines for each pixel whether it will saturate before the next sample. If a pixel would saturate, then its elapsed exposure time is stored in memory, together with its current intensity of charge.
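A minimal sketch of that sampling scheme follows. The full-well capacity, sample count, and light "rates" are invented placeholders, not Pixim's actual design.

```python
FULL_WELL = 1000.0        # saturation charge, arbitrary units
SAMPLES = 8               # samples per exposure period

def dps_pixel(rate, exposure=1.0):
    """Reconstruct a pixel's light intensity as charge / elapsed time.
    If the next sample would saturate the well, freeze time and charge."""
    dt = exposure / SAMPLES
    charge, elapsed = 0.0, 0.0
    for _ in range(SAMPLES):
        if charge + rate * dt >= FULL_WELL:   # would saturate before next sample
            break                             # store (elapsed, charge) and stop
        charge += rate * dt
        elapsed += dt
    if elapsed == 0.0:                        # saturates inside the first sample
        return FULL_WELL / dt                 # best available estimate
    return charge / elapsed

# A bright pixel (5000) and a dim one (200) both report their true rates,
# though a single full-length exposure would clip the bright one at 1000.
print(dps_pixel(5000.0), dps_pixel(200.0))   # -> 5000.0 200.0
```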

 

The advantage of this approach can be appreciated when one realizes that the entire range of each individual pixel, as well as the rate of change of the pixel charge, is used to form the resulting image, significantly increasing the dynamic range that is captured. Other technologies only measure the pixel value, not its rate of change.

DPS also provides improved color performance not available with other sensor technologies: the data recorded by each pixel is of very high quality, both in terms of accuracy and precision. High data quality allows the DPS image processing algorithms to render excellent fidelity for all colors and intensities.

DPS provides a fast global electronic shutter to capture bright lights and produce images that do not exhibit rolling shutter artifacts common in APS sensors.

 

 

 

Exposure Control System

Exposure Control is divided into three blocks. The Scene Analysis block is similar to the photographer’s light meter, returning information on the amount and quality of light in the scene. The Settings Table interprets the scene information to adjust it for optimal viewing. The Transition Control block keeps the video image smooth. These blocks can be tested with known scenes for accuracy.

 

Scene analysis results in three estimates:

• Light Level Estimate

• White Balance Estimate

• Scene Range Estimate

Calculation of these estimates is discussed at greater length below. The Settings Table block is a collection of functions and values that calculate the best image system settings for each result from the scene analysis block. This is akin to the photographer reading the exposure time and f-number from the light meter or a table.

These settings can be verified independently by manually selecting the scene values and observing or measuring the quality of the video output. Because it is a video system, transitions between settings must be smooth and not oscillate. This is achieved by the Transition Control block, which acts as the steady hand of a camera operator, smoothly changing settings while providing a pleasing picture. The overall operation of this system and the scene analysis are reviewed in more detail in the following sections.
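The three blocks can be sketched as a tiny control loop. Everything below the block names is an invented placeholder; the real estimate and settings math is not described here in enough detail to reproduce.

```python
def scene_analysis(frame):
    """Light-meter stage: reduce a frame to the three scene estimates."""
    return {
        "light_level": sum(frame) / len(frame),
        "white_balance": max(frame) / (min(frame) + 1e-9),
        "scene_range": max(frame) - min(frame),
    }

def settings_table(estimates):
    """Map scene estimates to target imager settings (placeholder rule)."""
    return {"exposure": 0.5 / (estimates["light_level"] + 1e-9)}

def transition_control(current, target, smoothing=0.25):
    """Ease live settings toward the target so the video doesn't oscillate."""
    return {k: current[k] + smoothing * (target[k] - current[k]) for k in current}

settings = {"exposure": 1.0}
frame = [0.2, 0.4, 0.6, 0.8]                 # fake per-region luminance samples
target = settings_table(scene_analysis(frame))
settings = transition_control(settings, target)
print(round(settings["exposure"], 3))        # stays near 1.0 for this scene
```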

 

 

http://scorpiontheater.com/camlab.aspx

 

http://www.pixim.com/assets/files/product_and_tech/Pixim_Technology_White_Paper.pdf

 

http://www.sony.net/Products/SC-HP/cx_news/vol52/pdf/featuring52.pdf

 

http://www.dalsa.com/corp/markets/CCD_vs_CMOS.aspx

 

CCD

http://en.wikipedia.org/wiki/Charge-coupled_device

 

CMOS ACTIVE PIXEL SENSOR

http://en.wikipedia.org/wiki/Active_pixel_sensor

 

http://www.rfconcepts.co.uk/cxd2463r.pdf

 

http://en.wikipedia.org/wiki/Hole_Accumulation_Diode



With a CCD sensor, every pixel's charge is transferred through an output node, where it is converted to a voltage. The signal is then buffered and sent off-chip as an analog signal. Because the pixels are almost entirely devoted to light capture, image quality is usually quite high.

 

With a CMOS sensor, every individual pixel performs its own charge-to-voltage conversion, and the sensor also performs amplification and noise-correction. The sensor also includes digitization circuits which allow the chip to output information in a digital format. Because of the complexity of this design, the area devoted to light capture is reduced. And because each pixel must perform its own conversion, uniformity (thus image quality) is lower.

 

The production costs for both types of sensors are similar, but CMOS sensors sometimes require additional support chips to optimize image quality. CMOS sensors are great for devices that require speed and low power consumption, while CCD sensors excel in image quality and low light performance.

 

While both CMOS and CCD sensors have matured a great deal in recent years, when it comes to security cameras, I prefer CCD for outdoor applications and CMOS for indoor IP camera functions. Let me explain why.

 

In the still camera world, CCD and CMOS are almost on par in terms of performance. In fact, CMOS sensors are attractive in this space because of their much lower power requirements - an important feature for devices which run on batteries. In the security camera world, I find that CCD sensors outperform their CMOS counterparts in several key metrics.

 

The first and most important is light sensitivity. If you need a security camera with superior night vision capabilities, CCD is the way to go. The CMOS cameras I have tested, with or without IR illumination, were almost completely blind in the dark. Cameras with CCD sensors are very capable in low light applications, and are terrific when coupled with infrared illumination. If you need to record in low light situations, CCD is the way to go – do not even consider CMOS.

 

The second factor to consider is image quality. In the security camera world, while both technologies are getting closer, CCD still has the edge when it comes to image quality. This is because CCD sensors exhibit less image noise than their CMOS counterparts.

 

So why would one consider purchasing a security camera with a CMOS sensor? Mainly because these sensors are more prevalent in IP Security Cameras - that is, cameras that are equipped with built in web servers and communicate using the IP protocol over CAT5 cabling. These cameras tend to be very affordable and are great for indoor applications that require web based video streaming.

 

Andy


I used to hate CMOS, but my D-SLR has one, and the Mobotix cameras changed my mind.

 

The Mobotix can stare at the sun all day and still grab a plate with no damage; they can do some amazing stuff.

 

They can de-fog a scene and see through to the other side.

 

CMOS is catching up !

