Changing Technology to Suit HDTV Broadcast

Lester J. Kozlowski
Rockwell Scientific
1049 Camino Dos Rios
Thousand Oaks, CA 91360


Several forces in the United States are pushing the transition from NTSC, which has been in place for over 50 years, to high-definition television (HDTV). Today’s most vocal proponent is the FCC, which is steadfast in its resolve that the industry must move forward. Increasingly passionate advocates are independent content producers, who are confirming the quality of the medium, reducing their production costs while enhancing creativity, and smartly embracing the knowledge that HD production will maximize the usable life of their intellectual property in the face of inevitable technological change. Reruns of I Love Lucy, for example, are still being broadcast because of their enduring visual and comedic quality. The consumer, who has embraced the sparkling image quality of DVDs and digital still cameras with unprecedented enthusiasm, is on the verge of building an appetite if the price is right. Less convinced is the broadcast community, owing to the lack of HDTV content, the perishable nature of existing productions, the large capital investment required, and several looming technological problems, including the need for widespread availability of affordable HD cameras of the necessary quality.


Nearly all broadcast cameras today use CCD sensors to produce video for broadcast and for the emerging electronic cinema. Unfortunately, as broadcast technology expands its resolution from NTSC and PAL to HDTV at the necessary frame rates, the S/N ratio of CCD-based cameras degrades by ~3 dB per octave of video frequency. For example, broadcast-camera S/N ratio under standard illumination of 2000 lux at f/8 is at least 62 dB for NTSC (700x520), but degrades to 54 dB for HDTV (1920x1080). Prototype UHDTV (~4000x2000) cameras, which are otherwise attractive for electronic cinema, have S/N ratio <45 dB, on a par with the best VHS video.
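The 62 dB to 54 dB degradation quoted above follows directly from the 3 dB per octave rule once the pixel read-out rates are compared. A minimal sketch, assuming progressive read-out at 30 frames/s for both formats (an illustrative assumption, not a statement about any particular camera):

```python
import math

def pixel_rate_hz(width, height, fps=30):
    """Approximate pixel read-out rate for a progressive format."""
    return width * height * fps

def snr_penalty_db(rate_lo, rate_hi, db_per_octave=3.0):
    """White-noise-limited S/N penalty: 3 dB per octave of video frequency."""
    octaves = math.log2(rate_hi / rate_lo)
    return db_per_octave * octaves

ntsc = pixel_rate_hz(700, 520)     # ~10.9 MHz
hdtv = pixel_rate_hz(1920, 1080)   # ~62.2 MHz

penalty = snr_penalty_db(ntsc, hdtv)   # ~7.5 dB
print(f"Predicted HDTV S/N: {62 - penalty:.0f} dB")  # ~54 dB, matching the text
```

The ~2.5 octaves between the NTSC and HDTV pixel rates cost roughly 7.5 dB, taking a 62 dB NTSC camera down to about 54 dB at HDTV resolution.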

Fortunately, two factors driven by fierce competition are producing a superior alternative to CCDs, which have enjoyed nearly 20 years as the preeminent sensor technology for video. First, the CMOS integrated-circuit technology used for memory and microprocessors continues to improve independent of the HDTV impasse. Second, a basic architectural difference relative to CCD sensors is enabling CMOS imaging systems-on-chip (i-SoC).

CMOS SoC technology is best represented by microprocessors that make today’s consumer PCs more powerful than the supercomputers used to put a man on the moon and help end the Cold War. While the first-generation SoC imaging sensors were inferior to CCDs, the emerging technology is now technically superior in many respects because deep-submicron technology is widely available. Deep submicron refers to lithography in which the smallest electronic circuit feature is less than 0.25 micron. The nascent submicron CMOS devices for HDTV and UHDTV have hence become digital sensors, offering lower noise and better resolution while using ~10X lower power to directly generate 12-bit video. Consequently, it is now possible for Electronic News Gathering (ENG) cameras to finally migrate to the compact form factor of consumer camcorders by embracing CMOS.

While NTSC television has held on for over 50 years and CCDs have ruled as the best imaging sensors for about 20 years, the time is thus ripe for HDTV to move forward by exploiting the emerging CMOS technology.


CCD technology is mature with respect to yield and performance. Both benchmarks are at theoretical limits or at practical levels that have not changed significantly for several years. While CCD noise can be extremely low for applications such as astronomy, where the image is read at a leisurely rate, generating high-speed video results in higher noise and a concomitant degradation in S/N ratio.

Figure 1 assembles CCD noise in volts and electrons, along with camera S/N ratio data, from CCD and camera manufacturers’ datasheets. The cameras and sensors represented span applications from astronomy to video in its many forms. Also shown for convenient reference is the approximate camera S/N ratio, including the well-known data points for standard-definition video and HDTV.

Figure 1. CCD noise in volts and electrons, and SNR versus video frequency.

The figure clearly shows the practical lower limit for CCD noise due to the composite white noise of the sensor and the camera electronics. The trend also graphically reflects the typical 3 dB increase in CCD noise per octave of video frequency, as has been reported.[1] Of utmost importance for camera manufacturers and broadcast studio engineers is, consequently, the associated upper limit on the S/N ratio. This upper limit, depicted for all practical purposes by the trend line labeled “empirical white noise limit”, is increasingly unattractive as sensors move well into the megapixel regime for video applications.

Figure 2. Architecture of representative CMOS imaging System-on-Chip (SoC).

At lower frequencies, CCD noise and SNR vary over several orders of magnitude due to the variation in flicker noise among different sensors, camera designs and semiconductor fabrication processes.
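The shape of the trends in Figure 1 can be captured with a simple, purely illustrative two-component model (the parameter values below are arbitrary placeholders, not taken from the figure): a white term whose integrated noise grows as the square root of video frequency, i.e. 3 dB per octave, plus a flicker component that dominates at low read-out rates:

```python
import math

def ccd_read_noise_e(video_freq_hz, n_white_ref=10.0, f_ref=1e6, n_flicker=5.0):
    """Illustrative CCD read-noise model in rms electrons.

    The white component integrates to n_white_ref * sqrt(f / f_ref),
    rising 3 dB per octave of video frequency; the flicker component
    is crudely modeled as a constant floor dominating at low rates.
    """
    white = n_white_ref * math.sqrt(video_freq_hz / f_ref)
    return math.hypot(white, n_flicker)

# In the white-limited region, doubling the video frequency adds ~3 dB:
gain_db = 20 * math.log10(ccd_read_noise_e(8e7) / ccd_read_noise_e(4e7))
```

A real sensor's flicker behavior varies far more than this constant floor suggests, which is exactly the order-of-magnitude spread seen at low frequencies in the figure.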


CMOS-based imagers for both infrared and visible applications now use pixel-based amplifiers in the imaging system-on-chip architecture shown in Figure 2. The pixel-based amplification can appropriately set the signal bandwidth so that there is no longer a need for external 30 to 33 MHz filtering in HD cameras.

This factor reduces the relevant noise bandwidth from tens of MHz for CCDs to tens of kHz for CMOS imaging SoCs. This significant advantage holds even for HDTV and UHDTV cameras, so the dominant noise can instead be set by the reset noise of the specific pixel design. Conversely, the read noise of a large-format CCD is often dominated by the output amplifier’s thermal noise. This is especially true because the CCD video bandwidth ideally should be doubled to perform correlated double sampling without compromising CCD MTF; in practice, however, the analog video bandwidth is usually constrained to minimize noise at the expense of resolution.
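The reset (kTC) noise mentioned above is set by the pixel's sense-node capacitance and can be estimated from first principles. A short sketch, using an assumed ~5 fF sense node purely for illustration (not a published ProCam-HD parameter):

```python
import math

K_BOLTZMANN = 1.380649e-23    # J/K
Q_ELECTRON = 1.602176634e-19  # C

def ktc_noise_electrons(cap_farads, temp_kelvin=300.0):
    """RMS reset (kTC) noise, in electrons, of a sense node of capacitance C."""
    return math.sqrt(K_BOLTZMANN * temp_kelvin * cap_farads) / Q_ELECTRON

# Assumed 5 fF sense node at room temperature:
n = ktc_noise_electrons(5e-15)   # roughly 28 e- rms
```

Uncorrected kTC noise at this scale is comparable to the read-noise figures discussed later, which is why practical pixel designs suppress it with correlated sampling or soft-reset techniques.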

Designers of CMOS-based sensors reduce noise while eliminating the classic noise-versus-resolution trade-off because the pixel-based amplifier’s bandwidth better matches the imager’s sampling frequency. The CMOS output buffer’s noise is usually negligible.

In practice, therefore, CMOS can circumvent the “3 dB per octave” increase in noise experienced with CCD sensors and the associated degradation in camera S/N. Several CMOS manufacturers have now demonstrated this basic trait. Included in Figure 3, for example, are noise measurements at frequencies as high as 75 MHz for Rockwell Scientific’s ProCam-HD sensor, whose read noise is <25 e- so that the camera S/N ratio can be boosted well above 54 dB.[2] More recently, Takayanagi’s 8.3M-pixel CMOS sensor for UHDTV offers 10-bit performance at a composite data rate of ~500 MHz.[3]

Figure 3. CCD and CMOS sensor noise in volts and electrons, and SNR versus video frequency.


Today’s broadcast cameras offer the camera operator the choice of changing the video bandwidth to trade lower noise against higher resolution. This option is unnecessary with CMOS-based cameras because the noise-setting bandwidth is at the pixel amplifier rather than the video output amplifier. So, besides the potential for lower noise and higher S/N ratio, broadcast cameras with CMOS sensors can have full Nyquist-limited resolution.

Thus, for the same architectural reasons that enable lower read noise at high video rates, CMOS-based imaging sensors achieve higher square-wave modulation transfer function than competing CCDs. Figure 4 directly compares actual video waveforms from the green channels of CCD and CMOS cameras, for several pixels of video and for one line of video. In the case of the CMOS sensor, a square-wave pattern with continuously variable frequency was used to produce the line video. Since the CMOS sensor has digital output, its analog video is reconstructed from the data using a high-speed video DAC.

Figure 4. Analog video waveforms for HD-CCD and HD-CMOS sensors at 1080/30p

While the CCD’s modulation at 1000 lines is seen to be nearly zero due to the 33 MHz filter used to reduce white noise, the ProCam-HD CMOS sensor maintains full resolution. Further, while the CCD’s read noise is up to 4 times higher than the CMOS sensor’s, its rise and fall times are unsettled and about 3 times slower than the DAC’s 1.5 ns response time.

The square-wave MTFs for the two 1920x1080 sensors are compared in Figure 5. The CCD data are for the green channel of a 3-CCD camera, while the CMOS data include the blue, green, and red channels. The CMOS blue-channel MTF at the Nyquist frequency is only slightly below the Nyquist-limited value of ~70%. Even the red channel of the CMOS sensor, which is degraded by the MTF of the test optics, is superior to the critical green channel of the CCD.

Figure 5. Measured MTF for HD-CCD and HD-CMOS sensors
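The Nyquist-limited value discussed above can be sanity-checked from the pixel geometry alone. A minimal sketch of the sine-wave MTF of an idealized square pixel aperture, assuming the 5 micron pitch cited later for 2/3-inch HD sensors (square-wave figures such as the ~70% quoted run somewhat higher than this sine-wave result):

```python
import math

def aperture_mtf(spatial_freq, pixel_pitch, fill=1.0):
    """Sine-wave MTF of a square pixel aperture: |sinc(f * w)|,
    where w is the active aperture width (pitch * sqrt(fill factor))."""
    w = pixel_pitch * math.sqrt(fill)
    x = spatial_freq * w
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

pitch = 5e-6                      # 5 um pitch, per the text
nyquist = 1 / (2 * pitch)         # cycles per metre
m = aperture_mtf(nyquist, pitch)  # 2/pi ~ 0.64 at 100% fill factor
```

Because a sampled sensor cannot respond above Nyquist, any modulation measured there that approaches this geometric limit, as in the CMOS channels of Figure 5, indicates the sensor itself is not the resolution bottleneck.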


Perhaps the most criticism aimed at CMOS sensors is directed equally toward sensitivity and fixed-pattern noise. The latter is being handled by on-chip signal processing, both analog and digital. The former can be more difficult to mitigate since manufacturers of CCD-based cameras have cleverly enhanced sensitivity beyond the limit set by each pixel’s area by further compromising resolution. Since CCD MTF is not Nyquist-limited for HD video, CCD manufacturers offer an operating mode in which two adjacent video lines are co-added to nearly double sensitivity. While this degrades vertical resolution, concerns for maximizing sensitivity and S/N ratio usually take precedence.
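The benefit of the two-line co-adding mode can be quantified with a simple shot-noise-plus-read-noise budget. The signal and noise levels below are illustrative assumptions, not measured values; the key point is that charge-domain binning doubles the signal while the summed charge incurs the read noise only once:

```python
import math

def snr_db(signal_e, read_noise_e):
    """S/N in dB for a pixel limited by photon shot noise and read noise."""
    noise = math.hypot(math.sqrt(signal_e), read_noise_e)  # quadrature sum
    return 20 * math.log10(signal_e / noise)

single = snr_db(5000, 25)    # one line: assumed 5000 e- signal, 25 e- read noise
# Charge-domain binning of two lines: signal doubles, read noise added once.
binned = snr_db(10000, 25)   # roughly 3 dB better than a single line
```

The ~3 dB gain comes at the cost of halved vertical sampling, which is exactly the sensitivity-versus-resolution compromise described above.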

The sensitivity of today’s best CMOS sensors matches that of comparable CCD devices since the amount of light collected at each pixel is now roughly equal. Specifically, when deep-submicron CMOS processes are used to design and fabricate the sensor, the photon collection efficiency of a CMOS pixel supporting progressive image formation is significantly higher than that of a progressive CCD at the 5 micron pixel pitch of 2/3-inch HDTV sensors.

In the absence of microlens technology, both Frame Interline Transfer and Frame Transfer CCDs can collect more light than today’s CMOS sensors. The former, however, supports only interlaced image formation, with a full field time delay that further degrades resolution. The latter are still being developed to support HD video and pose challenges with respect to vertical smear, shading and cost.


Other factors must be considered as broadcast camera technology migrates from NTSC to ATSC. These should include the following:

- Reduce camera cost to enable quick acceptance and proliferation

- Lower camera power to reduce heat and extend battery life

- Reduce camera size to enable proliferation and improve usability

- Eliminate vertical smear

- Provide digital video to enhance creativity and ease of use, and to facilitate a cinema “look” when desired

- Provide full performance support for 1080/60p to enable content creation at the highest quality and minimize artifacts when broadcasting 1080/60i and 1080/30p


The collective evidence now available from several manufacturers strongly suggests that CMOS sensor technology best addresses both these factors and the highly technical issues previously discussed. CMOS imaging system-on-chip technology thus can truly help migrate broadcast technology to the next level.


CMOS sensor technology has already enabled the development of visible focal plane arrays with ultra-low read noise and high sensitivity for the HDTV market. The successful integration of silicon photodetectors with low-noise pixel-based amplifiers in fine pixel pitch via state-of-the-art CMOS fabrication technology now not only suggests, but is demonstrating the imminent obsolescence of CCDs in high-quality HD broadcast applications.

The charge-coupled device (CCD) has been the preferred visible image-capture sensor technology in applications ranging from consumer digital cameras to expensive scientific instruments, primarily due to its relatively low-noise operation. However, the CMOS-based paradigm today offers fundamental performance advantages, including optimum bandwidth and higher sensitivity. CMOS image sensors available today offer the lowest noise, lowest power consumption and best 12-bit ADC performance of any HD sensor on the market. These sensors contain photodetectors optimized for low dark current, high quantum efficiency and high uniformity that operate at high HDTV data rates with lower noise than any CCD alternative.

The advantages of CMOS sensor technology have arrived, offering the HDTV broadcast and professional video markets the most advanced performance available.


1.        K. Mitani, M. Sugawara and F. Okano, “Experimental Ultrahigh-Definition Color Camera System with Three 8M pixel CCDs,” SMPTE Journal, April 2002.

2.        M. Loose, L.J. Kozlowski, A.M. Joshi, A. Kononenko, S. Xue, J. Lin, J. Luo, I. Ovsiannikov, J. Clarke and T. Paredes, “2/3-inch CMOS Imaging Sensor for High Definition Television,” 2001 IEEE Workshop on CMOS and CCD Imaging Sensors, June 2001.

3.        I. Takayanagi, M. Shirakawa, S. Iversen, J. Moholt, J. Nakamura and E. Fossum, “A 1¼-inch 8.3M Pixel Digital Output CMOS APS for UDTV Application,” ISSCC 2003, San Francisco, CA, February 2003.