Fiber Optic Transmission Systems – Part 12

Published on Oct 22, 2014 | News

Digital Modulation

The digital bit is the basic unit of digital information. This unit has two values: one or zero. The bit represents the electronic equivalent of the circuit being on or off, where a zero equals off and a one equals on. One bit of information is limited to these two values. Digital information is transmitted through the fiber serially, one bit at a time.

A digital pulse train represents the ones and zeros of digital information. The pulse train can also depict high and low electrical voltage levels or the presence and absence of a voltage.
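
As a minimal illustration of this idea, the sketch below (Python, illustrative only; the function name is our own) turns a sequence of bytes into the serial one/zero pulse train described above. A real transmitter would add line coding and clock recovery, which are omitted here.

    # Minimal sketch: serialize bytes into the one/zero pulse train,
    # most significant bit first. 1 = pulse on, 0 = pulse off.
    def to_pulse_train(data: bytes) -> list[int]:
        bits = []
        for byte in data:
            for i in range(7, -1, -1):        # MSB first
                bits.append((byte >> i) & 1)
        return bits

    print(to_pulse_train(b"\xA5"))  # [1, 0, 1, 0, 0, 1, 0, 1]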

Digital in the Television and Video Industries

The term "digital signal" can mean different things to video and cable TV system engineers, which causes much confusion. The most common types of digital video and audio are:

• Uncompressed digital video and audio

• Lossless compression of digital video and audio

• Lossy compression of digital video and audio

• Complex digital modulation schemes such as 64 QAM, 256 QAM, 16 VSB, 64 QPSK, etc.

• SONET, ATM, or other telecom-based standards

• Serial digital interface (SDI)

• High-definition serial digital interface (HD-SDI)

• Digital audio or AES/EBU

The process of digitizing a standard NTSC video signal is straightforward. The typical bandwidth of a video signal is 4.5 MHz. Typically a sample rate of four times the video bandwidth is used, or about 18 megasamples per second. The analog-to-digital (A/D) converter typically has a sampling resolution of 8, 10, or 12 bits.
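
The raw serial rate follows directly from the sample rate and the quantization depth. The short sketch below (Python, using the figures quoted above, purely illustrative) shows the arithmetic behind the roughly 144–270 Mbps range given in the next paragraph; the 270 Mbps upper end corresponds to component sampling, where the two color-difference channels contribute their own samples (the familiar 270 Mbps SDI rate).

    # Illustrative arithmetic only: raw (uncompressed) serial data rate
    # from the sampling figures quoted above.
    sample_rate_hz = 4 * 4.5e6          # ~4x the 4.5 MHz bandwidth = 18 Msamples/s
    for bits_per_sample in (8, 10, 12):
        rate_bps = sample_rate_hz * bits_per_sample
        print(f"{bits_per_sample}-bit quantization: {rate_bps / 1e6:.0f} Mbps")
    # 8-bit  -> 144 Mbps
    # 10-bit -> 180 Mbps
    # 12-bit -> 216 Mbps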

The digitizing process generates a serial digital data stream of about 144–270 Mbps. The video signal is typically encoded in a digital format at the video source or in the video camera. Depending on the digital video format, the analog video will be encoded in one of several standard formats such as 4:2:2, 4:1:1, or 4:2:0. While these encoding schemes are not referred to as compression, they omit or remove certain information to reduce the system's bandwidth requirement. In the encoding schemes above, the three digits refer to the three common components of video. The first component is luminance (Y), or the light intensity of the video signal. The second is the color signal of red minus luminance (R–Y). The third component is the color signal of blue minus luminance (B–Y). These three components are referred to as YUV. The numbers in 4:2:2 indicate that twice the bandwidth is used for the Y channel as for each of the two color channels. This technique is a form of compression that will be addressed later in the chapter. HDTV or high-definition video requires a data rate of 1.485 Gbps for one uncompressed signal.
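
To make the bandwidth effect of these schemes concrete, the sketch below (Python, illustrative only; the helper names are our own) compares the relative sample counts of 4:4:4, 4:2:2, 4:1:1, and 4:2:0 for the same picture.

    # Relative chroma-subsampling cost: samples per 4x2 block of pixels,
    # listed as (luma, Cb, Cr). The ratios are the standard ones; this is
    # only meant to show the bandwidth saving, not a codec.
    SCHEMES = {
        "4:4:4": (8, 8, 8),   # full chroma resolution (reference)
        "4:2:2": (8, 4, 4),   # chroma halved horizontally
        "4:1:1": (8, 2, 2),   # chroma quartered horizontally
        "4:2:0": (8, 2, 2),   # chroma halved horizontally and vertically
    }
    full = sum(SCHEMES["4:4:4"])
    for name, (y, cb, cr) in SCHEMES.items():
        total = y + cb + cr
        print(f"{name}: {total}/{full} of the 4:4:4 data rate "
              f"({100 * total / full:.0f} %)")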

The most efficient means of analog video transport utilizes analog-to-digital conversion. Once video and audio signals are converted to digital information, many channels can be combined into one high-speed data stream using time-division multiplexing (TDM). The high-speed serial digital data stream is then converted to light via a laser or LED.

The receiver unit performs the reverse function. The light or optical signal is received by a PIN photodetector and converted back into a serial data stream. The data stream is then demultiplexed into its individual channels, and the digital data is converted back to video and audio via digital-to-analog (D/A) converters.
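
As a rough illustration of the multiplex/demultiplex step (not any particular transport standard), the sketch below (Python, illustrative only) interleaves one byte per channel into a single serial stream and separates the channels again. Real systems add framing, synchronization, and error handling, which are omitted here.

    # Byte-interleaved TDM sketch: each channel contributes one byte per
    # frame, round-robin; the receiver de-interleaves by position.
    def tdm_mux(channels: list[bytes]) -> bytes:
        frames = zip(*channels)                  # assumes equal-length channels
        return bytes(b for frame in frames for b in frame)

    def tdm_demux(stream: bytes, n_channels: int) -> list[bytes]:
        return [stream[i::n_channels] for i in range(n_channels)]

    video   = bytes([1, 2, 3, 4])
    audio_l = bytes([5, 6, 7, 8])
    audio_r = bytes([9, 10, 11, 12])
    muxed = tdm_mux([video, audio_l, audio_r])
    assert tdm_demux(muxed, 3) == [video, audio_l, audio_r]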

Digital video transmission has many advantages over analog transmission. An analog fiber-optic system requires high-linearity optical components that are expensive and require fine-tuning and complex calibration procedures. Once a video or audio signal has been digitized, it can be transported via fiber using readily available digital telecom optical components for both multimode and single-mode applications. A digital system has higher immunity to noise and superior performance characteristics compared to an analog system. A digital signal can be regenerated and repeated virtually indefinitely without signal or performance degradation.
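
The regeneration claim can be seen with a toy threshold decision: as long as the noise added on each hop stays below the decision threshold, every repeater re-emits clean ones and zeros, so noise does not accumulate from hop to hop. The sketch below (Python, illustrative only, not a transmission model) shows the idea.

    import random

    # Toy regenerator sketch: each hop adds small noise, then a hard
    # decision at 0.5 restores clean logic levels.
    def noisy_hop(levels, noise=0.2):
        return [v + random.uniform(-noise, noise) for v in levels]

    def regenerate(levels, threshold=0.5):
        return [1 if v > threshold else 0 for v in levels]

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    signal = [float(b) for b in bits]
    for _ in range(1000):                    # many repeated hops
        signal = [float(b) for b in regenerate(noisy_hop(signal))]
    assert regenerate(signal) == bits        # data still intact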

Compressed Digital Video

When compression is introduced into a video transport system, a substantial reduction in bandwidth can be achieved. A digital composite signal requires 144 Mbps and an HD-SDI signal requires 1.485 Gbps, so a system that must transport many channels of digital video requires an enormous amount of bandwidth. A compression system removes redundant or repetitive information from the digital data stream. A compression or transmission encoding scheme will also take advantage of limitations in the human eye. The human eye has lower sensitivity, or resolution, for color detail, and many compression or encoding schemes take this into account by compressing or omitting certain color detail.

There are two basic types of compression systems: lossless and lossy. A lossless compression system does not degrade the video or audio quality; the receiver unit recovers the original uncompressed information exactly. A lossless compression system strictly removes repetitive information from the data stream. Most video content has repetitive information from one video frame to the next. For example, the background image may not change from frame to frame, so there is no need to send this information repeatedly. Unfortunately, a lossless compression scheme does not offer significant bandwidth savings; a compression ratio of about 3:1 to 4:1 can be expected.
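
A minimal sketch of the lossless idea (Python, illustrative only, not a real codec): send only the pixels that changed since the previous frame. Because the receiver can apply those changes to its copy of the previous frame and rebuild the new frame exactly, no information is lost.

    # Lossless inter-frame sketch: transmit only the changed pixels.
    def encode_frame(prev, curr):
        return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

    def decode_frame(prev, changes):
        frame = list(prev)
        for i, v in changes:
            frame[i] = v
        return frame

    prev_frame = [10, 10, 10, 40, 40, 10, 10, 10]
    curr_frame = [10, 10, 10, 42, 41, 10, 10, 10]    # only two pixels changed
    changes = encode_frame(prev_frame, curr_frame)   # [(3, 42), (4, 41)]
    assert decode_frame(prev_frame, changes) == curr_frame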

A lossy compression scheme can achieve very high levels of compression, but at the cost of image or signal quality. A lossy compression algorithm removes detail from the original image, and once that information has been removed, it cannot be reconstructed. There are many compression and encoding schemes used in video transport. The 4:2:2, 4:1:1, and 4:2:0 encoding schemes mentioned earlier are techniques used to reduce bandwidth. Since the human eye has less sensitivity, or lower resolution, for color, these encoding schemes allocate less bandwidth to the color information. The human eye also has higher resolution horizontally than vertically; taking this into account, most video formats have higher horizontal resolution than vertical resolution.
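
A small sketch of why a lossy reduction cannot be undone (Python, illustrative only; the helper names are our own): averaging pairs of chroma samples before transmission and then duplicating them at the receiver restores something close to, but not identical to, the original values.

    # Lossy sketch: halve the chroma samples by averaging neighboring pairs,
    # then reconstruct by duplication. The fine detail is gone for good.
    def downsample(chroma):
        return [(chroma[i] + chroma[i + 1]) / 2 for i in range(0, len(chroma), 2)]

    def upsample(chroma):
        return [v for v in chroma for _ in range(2)]

    original      = [100, 104, 96, 90, 120, 118, 60, 64]
    transmitted   = downsample(original)      # [102.0, 93.0, 119.0, 62.0]
    reconstructed = upsample(transmitted)     # [102, 102, 93, 93, 119, 119, 62, 62]
    assert reconstructed != original          # detail cannot be recovered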

FIGURE 6.10-12: 16-QAM encoding phase constellation.

QAM Digital Encoding

Quadrature amplitude modulation (QAM) is a widely used modulation technique for video transport applications, particularly in digital cable TV systems. In a basic serial digital modulation scheme there are only two informational states: 1 and 0, or on and off. With 256-QAM there are 256 states. The information is encoded by varying the carrier's phase, over the full 360 degrees, and its amplitude. This modulation scheme can provide an enormous amount of data throughput in a limited amount of bandwidth, but a higher signal-to-noise ratio is required. Figure 6.10-12 is the phase constellation for a 16-QAM signal.
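
As a rough illustration of the constellation idea in Figure 6.10-12, the sketch below (Python, illustrative only; Gray coding and pulse shaping are omitted, and the function names are our own) maps each 4-bit symbol to one of 16 amplitude/phase points on a 4x4 grid, so every transmitted symbol carries 4 bits.

    # 16-QAM mapping sketch: each 4-bit symbol selects one of 16 points
    # in the I/Q (amplitude/phase) plane, so one symbol = 4 bits.
    LEVELS = (-3, -1, 1, 3)                     # 4 amplitude levels per axis

    def qam16_map(symbol: int) -> complex:
        i = LEVELS[(symbol >> 2) & 0b11]        # high two bits -> in-phase level
        q = LEVELS[symbol & 0b11]               # low two bits  -> quadrature level
        return complex(i, q)

    def qam16_demap(point: complex) -> int:
        i = min(range(4), key=lambda k: abs(point.real - LEVELS[k]))
        q = min(range(4), key=lambda k: abs(point.imag - LEVELS[k]))
        return (i << 2) | q

    for symbol in range(16):                    # round-trip all 16 symbols
        assert qam16_demap(qam16_map(symbol)) == symbol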
