The document discusses convolutional error control coding, detailing its structure, representations, and the Viterbi algorithm for optimum decoding. It compares error control techniques, including Automatic Repeat reQuest (ARQ) and Forward Error Correction (FEC), and illustrates the advantages of convolutional codes over block codes. It also includes MATLAB simulation analyses of the bit error rate and decoding-time trade-offs between hard and soft decisions.
MEH607 Error Control Coding — KOCAELI UNIVERSITY, Graduate School of Natural and Applied Sciences. Prepared by: Mohammed ABUIBAID (m.a.abuibaid@gmail.com). Submitted to: Dr. SITKI ÖZTÜRK, Electronic and Communication Engineering. Convolutional Error Control Coding. Academic Year 2015/2016.
Agenda
1. Introduction to Convolutional Codes
2. Convolutional Encoder Structure
3. Convolutional Encoder Representation (Vector, Polynomial, State Diagram and Trellis Representations)
4. Maximum Likelihood Decoder
5. Viterbi Algorithm
6. MATLAB Simulation: Hard and Soft Decisions, Bit Error Rate Trade-off, Consumed Time Trade-off
Error Control Techniques

ARQ (Automatic Repeat reQuest)
- The receiver sends feedback to the transmitter: if an error is detected in the received packet (NACK: Negative Acknowledgement), the transmitter retransmits that data block; if no errors are detected (ACK: Acknowledgement), it does not resend.
- Uses extra/redundant bits merely for error detection.
- Requires a full-duplex (two-way) connection between the transmitter and the receiver.
- Result: constant reliability, but varying data rate (throughput) due to retransmissions.

FEC (Forward Error Correction)
- The transmitter's encoder adds extra/redundant bits to a block of message data bits to form a codeword.
- The receiver can both detect errors and automatically correct errors incurred during transmission, without retransmission of the data.
- Requires only a simplex (one-way) connection between the transmitter and the receiver.
- Result: varying reliability, but constant data rate (throughput).

Hybrid ARQ (ARQ+FEC)
- Requires a full-duplex connection between the transmitter and the receiver.
- Uses both error detection and error correction codes.

In general, wire-line communications (more reliable) adopt the ARQ scheme, while wireless communications (relatively less reliable) adopt the FEC scheme.
FEC Coding Techniques

Block Code (No Memory)
- Collects k bits in a buffer prior to processing.
- There is no retention within the encoding system of information related to previous sample points: the code is memoryless.
- Each output codeword of an (n, k) block code depends only on the current buffer.

Convolutional Code (Memory)
- Why "convolutional"? Each bit in the output stream depends not only on the current bit, but also on those processed previously.
- The encoder acts on the serial bit stream as it enters the transmitter.
- The number of sample points collected prior to processing is far less than required for a block code, so the delay through the encoder is less.
- Its performance is less sensitive to signal-to-noise-ratio variations than that of block codes, so it is preferred in situations of limited power.
Convolutional Codes
Convolutional codes offer an approach to error control coding substantially different from that of block codes. A convolutional encoder:
- encodes the entire data stream into a single codeword
- does not need to segment the data stream into blocks of fixed size
- is a machine with memory
- is specified by three parameters $(n, k, K)$, or equivalently $(k/n, K)$

$R_c = k/n$ is the coding rate, determining the number of data bits per coded bit; in practice, k = 1 is usually chosen. K is the constraint length of the encoder, and the encoder has K-1 memory elements.
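As a concrete anchor for these parameters, here is a minimal MATLAB sketch (assuming the Communications Toolbox, which the simulations later in the deck rely on) describing the (2, 1, 3) code used throughout the following slides:

```matlab
% (2,1,3) convolutional code: n = 2 outputs, k = 1 input, constraint length K = 3.
K   = 3;                         % constraint length -> K-1 = 2 memory elements
gen = [7 5];                     % octal generators: 7 = binary 111, 5 = binary 101
trellis = poly2trellis(K, gen);  % encoder description used by convenc/vitdec

% Coding rate Rc = k/n, recovered from the trellis description:
Rc = log2(trellis.numInputSymbols) / log2(trellis.numOutputSymbols)   % = 1/2
```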
Convolutional Encoder Structure (rate 1/2, K = 3)
Three shift-register stages, where:
- the first one takes the incoming data bit
- the rest form the memory of the encoder
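To make the structure concrete, a minimal hand-rolled MATLAB sketch of this rate-1/2, K = 3 encoder (the generators [111, 101] are taken from the representation slides below; the example message is an illustrative assumption):

```matlab
g1  = [1 1 1]; g2 = [1 0 1];      % connection vectors of the two modulo-2 adders
msg = [1 0 1];                    % example data bits
sr  = [0 0];                      % K-1 = 2 memory elements, zero-initialized
out = [];
for bit = [msg 0 0]               % K-1 tail zeros flush the memory (see next slide)
    reg = [bit sr];               % register contents, newest bit first
    v1  = mod(sum(reg .* g1), 2); % output of the first modulo-2 adder
    v2  = mod(sum(reg .* g2), 2); % output of the second modulo-2 adder
    out = [out v1 v2];            %#ok<AGROW> interleave the n = 2 outputs
    sr  = reg(1:2);               % shift: the input bit enters the memory
end
disp(out)                         % 1 1 1 0 0 0 1 0 1 1
```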
Effective Code Rate
- Initialize the memory before encoding the first bit (all zeros).
- Clear out the memory after encoding the last bit (all zeros).
- Hence, a tail of zero bits is appended to the data bits.

With L data bits and k = 1 assumed, the effective code rate is
$R_{eff} = \frac{L}{n(L+K-1)} < R_c = \frac{1}{n}$
For example, with L = 3 (as in the sketch above), n = 2 and K = 3: $R_{eff} = \frac{3}{10} < R_c = \frac{1}{2}$.
Encoder Vector Representation
We define n binary vectors, each with K elements (one vector for each modulo-2 adder). The i-th element of a vector is "1" if the i-th stage of the shift register is connected to the corresponding modulo-2 adder, and "0" otherwise.

Examples (k = 1):
- Generator matrix with 2 vectors (rate 1/2): $g_1 = [1\,1\,1]$, $g_2 = [1\,0\,1]$
- Generator matrix with 3 vectors (rate 1/3): $g_1 = [1\,0\,0]$, $g_2 = [1\,0\,1]$, $g_3 = [1\,1\,1]$
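A small sketch of how these binary vectors map to the octal form expected by MATLAB's `poly2trellis` (the tooling is an assumption; the slides themselves only define the binary vectors):

```matlab
% Binary generator vectors -> octal numbers (MSB = current-input stage).
g1 = '111'; g2 = '101';
oct1 = str2double(dec2base(bin2dec(g1), 8));   % 7
oct2 = str2double(dec2base(bin2dec(g2), 8));   % 5
trellis = poly2trellis(3, [oct1 oct2]);        % the same rate-1/2, K = 3 encoder
```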
Encoder Polynomial Representation
Define n generator polynomials, one for each modulo-2 adder. Each polynomial is of degree $Kk - 1$ or less and describes the connection of the shift-register stages to the corresponding modulo-2 adder.

Example (k = 1), for the rate-1/2 encoder with $g_1 = [1\,1\,1]$ and $g_2 = [1\,0\,1]$:
$g_1(X) = 1 + X + X^2$, $g_2(X) = 1 + X^2$
The output sequence is found by multiplying the message polynomial $m(X)$ by each generator polynomial modulo 2 and interleaving the resulting coefficient sequences.
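A worked MATLAB sketch of the polynomial view, using `conv` for polynomial multiplication and reducing coefficients modulo 2 (the message $m(X) = 1 + X^2$, i.e. bits 101, is chosen to match the encoder sketch above):

```matlab
m  = [1 0 1];                  % m(X)  = 1 + X^2, lowest degree first
g1 = [1 1 1];                  % g1(X) = 1 + X + X^2
g2 = [1 0 1];                  % g2(X) = 1 + X^2
u1 = mod(conv(m, g1), 2);      % m(X)*g1(X) mod 2 -> 1 1 0 1 1
u2 = mod(conv(m, g2), 2);      % m(X)*g2(X) mod 2 -> 1 0 0 0 1
U  = reshape([u1; u2], 1, [])  % interleave: 11 10 00 10 11, matching the shift-register output
```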
Encoder State Diagram Representation
In a convolutional encoder, the state is represented by the content of the memory; hence there are $2^{K-1}$ states. A state diagram contains all the states and all possible transitions between them. For k = 1, only two transitions initiate from each state, and only two transitions end up in each state.
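A short sketch enumerating the $2^{K-1} = 4$ states of the rate-1/2, K = 3 code and the two transitions leaving each one (a didactic loop, not toolbox functionality):

```matlab
g1 = [1 1 1]; g2 = [1 0 1];
for s = 0:3                                   % the 2^(K-1) = 4 states
    mem = [bitget(s, 2) bitget(s, 1)];        % state = memory contents
    for bit = 0:1                             % only two transitions per state
        reg  = [bit mem];
        out  = [mod(sum(reg .* g1), 2), mod(sum(reg .* g2), 2)];
        next = reg(1:2);                      % state after the shift
        fprintf('state %d%d --in %d / out %d%d--> state %d%d\n', ...
                mem, bit, out, next);
    end
end
```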
Encoder Trellis Representation
The trellis diagram is an extension of the state diagram that shows the passage of time. (Example: a section of the trellis for the rate-1/2 code.)
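The same information, viewed as one trellis section: `poly2trellis` returns the transition and output tables that define it (a sketch, again assuming the Communications Toolbox):

```matlab
trellis = poly2trellis(3, [7 5]);
trellis.numStates     % 4 states per trellis level
trellis.nextStates    % 4x2 table: next state for input 0 (col 1) and input 1 (col 2)
trellis.outputs       % 4x2 table: encoder output (in octal) for each transition
```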
Optimum Decoding (Maximum Likelihood)
If the input message sequences are equally likely, the optimum decoder, i.e. the one that minimizes the probability of error, is the Maximum Likelihood (ML) decoder. The ML decoder selects, among all possible codewords, the codeword $U^{(m)}$ that maximizes the likelihood function $p(Z \mid U^{(m)})$, where $Z$ is the received sequence. For L data bits, there are $2^L$ codewords to search.
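For intuition, a brute-force ML sketch for hard decisions, where maximizing the likelihood reduces to minimizing Hamming distance; this is feasible only for tiny L, since all $2^L$ messages are enumerated (the received vector is an illustrative assumption containing one bit error):

```matlab
trellis = poly2trellis(3, [7 5]);          % rate-1/2, K = 3 code
L  = 3;                                    % data bits -> 2^L candidate messages
rx = [1 1 1 0 0 1 1 0 1 1];                % hypothetical hard-decision receive
best = Inf;
for m = 0:2^L - 1
    msg = bitget(m, L:-1:1);               % candidate message bits
    cw  = convenc([msg 0 0], trellis);     % K-1 tail zeros appended
    d   = sum(cw ~= rx);                   % Hamming distance metric
    if d < best
        best = d; bestMsg = msg;           % keep the closest codeword so far
    end
end
bestMsg                                    % ML decision: 1 0 1
```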
The Viterbi Algorithm
The Viterbi algorithm performs Maximum Likelihood decoding. It finds the path through the trellis with the largest metric (maximum correlation or minimum distance):
- It processes the demodulator outputs in an iterative manner.
- At each step in the trellis, it compares the metrics of all paths entering each state and keeps only the path with the largest metric, called the survivor, together with its metric.
- It proceeds through the trellis by eliminating the least likely paths.
It reduces the decoding complexity to $L \cdot 2^{K-1}$ (compared with the $2^L$ codewords of an exhaustive search).
The Viterbi Algorithm
A. Do the following setup:
- For a data block of L bits, form the trellis. The trellis has L+K-1 sections (levels); it starts at time $t_1$ and ends at time $t_{L+K}$.
- Label all the branches in the trellis with their corresponding branch metrics.
- For each state in the trellis at time $t_i$, denoted $S(t_i) \in \{1, 2, \ldots, 2^{K-1}\}$, define a parameter $\Gamma(S(t_i), t_i)$.
B. Then, do the following:
1. Set $\Gamma(0, t_1) = 0$ and $i = 2$.
2. At time $t_i$, compute the partial path metrics for all the paths entering each state.
3. Set $\Gamma(S(t_i), t_i)$ equal to the best partial path metric entering each state at time $t_i$.
4. Keep the survivor path and delete the dead paths from the trellis.
5. If $i < L + K$, increase $i$ by 1 and return to step 2.
C. Start at state zero at time $t_{L+K}$. Follow the surviving branches backwards through the trellis. The path thus defined is unique and corresponds to the ML codeword.
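In practice, the algorithm above is available as MATLAB's `vitdec`; a minimal sketch with hard decisions and zero-tail ('term') operation matching the setup step (the injected error position is illustrative):

```matlab
trellis = poly2trellis(3, [7 5]);
msg     = [1 0 1];
coded   = convenc([msg 0 0], trellis);        % encode with K-1 tail zeros
coded(4) = ~coded(4);                         % inject a single channel bit error
tblen   = 5;                                  % traceback depth
decoded = vitdec(coded, trellis, tblen, 'term', 'hard');
isequal(decoded(1:3), msg)                    % true: the single error is corrected
```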
Error Correcting Code Gain
Coding gain is defined as the reduction in $E_b/N_0$ (in dB) needed to obtain the same error rate. Example: for a BER of $10^{-6}$, $(E_b/N_0)_c = 11$ dB is needed with coding, while $(E_b/N_0)_u = 13.77$ dB is needed without coding. The coding gain is therefore $G = 13.77 - 11 = 2.77$ dB.
MATLAB Simulation: BER and Time Performance
Convolutional codec with Viterbi decoding, generator vectors [111, 101], hard and soft decisions, versus a block codec, Hamming (7,4), hard and soft decisions. (Pictured: Richard Hamming and Andrew J. Viterbi.)
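A hedged reconstruction of the simulation core: BPSK over AWGN with the [111, 101] code, decoded with hard decisions and with unquantized soft decisions (`'unquant'`; the original simulation may have used quantized soft decisions, and all parameter values here are illustrative assumptions):

```matlab
trellis = poly2trellis(3, [7 5]);             % generator vectors [111, 101]
Rc = 1/2; L = 1e5; tblen = 15; EbNo_dB = 4;   % assumed simulation parameters
msg   = randi([0 1], 1, L);
coded = convenc([msg 0 0], trellis);          % zero-tail termination
tx    = 1 - 2*coded;                          % BPSK mapping: 0 -> +1, 1 -> -1
sigma = sqrt(1 / (2 * Rc * 10^(EbNo_dB/10))); % noise std for unit-energy coded bits
rx    = tx + sigma * randn(size(tx));         % AWGN channel

hardIn  = double(rx < 0);                                    % hard demodulator decisions
decHard = vitdec(hardIn, trellis, tblen, 'term', 'hard');
decSoft = vitdec(rx,     trellis, tblen, 'term', 'unquant'); % +1 ~ logical 0, -1 ~ logical 1

berHard = biterr(msg, decHard(1:L)) / L
berSoft = biterr(msg, decSoft(1:L)) / L       % soft decisions typically gain ~2 dB
```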
Error Performance Trade-offs
- Trade-off 1: Hamming coding, hard decision vs. soft decision
- Trade-off 2: Convolutional coding, hard decision vs. soft decision
- Trade-off 3: Block coding vs. convolutional coding
BER Performance: Hard Decision vs. Soft Decision
[Figure: BER curves for Trade-off 1 (Hamming decoding) and Trade-off 2 (Viterbi decoding).]
Time Performance Trade-offs
- Trade-off 4: Block coding, hard decision vs. soft decision
- Trade-off 5: Convolutional coding, hard decision vs. soft decision
- Trade-off 6: Block coding vs. convolutional coding
Time Performance: Hard Decision vs. Soft Decision
[Figure: consumed-time comparison for Trade-off 4 (Hamming decoding) and Trade-off 5 (Viterbi decoding).]
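A sketch of how such a timing comparison can be made with `tic`/`toc` (the workload size is an assumption; absolute times are machine-dependent, so only the hard/soft ratio is meaningful):

```matlab
trellis = poly2trellis(3, [7 5]);
msg   = randi([0 1], 1, 1e5);
coded = convenc([msg 0 0], trellis);
rx    = 1 - 2*coded;                 % noiseless BPSK is adequate for a pure timing run

tic; vitdec(coded, trellis, 15, 'term', 'hard');    tHard = toc;
tic; vitdec(rx,    trellis, 15, 'term', 'unquant'); tSoft = toc;
fprintf('hard: %.3f s   soft: %.3f s\n', tHard, tSoft);
```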