Binary symmetric channel
A binary symmetric channel (or BSC) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver receives a bit.
It is assumed that the bit is usually transmitted correctly, but that it will be "flipped" with a small probability, the "crossover probability". This channel is used frequently in information theory because it is one of the simplest channels to analyze. The BSC is a binary channel; that is, it can transmit only one of two symbols, usually called 0 and 1.
A non-binary channel would be capable of transmitting more than two symbols, possibly even an infinite number of choices. The transmission is not perfect, and occasionally the receiver gets the wrong bit. This channel is often used by theorists because it is one of the simplest noisy channels to analyze. Many problems in communication theory can be reduced to a BSC. Conversely, being able to transmit effectively over the BSC can give rise to solutions for more complicated channels.
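As an illustration, the channel model described above can be simulated in a few lines. This is only a sketch; the function name `bsc_transmit` and its parameters are our own choices for illustration, not part of any standard library:

```python
import random

def bsc_transmit(bits, p, rng=random.Random(0)):
    """Pass a sequence of bits through a binary symmetric channel:
    each bit is flipped independently with crossover probability p."""
    # (rng.random() < p) is True with probability p; XOR flips the bit.
    return [b ^ (rng.random() < p) for b in bits]

sent = [0, 1, 1, 0, 1, 0, 0, 1]
received = bsc_transmit(sent, p=0.1)
errors = sum(s != r for s, r in zip(sent, received))
```

With `p = 0` the channel is noiseless and with `p = 1` every bit is inverted; the interesting regime is small positive `p`.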
The channel capacity of the binary symmetric channel with crossover probability p is C = 1 − H_b(p), where H_b(p) = −p log2(p) − (1 − p) log2(1 − p) is the binary entropy function. The entropy of a binary variable is at most one, and reaches this value only if its probability distribution is uniform; consequently the capacity is zero exactly when p = 1/2.
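The capacity formula above is easy to evaluate numerically; a minimal sketch (function names are ours, chosen for illustration):

```python
import math

def binary_entropy(p):
    """H_b(p) = -p*log2(p) - (1-p)*log2(1-p), with H_b(0) = H_b(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the BSC with crossover probability p: C = 1 - H_b(p)."""
    return 1.0 - binary_entropy(p)

# bsc_capacity(0.0) -> 1.0 (noiseless channel)
# bsc_capacity(0.5) -> 0.0 (the output is independent of the input)
```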
Shannon's noisy coding theorem applies to all kinds of channels. We consider a special case of this theorem for a binary symmetric channel with an error probability p.
The decoding error probability is exponentially small. Proof of Theorem 1. First we describe the encoding and decoding functions used in the theorem. We will use the probabilistic method to prove this theorem; Shannon's theorem was one of the earliest applications of this method. Consider an encoding function E: {0, 1}^k → {0, 1}^n chosen at random, and a decoding function D: {0, 1}^n → {0, 1}^k that maps the received word to the codeword nearest to it in Hamming distance. This kind of decoding function is called a maximum likelihood decoding (MLD) function.
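For the BSC with p < 1/2, maximum likelihood decoding coincides with nearest-codeword decoding in Hamming distance, since each additional flipped bit makes a received word less likely. A small sketch of this decoder (the function names and the toy two-codeword code are hypothetical, chosen for illustration):

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def mld_decode(received, codebook):
    """Maximum likelihood decoding on the BSC (for p < 1/2 this equals
    nearest-codeword decoding in Hamming distance)."""
    return min(codebook, key=lambda c: hamming_distance(c, received))

# a hypothetical toy code with two codewords (5-fold repetition)
codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
decoded = mld_decode((0, 0, 1, 0, 0), codebook)  # -> (0, 0, 0, 0, 0)
```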
The proof runs as follows. The random-coding argument bounds the decoding error probability only on average over the codewords; we obtain a bound that holds for every codeword by eliminating half of the codewords from the code, arguing that the bound on the decoding error probability must hold for at least half of them. This latter step is called expurgation, which gives the total process the name random coding with expurgation.
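The expurgation step rests on an averaging (Markov) argument: if the average error probability over all codewords is at most ε, then fewer than half of the codewords can have error probability above 2ε, so discarding the worst half leaves a code in which every remaining codeword has error probability at most 2ε. A small numeric sketch, with made-up per-codeword error probabilities:

```python
def expurgate(error_probs):
    """Return the indices of the better half of the codewords
    (sorted by per-codeword decoding error probability)."""
    order = sorted(range(len(error_probs)), key=lambda i: error_probs[i])
    return order[: len(error_probs) // 2]

# hypothetical per-codeword error probabilities for an 8-word code
errs = [0.01, 0.30, 0.02, 0.05, 0.001, 0.40, 0.03, 0.02]
avg = sum(errs) / len(errs)
kept = expurgate(errs)
# every kept codeword has error probability at most twice the average
```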
A high-level proof: a decoding error occurs only if one of two events happens, namely the channel flips many more than a p fraction of the bits, or some other codeword lies closer to the received word than the codeword that was sent. That is to say, we need to estimate the probability of each of these events. We can apply the Chernoff bound to ensure the non-occurrence of the first event.
By applying the Chernoff bound, we have that the first event occurs with exponentially small probability. From the above analysis, we then calculate the probability of the event that the decoded codeword is not the same as the original codeword sent, where the last inequality follows from our analysis using the Chernoff bound above. Now, taking expectation on both sides over the random choice of the encoding function, we have that the expected decoding error probability is exponentially small as well.
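The Chernoff step can be made concrete numerically. The sketch below uses the Hoeffding form of the bound, Pr[#flips ≥ (p + ε)n] ≤ exp(−2ε²n), and compares it against the exact binomial tail; the function names are hypothetical:

```python
import math

def chernoff_bound(n, p, eps):
    """Hoeffding/Chernoff bound: Pr[#flips >= (p + eps)*n] <= exp(-2*eps^2*n)."""
    return math.exp(-2 * eps * eps * n)

def exact_tail(n, p, eps):
    """Exact binomial tail Pr[#flips >= ceil((p + eps)*n)] for the BSC."""
    k0 = math.ceil((p + eps) * n)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k0, n + 1))

n, p, eps = 200, 0.1, 0.05
# the exact tail lies below the bound, and both decay exponentially in n
```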
Since the above bound holds for each message, it also holds in expectation over the messages, and the expurgation process described above completes the proof of Theorem 1. Formally, the converse theorem states that reliable communication is not possible at rates above the channel capacity; for a detailed proof of this theorem, the reader is asked to refer to the bibliography. The intuition behind the proof is that the number of errors grows rapidly as the rate grows beyond the channel capacity. Much recent work has gone into designing explicit error-correcting codes that achieve the capacities of several standard communication channels.
The motivation behind designing such codes is to relate the rate of the code to the fraction of errors it can correct. Such codes are typically constructed to correct only a small fraction of errors with high probability while achieving a very good rate.
The first such code was due to George D. Forney. Forney's code is a concatenated code built by concatenating two different kinds of codes. We shall discuss the construction of Forney's code for the binary symmetric channel and briefly analyze its rate and decoding error probability here. Various explicit codes for achieving the capacity of the binary erasure channel have also come up recently.
However, we will see that the construction of such a code cannot be done in polynomial time. More recently, several other codes have been constructed to achieve the capacities; LDPC codes have been considered for this purpose because of their faster decoding time.