MIMO CHANNEL CAPACITY
Ochi Laboratory
Nguyen Dang Khoa (D1)
Contents
- Introduction
- Review of information theory
- Fixed MIMO channel
- Fading MIMO channel
- Summary and Conclusions
1. Introduction
The use of multiple antennas can provide gains:
- Antenna gain: with more receive antennas, more power is harvested
- Interference gain: interference nulling by beam-forming (array gain); interference averaging (toward zero) due to independent observations
- Diversity gain against fading: receive diversity and transmit diversity
An information-theoretic model of the MIMO channel is considered.
MIMO channel model
- Assume $n_t$ transmit and $n_r$ receive antennas: called an $n_t \times n_r$ MIMO system
- Fading radio channels are modeled as frequency-flat, either fixed or time-varying
- The channel may be known at the transmitter and/or the receiver:
  - Perfect channel state information (CSI)
  - A priori unknown
2. Review of information theory
- Information theory (IT) has its origins in analyzing the limits of communication.
- Information theory answers two fundamental questions in communication theory:
  - What is the ultimate data compression rate? Answer: entropy.
  - What is the ultimate data transmission rate? Answer: channel capacity.
Basic concepts
- Assume a discrete-valued random variable (RV) $X$ with probability mass function $p(x)$
- The average information, or entropy, of RV $X$:
  $H(X) = -\sum_x p(x) \log_2 p(x) = -E[\log_2 p(X)]$
- Entropy measures the expected uncertainty in RV $X$:
  - approximately how much information we learn on average from one instance of the RV $X$
  - how many bits are needed, on average, to convey the information contained in RV $X$
- The entropy of a binary RV as a function of the probability $p$: $H(p) = -p \log_2 p - (1-p)\log_2(1-p)$
Note: because information is measured in bits, the logarithm here is base 2; $E[\cdot]$ denotes the expected value of a random variable.
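To make the definition concrete, here is a minimal Python sketch (my addition, not part of the original slides) that computes the entropy of a discrete RV from its probability mass function; the helper name `entropy` is my own.

import numpy as np

def entropy(pmf):
    # H(X) = -sum_x p(x) log2 p(x); terms with p(x) = 0 contribute 0
    p = np.asarray(pmf, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.5, 0.5]))     # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))     # ~0.47 bits: a biased coin is less uncertain
print(entropy([1.0 / 6] * 6))  # ~2.58 bits: a fair six-sided die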
Basic concepts
- Joint entropy of RVs $X$ and $Y$:
  $H(X,Y) = -\sum_x \sum_y p(x,y) \log_2 p(x,y)$
  measures how much uncertainty there is in the two RVs $X$ and $Y$ taken together
- Conditional entropy of RV $Y$ given $X$:
  $H(Y|X) = -\sum_x \sum_y p(x,y) \log_2 p(y|x)$
  measures how much uncertainty remains about RV $Y$ when RV $X$ is known
- Mutual information is the relative entropy between the joint distribution and the product distribution:
  $I(X;Y) = \sum_x \sum_y p(x,y) \log_2 \frac{p(x,y)}{p(x)\,p(y)}$
  measures the mutual dependence of the two RVs
- Chain rule: $H(X,Y) = H(X) + H(Y|X)$
Note: because information is measured in bits, the logarithm here is base 2.
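A small illustrative sketch (my addition, not from the slides) of the mutual-information formula above, computed from a joint probability table; the helper `mutual_information` is hypothetical.

import numpy as np

def mutual_information(p_xy):
    # I(X;Y) = sum_{x,y} p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X (rows)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y (columns)
    prod = p_x @ p_y                        # product distribution p(x) p(y)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / prod[mask])))

print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0: X, Y independent
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0 bit: Y determined by X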
Channel capacity
- Information-theoretic model of a communication system
- Shannon proved that reliable (virtually error-free) communication is possible at rates up to the channel capacity:
  $C = \max_{p(x)} I(X;Y)$
- The distribution $p(x)$ that achieves the maximum is called the optimal input distribution
Gaussian channel
- Channel model: $Y = X + Z$ with noise $Z \sim \mathcal{N}(0, \sigma^2)$
- Gaussian channel capacity (bits per transmission):
  $C = \frac{1}{2} \log_2\left(1 + \frac{P}{\sigma^2}\right)$
  - $P = E[X^2]$ is the power constraint [J/symbol]
  - $\sigma^2 = E[Z^2]$ is the noise variance
Gaussian band-limited channel
- Assume the noise power-spectral density is $N_0/2$ over the band $[-W, W]$
- Common model for communication over a radio network or a telephone line
- Noise power: $\sigma^2 = N_0 W$
- Energy per sample of duration $T = \frac{1}{2W}$ seconds: $\frac{P}{2W}$ (there are $2W$ samples per second)
- Channel capacity:
  $C = W \log_2\left(1 + \frac{P}{N_0 W}\right)$ bits per second
- Example: telephone channel, $W = 3.3$ kHz; if $\frac{P}{N_0 W} = 40$ dB $= 10^4$, then $C = 3300 \cdot \log_2(1 + 10^4) \approx 43850$ bits per second
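As a quick check of the telephone-channel example, a short Python sketch (parameters taken from the slide) evaluating $C = W \log_2(1 + P/(N_0 W))$:

import numpy as np

W = 3300.0                    # bandwidth [Hz]
snr_db = 40.0                 # P / (N0 * W) in dB
snr = 10.0 ** (snr_db / 10)   # = 10**4
C = W * np.log2(1.0 + snr)    # C = W log2(1 + P/(N0 W))
print(round(C))               # 43850 bit/s, matching the slide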
Parallel Gaussian channels
- Capacity: $C = \sum_{i=1}^{k} \frac{1}{2} \log_2\left(1 + \frac{P_i}{\sigma_i^2}\right)$
- Optimal transmission: $X_i \sim \mathcal{N}(0, P_i)$, $i = 1, \dots, k$, with the powers $P_i$ chosen by water-filling (not derived in this presentation; a sketch follows below)
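Although water-filling is not derived in the presentation, the following sketch shows one standard way to compute it: bisect on the water level $\mu$ so that $P_i = \max(\mu - \sigma_i^2, 0)$ sums to $P$. The function name and the bisection approach are my assumptions, not the slides'.

import numpy as np

def water_filling(sigma2, P, iters=200):
    # Bisect on the water level mu so that sum_i max(mu - sigma2_i, 0) = P
    sigma2 = np.asarray(sigma2, dtype=float)
    lo, hi = sigma2.min(), sigma2.max() + P
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - sigma2, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - sigma2, 0.0)     # power P_i poured into channel i

sigma2 = np.array([0.5, 1.0, 2.0])          # noise variances of the sub-channels
Pi = water_filling(sigma2, P=3.0)
C = 0.5 * np.sum(np.log2(1.0 + Pi / sigma2))
print(Pi, C)                                # quieter channels receive more power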
3. Fixed MIMO Gaussian channel
- Signal $x_i(n)$ is transmitted at time interval $n$ from antenna $i$ ($i = 1, 2, \dots, n_t$)
- Signal $y_j(n)$ is received at time interval $n$ at antenna $j$ ($j = 1, 2, \dots, n_r$):
  $y_j(n) = \sum_{i=1}^{n_t} h_{ji} x_i(n) + z_j(n)$
  where $h_{ji}$ is the complex channel gain, normalized so that $E[|h_{ji}|^2] = 1$
Matrix formulation of the MIMO channel
The signal received at all antennas:
  $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{z}$  (1)
where
  $\mathbf{y} = [y_1(n), \dots, y_{n_r}(n)]^T$, $\mathbf{x} = [x_1(n), \dots, x_{n_t}(n)]^T$, $\mathbf{z} = [z_1(n), \dots, z_{n_r}(n)]^T$,
and $\mathbf{H}$ is the $n_r \times n_t$ matrix of channel gains $h_{ji}$.
Noise and power constraint
- The noise vector $\mathbf{z} = [z_1(n), \dots, z_{n_r}(n)]^T$ with $z_j(n) \sim \mathcal{CN}(0, 1)$
- The transmitted signal satisfies the average power constraint: $E[\|\mathbf{x}(n)\|^2] \le P$
- Since the noise power is normalized to unity, we commonly refer to the power constraint $P$ as the SNR
Singular value decomposition
- The MIMO model is a special case of parallel Gaussian channels
- For every $\mathbf{H}$, we can write the singular value decomposition
  $\mathbf{H} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^H$
  where $\mathbf{U}$, $\mathbf{V}$ are unitary matrices and $\boldsymbol{\Sigma}$ is a diagonal matrix of the singular values of $\mathbf{H}$
Equivalent channel model
- Define $\tilde{\mathbf{y}} = \mathbf{U}^H \mathbf{y}$, $\tilde{\mathbf{x}} = \mathbf{V}^H \mathbf{x}$, $\tilde{\mathbf{z}} = \mathbf{U}^H \mathbf{z}$
- Since $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices, the channel model $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{z}$ (1) is equivalent to
  $\tilde{\mathbf{y}} = \boldsymbol{\Sigma} \tilde{\mathbf{x}} + \tilde{\mathbf{z}}$
- Because $\boldsymbol{\Sigma}$ is a diagonal matrix, we have decomposed the correlated parallel channels into independent parallel channels (see the numerical check below)
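A numerical sketch (illustrative, not from the slides) of the decomposition above: for a random square channel, rotating the received vector by $\mathbf{U}^H$ and the transmitted vector by $\mathbf{V}^H$ reproduces the diagonal model. The random-channel construction is an assumption made for the demo.

import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4
# Random complex channel matrix for the demo (an assumption, not the slides')
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)                 # H = U @ diag(s) @ Vh
x = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)
z = rng.standard_normal(nr) + 1j * rng.standard_normal(nr)
y = H @ x + z                               # original model  y = Hx + z

y_t = U.conj().T @ y                        # receive-side rotation  U^H y
x_t = Vh @ x                                # transmit-side rotation V^H x
z_t = U.conj().T @ z
print(np.allclose(y_t, np.diag(s) @ x_t + z_t))   # True: independent parallel channels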
Equivalent channel model
[Figure: the MIMO channel viewed as independent parallel Gaussian channels]
Derivation of channel capacity
- The rank of matrix $\mathbf{H}$ is at most $\min(n_t, n_r)$
- The number of positive singular values equals $\text{rank}(\mathbf{H})$
- The capacity of the MIMO AWGN channel:
  $C = \sum_{i=1}^{k} \log_2\left(1 + P_i \lambda_i^2\right)$, $k = \text{rank}(\mathbf{H})$
  where $\lambda_i$ are the singular values of $\mathbf{H}$ and $P_i$ is the power allocated to sub-channel $i$, with $\sum_i P_i = P$
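A sketch of the capacity sum over eigenmodes, assuming unit noise variance (as on the power-constraint slide) and an equal power split $P_i = P/k$; the equal split is my assumption here, not the slides' optimal allocation.

import numpy as np

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
P = 10.0                                    # total power (= SNR, unit noise)

s = np.linalg.svd(H, compute_uv=False)      # singular values lambda_i
lam2 = s[s > 1e-12] ** 2                    # positive eigenvalues of H H^H
k = lam2.size                               # rank(H)
C = np.sum(np.log2(1.0 + (P / k) * lam2))   # equal split P_i = P/k (assumed)
print(k, C)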
MIMO channel capacity for full-rank channel matrix
- No CSI at the transmitter (and full-rank $\mathbf{H}$):
  $C = \log_2 \det\left(\mathbf{I}_{n_r} + \frac{P}{n_t} \mathbf{H}\mathbf{H}^H\right)$
- CSI at the transmitter (and full-rank $\mathbf{H}$):
  $C = \max_{\mathbf{Q}} \log_2 \det\left(\mathbf{I}_{n_r} + \mathbf{H}\mathbf{Q}\mathbf{H}^H\right)$
  where $\mathbf{Q}$ is the covariance matrix of the input vector $\mathbf{x}$ satisfying the power constraint $\text{tr}(\mathbf{Q}) \le P$
- In the case of no CSI at the transmitter, $\mathbf{Q} = \frac{P}{n_t} \mathbf{I}_{n_t}$
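A quick numerical check (my addition) that the no-CSIT formula $\log_2\det(\mathbf{I} + \frac{P}{n_t}\mathbf{H}\mathbf{H}^H)$ coincides with the eigenmode sum under equal per-antenna power $P/n_t$:

import numpy as np

rng = np.random.default_rng(2)
nt = nr = 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
P = 10.0

M = np.eye(nr) + (P / nt) * H @ H.conj().T
C_logdet = np.linalg.slogdet(M)[1] / np.log(2)   # log2 det(I + (P/nt) H H^H)

lam2 = np.linalg.svd(H, compute_uv=False) ** 2
C_sum = np.sum(np.log2(1.0 + (P / nt) * lam2))   # eigenmode form
print(np.isclose(C_logdet, C_sum))               # True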
MIMO channel characteristics
- [Figure: number of antennas vs. capacity of the channel]
- [Figure: MIMO channel capacity vs. SNR]
Ref. from http://ars.els-cdn.com/content/image/1-s2.0-s0166531609001096-gr10.jpg
Ref. from http://www.mathworks.com/matlabcentral/fx_files/30588/1/untitled.jpg
4. Fading MIMO channels
- The channels are usually assumed to be ergodic
- Fading is fast enough, and the channel passes through all of its realizations so many times, that:
  - the sample average equals the theoretical mean
  - the sample covariance equals the theoretical covariance
Fading channel model with perfect receiver CSI
- Assuming that the channel is memoryless (independent channel state for each transmission), the capacity equals the mean of the mutual information:
  $C = E_{\mathbf{H}}\left[\log_2 \det\left(\mathbf{I}_{n_r} + \frac{P}{n_t} \mathbf{H}\mathbf{H}^H\right)\right]$
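A Monte-Carlo sketch estimating this ergodic capacity by averaging the log-det mutual information over channel realizations; the i.i.d. Rayleigh fading model and the operating point are my assumptions, as the slide does not fix them.

import numpy as np

rng = np.random.default_rng(3)
nt = nr = 2
P = 10.0
trials = 10000

total = 0.0
for _ in range(trials):
    # i.i.d. Rayleigh fading: entries ~ CN(0, 1)  (assumed model)
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    M = np.eye(nr) + (P / nt) * H @ H.conj().T
    total += np.linalg.slogdet(M)[1] / np.log(2)
print(total / trials)   # sample mean -> ergodic capacity [bit/s/Hz]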
Non-ergodic channels
- The channels are not always ergodic: fading can be so slow that the channel undergoes only a few realizations, and the random process becomes non-ergodic
- For a non-ergodic channel, the channel capacity does not equal the average maximum mutual information; instead, the probability of outage for a given rate is used to measure the capacity of such a channel (capacity rate $R$ versus outage probability $P_{out}$)
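A sketch (with an assumed rate target and SNR, not given on the slide) estimating the outage probability $\Pr[\log_2\det(\mathbf{I} + \frac{P}{n_t}\mathbf{H}\mathbf{H}^H) < R]$ for a slow Rayleigh-fading channel:

import numpy as np

rng = np.random.default_rng(4)
nt = nr = 2
P = 10.0          # SNR (unit noise), an assumed operating point
R = 4.0           # target rate [bit/s/Hz], an assumed value
trials = 20000

rates = np.empty(trials)
for i in range(trials):
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    M = np.eye(nr) + (P / nt) * H @ H.conj().T
    rates[i] = np.linalg.slogdet(M)[1] / np.log(2)
print((rates < R).mean())   # empirical outage probability at rate R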
5. Summary and conclusions
- AWGN MIMO channels are an extension of parallel Gaussian channels (parallel channels: channels on different frequencies)
- The linear capacity increase becomes natural:
  $C = \log_2 \det\left(\mathbf{I}_{n_r} + \frac{P}{n_t} \mathbf{H}\mathbf{H}^H\right)$
Fading AWGN MIMO channel
- Ergodic channels:
  - the channel experiences all its states several times (no delay constraints and/or fast fading)
  - capacity equals the average mutual information:
    $C = E_{\mathbf{H}}\left[\log_2 \det\left(\mathbf{I}_{n_r} + \frac{P}{n_t} \mathbf{H}\mathbf{H}^H\right)\right]$
  - capacity increases linearly with $\min(n_t, n_r)$
- Non-ergodic channels:
  - capacity does not equal the average mutual information
  - capacity versus outage probability is applied to measure the capacity of non-ergodic channels
Expected value
The expected value (or expectation, mathematical expectation, EV, mean, first moment) of a random variable is the weighted average of all possible values that this random variable can take on.
For example, let $X$ represent the outcome of a roll of a six-sided die; more specifically, $X$ is the number of pips showing on the top face of the die after the toss. The possible values of $X$ are 1, 2, 3, 4, 5, 6, all equally likely (each having probability 1/6). The expectation of $X$ is:
$E[X] = 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 4 \cdot \frac{1}{6} + 5 \cdot \frac{1}{6} + 6 \cdot \frac{1}{6} = 3.5$
Wikipedia: http://en.wikipedia.org/wiki/expected_value
Shannon Channel Capacity
- The capacity of a channel is the maximum, asymptotic (in block length) error-free transmission rate that can be achieved
- The capacity of a MIMO channel is a complicated function of the channel conditions and transmit/receive processing constraints