The Degrees of Freedom of Compute-and-Forward
Urs Niesen, jointly with Phil Whiting
Bell Labs, Alcatel-Lucent
Problem Setting

[Block diagram: two encoders with messages m_1, m_2, channel gains h_{l,k}, noise z_1, z_2, two decoders computing a_1(m_1, m_2) and a_2(m_1, m_2), followed by an inverter recovering m_1, m_2]

K transmitters, messages m_1, ..., m_K, power constraint P
Gaussian channel with constant gains H = {h_{l,k}}
Each receiver l decodes a deterministic function a_l of the messages m_1, ..., m_K
Invert all computed functions a_1, ..., a_K to recover the messages m_1, ..., m_K
Computation capacity C(P, H, {a_l}) for fixed functions {a_l}; written C(P, H, A) for linear functions {a_l}
Computation capacity C(P, H) ≜ max_{a_l} C(P, H, {a_l}), with maximization over all invertible functions {a_l}
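The inversion step can be sketched numerically. The following is a minimal illustration (not from the talk) that assumes, as in lattice-based compute-and-forward, that the messages are symbols over a prime field Z_p and that the decoders deliver integer linear combinations mod p; the matrix A, field size p, and helper names are arbitrary choices for the example:

```python
# Sketch: recovering messages from decoded linear combinations
# a_l = sum_k A[l][k] * m_k (mod p) via an invertible integer matrix A.
p = 257  # field size; illustrative assumption, not from the talk

def mat_vec_mod(A, m, p):
    """Compute the linear functions a = A m (mod p)."""
    return [sum(A[l][k] * m[k] for k in range(len(m))) % p for l in range(len(A))]

def invert_2x2_mod(A, p):
    """Invert a 2x2 integer matrix over Z_p (det must be nonzero mod p)."""
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p
    det_inv = pow(det, -1, p)  # modular inverse (Python 3.8+)
    return [[(A[1][1] * det_inv) % p, (-A[0][1] * det_inv) % p],
            [(-A[1][0] * det_inv) % p, (A[0][0] * det_inv) % p]]

A = [[2, 1], [1, 1]]      # integer coefficient matrix, invertible mod p
m = [42, 123]             # two messages
a = mat_vec_mod(A, m, p)  # what the two decoders compute
m_rec = mat_vec_mod(invert_2x2_mod(A, p), a, p)
assert m_rec == m         # inverter recovers both messages
```

Since A is invertible over Z_p, the two decoded combinations determine both messages exactly, which is why the maximization in C(P, H) runs only over invertible function tuples.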
Computation Capacity: Some Special Cases and Bounds

Identity function A = I: C(P, H, I) is the capacity of the K-user interference channel (Motahari et al. 2009), so

C(P, H) ≥ C(P, H, I) = (K/4) log(P) + o(log(P)),

lim_{P→∞} C(P, H) / ((1/2) log(P)) ≥ K/2.

Allow cooperation among transmitters and among receivers: K×K MIMO channel (Telatar 1999), so

C(P, H) ≤ max_Q (1/2) log det(I + HQH^T),

lim_{P→∞} C(P, H) / ((1/2) log(P)) ≤ K.
Computation Capacity: Lattice Codes (Nazer and Gastpar 2009)

Two transmitters, one receiver, h = (1, 1): decode a = (1, 1)
Two transmitters, one receiver, h = (1, 0.5): scale output by β = 2, decode a = (2, 1)
Computation Capacity: Lattice Codes (Nazer and Gastpar 2009)

K transmitters, one receiver, h ∈ R^K: scale output by β ∈ R, decode a ∈ Z^K

C(P, h, a) ≥ max_β (1/2) log( P / (β² + P ‖βh − a‖²) )
          = (1/2) log( (1 + P‖h‖²) / (‖a‖² + P(‖a‖²‖h‖² − (h^T a)²)) )
          ≜ R_L(P, h, a)

K transmitters, K receivers, H ∈ R^{K×K}: decode A ∈ Z^{K×K}

C(P, H, A) ≥ R_L(P, H, A),  so max_A C(P, H, A) ≥ max_A R_L(P, H, A),  i.e., C(P, H) ≥ R_L(P, H)
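The rate expression R_L above can be evaluated directly. A minimal numeric sketch (the function names and the brute-force search over small integer coefficient vectors are my own, not from the talk):

```python
import itertools
import math

def lattice_rate(P, h, a):
    """Nazer-Gastpar computation rate R_L(P, h, a), clipped at zero."""
    h2 = sum(x * x for x in h)                 # ||h||^2
    a2 = sum(x * x for x in a)                 # ||a||^2
    ha = sum(x * y for x, y in zip(h, a))      # h^T a
    arg = (1 + P * h2) / (a2 + P * (a2 * h2 - ha * ha))
    return 0.5 * math.log2(arg) if arg > 1 else 0.0

def best_rate(P, h, amax=5):
    """Maximize R_L over nonzero integer vectors with entries in [-amax, amax]."""
    best = 0.0
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        if any(a):
            best = max(best, lattice_rate(P, h, a))
    return best

# With h = (1, 1), the choice a = (1, 1) makes the penalty term
# ||a||^2 ||h||^2 - (h^T a)^2 vanish, and it is the best integer choice:
P = 100.0
assert lattice_rate(P, (1, 1), (1, 1)) == best_rate(P, (1, 1))
```

This matches the h = (1, 1) example above: the receiver's best move is to decode the sum itself, since any a not proportional to h pays a penalty that grows linearly in P.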
Questions

We already know that the computation capacity satisfies
K/2 ≤ lim_{P→∞} C(P, H) / ((1/2) log(P)) ≤ K.

What are the degrees of freedom of compute-and-forward,
lim_{P→∞} C(P, H) / ((1/2) log(P)) = ?

What are the degrees of freedom achieved by lattice codes,
lim_{P→∞} R_L(P, H) / ((1/2) log(P)) = ?
Performance of Lattice Codes

R_L(P, h, a) = (1/2) log( (1 + P‖h‖²) / (‖a‖² + P(‖a‖²‖h‖² − (h^T a)²)) )

Integer channel gains H ∈ Z^{K×K}: set A = H ∈ Z^{K×K}, so lim_{P→∞} R_L(P, H) / ((1/2) log(P)) = K
Rational channel gains H ∈ Q^{K×K}: set A = qH ∈ Z^{K×K}, so lim_{P→∞} R_L(P, H) / ((1/2) log(P)) = K
Real channel gains H ∈ R^{K×K}: ?
Performance of Lattice Codes

Two transmitters, one receiver, h = (1, h_2)
Optimize over coefficients a ∈ Z² \ {0}

[Plots of the normalized rate max_a R_L(P, h, a) / ((1/2) log(1 + ‖h‖² P)) as a function of h_2 ∈ [0, 4], for P = 10, 20, 30, 40, 50 dB]
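The behavior in those plots can be reproduced numerically. A self-contained sketch (my own names and search bounds, not from the talk) comparing the normalized rate at an integer gain h_2 = 1 against an irrational gain h_2 = √2:

```python
import itertools
import math

def lattice_rate(P, h, a):
    """Nazer-Gastpar computation rate R_L(P, h, a), clipped at zero."""
    h2 = sum(x * x for x in h)
    a2 = sum(x * x for x in a)
    ha = sum(x * y for x, y in zip(h, a))
    arg = (1 + P * h2) / (a2 + P * (a2 * h2 - ha * ha))
    return 0.5 * math.log2(arg) if arg > 1 else 0.0

def normalized_rate(P, h2, amax=8):
    """max_a R_L(P, (1, h2), a) divided by (1/2) log(1 + ||h||^2 P)."""
    h = (1.0, h2)
    best = max(lattice_rate(P, h, a)
               for a in itertools.product(range(-amax, amax + 1), repeat=2)
               if any(a))
    return best / (0.5 * math.log2(1 + (1.0 + h2 * h2) * P))

# At the integer point h_2 = 1 the normalized rate stays near 1 even at
# high SNR; at an irrational point such as h_2 = sqrt(2) it is far lower,
# since no small integer vector a lines up with (1, sqrt(2)).
print(normalized_rate(1e5, 1.0))           # close to 1
print(normalized_rate(1e5, math.sqrt(2)))  # noticeably smaller
```

This is exactly the tension the next theorem quantifies: for almost every real H, no choice of integer coefficients keeps the penalty term small as P grows.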
Performance of Lattice Codes

Theorem 1. For any K ≥ 2 and almost every H ∈ R^{K×K},
lim_{P→∞} R_L(H, P) / ((1/2) log(P)) ≤ 2 / (1 + 1/K²) ≤ 2.

Compare to:
MIMO upper bound of K on the degrees of freedom of compute-and-forward
Decode-and-forward lower bound of K/2 on the degrees of freedom of compute-and-forward
Degrees of Freedom of Compute-and-Forward

Is compute-and-forward useful at high SNR? Yes!

Theorem 2. For any K ≥ 2 and almost every H ∈ R^{K×K},
lim_{P→∞} C(H, P) / ((1/2) log(P)) = K.
More precisely, −O( log^{K²/(1+K²)}(P) ) ≤ C(H, P) − (K/2) log(P) ≤ O(1).

Compute-and-forward achieves the MIMO upper bound of K degrees of freedom
Invertible functions can be encoded/decoded distributedly at the same asymptotic rate as the centralized scheme
Compute-and-forward achieves twice the degrees of freedom of decode-and-forward
Lower Bound in Theorem 2

The channel computes noisy linear combinations with real coefficients
Lattice codes transform this into a system computing noiseless linear combinations with integer coefficients
The achievable scheme in Theorem 2 opts instead to implement these two functions separately:
Use signal alignment to transform real linear combinations into integer linear combinations
Use a linear outer code to transform noisy linear combinations into noiseless linear combinations
Lower Bound in Theorem 2: Outline

The channel computes noisy linear combinations with real coefficients:
h_{1,1} m_1 + h_{1,2} m_2 + z_1

Use signal alignment to transform real linear combinations into integer linear combinations:
a_{1,1} m_1 + a_{1,2} m_2 + z_1
(split each message into several submessages; use tools from Diophantine approximation)

Use a linear outer code to transform noisy linear combinations into noiseless linear combinations:
a_{1,1} m_1 + a_{1,2} m_2
Lower Bound in Theorem 2: Preliminaries

Groshev's Theorem. For any ε > 0 and almost all (h_1, h_2, ..., h_K) ∈ R^K,
min_{q_i ∈ Z, q ≠ 0} |h_1 q_1 + ... + h_K q_K| = Ω( (max_i |q_i|)^{1−K−ε} ).

Let x_k = A q_k with q_k ∈ {−Q, −Q+1, ..., Q−1, Q}
Assume we observe y = h_1 x_1 + ... + h_K x_K + z
Minimum distance between (x_1, ..., x_K) ≠ (x'_1, ..., x'_K):
|Σ_{k=1}^K h_k (x_k − x'_k)| = A |Σ_{k=1}^K h_k (q_k − q'_k)| ≥ A · Ω(Q^{1−K−ε})
For A ≈ P^{(K−1)/(2K)} and Q ≈ P^{1/(2K)}, we satisfy the power constraint and can remove the noise
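The two scaling claims at the end of this slide can be checked directly. A small sketch (helper names and the test gains are my own choices) that brute-forces the minimum of the linear form for a "generic" h and verifies the A, Q parameter arithmetic:

```python
import itertools
import math

def min_form(h, Q):
    """Brute-force min over nonzero integer q with |q_i| <= Q of |<h, q>|."""
    grid = range(-Q, Q + 1)
    return min(abs(sum(hk * qk for hk, qk in zip(h, q)))
               for q in itertools.product(grid, repeat=len(h)) if any(q))

# For a generic h (here 1, sqrt(2), sqrt(3): no integer relation) the
# minimum is strictly positive and can only shrink as Q grows, at rate
# roughly Q^(1-K) by Groshev's theorem.
h = (1.0, math.sqrt(2), math.sqrt(3))
assert 0 < min_form(h, 8) <= min_form(h, 2)

# The choice A ~ P^((K-1)/2K), Q ~ P^(1/2K) then makes the transmit
# power A^2 Q^2 equal to P, while the minimum constellation distance
# A * Q^(1-K) stays constant in P, so the (constant-variance) noise
# can be removed.
K = 3
for P in (1e2, 1e4, 1e6):
    A, Q = P ** ((K - 1) / (2 * K)), P ** (1 / (2 * K))
    assert abs(A * A * Q * Q - P) / P < 1e-9   # power constraint met
    assert abs(A * Q ** (1 - K) - 1.0) < 1e-9  # distance independent of P
```

The exponents cancel exactly: A²Q² = P^{(K−1)/K} · P^{1/K} = P and A·Q^{1−K} = P^{(K−1)/(2K)} · P^{(1−K)/(2K)} = P⁰.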
Lower Bound in Theorem 2: Signal Alignment

Consider a simple interference channel without noise:
y_1 = x_1 + x_2
y_2 = x_1 + h x_2

Set
x_1 / A = q_{1,1} + h q_{1,2} + ...
x_2 / A = q_{2,1} + h q_{2,2} + ...

This is received as
y_1 / A = (q_{1,1} + q_{2,1}) + h (q_{1,2} + q_{2,2}) + ...
y_2 / A = q_{1,1} + h (q_{2,1} + q_{1,2}) + h² q_{2,2} + ...

Apply Groshev's Theorem to separate the equations
Use a linear outer code to drive the probability of error to zero
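The separation step at receiver 2 can be demonstrated concretely. In this sketch (my own toy values; the exhaustive search stands in for the Groshev-based separation argument), the receiver sees one real number y_2 / A = c_0 + c_1 h + c_2 h² and recovers the three integer coefficients, because for generic h the bounded integer combinations of 1, h, h² are well separated:

```python
import itertools
import math

h = math.sqrt(2) - 0.1  # generic (irrational) cross gain; illustrative choice
Q = 3                   # integer symbols q in {-Q, ..., Q}

# Transmitters send x_k / A = q_{k,1} + h * q_{k,2} (two submessages each).
q = {(1, 1): 2, (1, 2): -1, (2, 1): 3, (2, 2): 1}
y2 = (q[1, 1] + h * q[1, 2]) + h * (q[2, 1] + h * q[2, 2])  # receiver 2, noiseless

def separate(y, h, bound):
    """Recover (c0, c1, c2) with y = c0 + c1*h + c2*h^2 by exhaustive search.
    Works for generic h: distinct bounded integer combinations of 1, h, h^2
    stay bounded away from each other (the Groshev-type separation)."""
    grid = range(-bound, bound + 1)
    return min(itertools.product(grid, grid, grid),
               key=lambda c: abs(y - (c[0] + c[1] * h + c[2] * h * h)))

c0, c1, c2 = separate(y2, h, 2 * Q)
# The aligned equations: c0 = q_{1,1}, c1 = q_{2,1} + q_{1,2}, c2 = q_{2,2}.
assert (c0, c1, c2) == (q[1, 1], q[2, 1] + q[1, 2], q[2, 2])
```

Note the alignment at work: the coefficient of h mixes q_{2,1} and q_{1,2} into a single sum, which is fine because the scheme only needs an invertible set of integer combinations, not the individual submessages at each receiver.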
Summary

Compute-and-forward as a communication strategy for wireless networks
Lattice codes achieve at most 2 degrees of freedom over a K×K channel
However, a different implementation of compute-and-forward achieves K degrees of freedom
This matches the MIMO upper bound of K degrees of freedom
Compute-and-forward achieves twice the degrees of freedom of decode-and-forward