The Degrees of Freedom of Compute-and-Forward




The Degrees of Freedom of Compute-and-Forward
Urs Niesen, jointly with Phil Whiting (Bell Labs, Alcatel-Lucent)

Problem Setting

(Diagram: two encoders with inputs m_1, m_2, channel gains h_{1,1}, h_{1,2}, h_{2,1}, h_{2,2}, noise z_1, z_2, decoders computing a_1(m_1, m_2) and a_2(m_1, m_2), followed by an inverter recovering m_1, m_2)

K transmitters, messages m_1, ..., m_K, power constraint P
Gaussian channel with constant gains H = {h_{l,k}}
Each receiver l decodes a deterministic function a_l of the messages m_1, ..., m_K
An inverter recovers the messages m_1, ..., m_K from the computed functions a_1, ..., a_K

Computation capacity C(P, H, {a_l}) for fixed functions {a_l}; written C(P, H, A) when the functions {a_l} are linear with coefficient matrix A
Computation capacity C(P, H) = max_{a_l} C(P, H, {a_l}), with the maximization over all invertible collections of functions {a_l}

Computation Capacity: Some Special Cases and Bounds

Identity function A = I: C(P, H, I) is the capacity of the K-user interference channel, so C(P, H) ≥ C(P, H, I)
By interference alignment (Motahari et al. 2009), C(P, H, I) = (K/2) log(P) + o(log(P)), and therefore

    lim_{P→∞} C(P, H) / ((1/2) log(P)) ≥ K/2

Allowing cooperation among the transmitters and among the receivers yields a K × K MIMO channel, so (Telatar 1999)

    C(P, H) ≤ max_Q (1/2) log det(I + H Q Hᵀ)

and therefore

    lim_{P→∞} C(P, H) / ((1/2) log(P)) ≤ K
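As a quick numerical sanity check of the MIMO bound's slope (my own sketch, not part of the talk), the following evaluates (1/2) log det(I + H Q Hᵀ) for the particular choice Q = P·I; for an invertible H this already grows like (K/2) log(P):

```python
import math

# Sketch (not from the talk): for an invertible 2x2 H, the MIMO bound
# (1/2) log det(I + H Q H^T) grows like (K/2) log P.  We plug in the
# particular (suboptimal) choice Q = P*I, which already has that slope.
def mimo_rate(H, P):
    (h11, h12), (h21, h22) = H
    # G = H H^T
    g11 = h11*h11 + h12*h12
    g12 = h11*h21 + h12*h22
    g22 = h21*h21 + h22*h22
    # M = I + P*G; rate = (1/2) log det(M)
    m11, m12, m22 = 1 + P*g11, P*g12, 1 + P*g22
    return 0.5 * math.log(m11*m22 - m12*m12)

H = ((1.0, 0.3), (0.7, 1.9))  # arbitrary invertible example
P1, P2 = 1e6, 1e8
# Slope in (1/2) log(P): rate(P2) - rate(P1) ~ K * (1/2) log(P2/P1)
slope = (mimo_rate(H, P2) - mimo_rate(H, P1)) / (0.5 * math.log(P2 / P1))
print(round(slope, 3))  # close to K = 2
```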

Computation Capacity: Lattice Codes (Nazer and Gastpar 2009)

Two transmitters, one receiver, h = (1, 1): both transmitters use the same lattice code, so the channel output is the sum of two lattice points, which is again a lattice point; the receiver decodes the function a = (1, 1)

Two transmitters, one receiver, h = (1, 0.5): scale the output by β = 2 and decode a = (2, 1)

K transmitters, one receiver, h ∈ R^K: scale the output by β ∈ R and decode a ∈ Z^K, achieving

    C(P, h, a) ≥ max_β (1/2) log( P / (β² + P‖βh − a‖²) )
              = (1/2) log( (1 + P‖h‖²) / (‖a‖² + P(‖a‖²‖h‖² − (hᵀa)²)) )
              ≜ R_L(P, h, a)
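The closed form is the β-optimized expression evaluated at the MMSE scaling β* = P hᵀa / (1 + P‖h‖²). A small sketch (my own, not from the talk) checking numerically that the two expressions agree:

```python
import math

# Sketch (not from the talk): the closed-form computation rate equals the
# beta-optimized expression at the MMSE scaling
# beta* = P h^T a / (1 + P ||h||^2).
def rate_beta(P, h, a, beta):
    # (1/2) log( P / (beta^2 + P ||beta*h - a||^2) )
    err = sum((beta*hk - ak)**2 for hk, ak in zip(h, a))
    return 0.5 * math.log(P / (beta**2 + P*err))

def rate_closed(P, h, a):
    h2 = sum(x*x for x in h)
    a2 = sum(x*x for x in a)
    ha = sum(x*y for x, y in zip(h, a))
    return 0.5 * math.log((1 + P*h2) / (a2 + P*(a2*h2 - ha*ha)))

P, h = 100.0, (1.0, 0.5)
for a in [(2, 1), (1, 1)]:
    ha = sum(x*y for x, y in zip(h, a))
    h2 = sum(x*x for x in h)
    beta_star = P * ha / (1 + P * h2)
    gap = abs(rate_beta(P, h, a, beta_star) - rate_closed(P, h, a))
    print(a, gap < 1e-9)  # True for both coefficient vectors
```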

K transmitters, K receivers, H ∈ R^{K×K}: each receiver decodes an integer linear combination, collected in A ∈ Z^{K×K}, so that

    C(P, H, A) ≥ R_L(P, H, A)

and, maximizing over invertible integer matrices,

    C(P, H) ≥ max_A R_L(P, H, A) ≜ R_L(P, H)

Questions

We already know that the computation capacity satisfies

    K/2 ≤ lim_{P→∞} C(P, H) / ((1/2) log(P)) ≤ K

What are the degrees of freedom of compute-and-forward:

    lim_{P→∞} C(P, H) / ((1/2) log(P)) = ?

What are the degrees of freedom achieved by lattice codes:

    lim_{P→∞} R_L(P, H) / ((1/2) log(P)) = ?

Performance of Lattice Codes

    R_L(P, h, a) = (1/2) log( (1 + P‖h‖²) / (‖a‖² + P(‖a‖²‖h‖² − (hᵀa)²)) )

Integer channel gains H ∈ Z^{K×K}: set A = H ∈ Z^{K×K}; with a_l = h_l the Cauchy-Schwarz term ‖a_l‖²‖h_l‖² − (h_lᵀa_l)² vanishes in every row, so

    lim_{P→∞} R_L(P, H) / ((1/2) log(P)) = K

Rational channel gains H ∈ Q^{K×K}: set A = qH ∈ Z^{K×K} for a common denominator q; again

    lim_{P→∞} R_L(P, H) / ((1/2) log(P)) = K

Real channel gains H ∈ R^{K×K}: ?
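A minimal numeric check of the integer-gain case (my own sketch; the invertible integer matrix H below is an arbitrary example): choosing a_l = h_l row by row makes the Cauchy-Schwarz penalty vanish, and the summed rate divided by (1/2) log(P) approaches K.

```python
import math

def rate_closed(P, h, a):
    h2 = sum(x*x for x in h)
    a2 = sum(x*x for x in a)
    ha = sum(x*y for x, y in zip(h, a))
    return 0.5 * math.log((1 + P*h2) / (a2 + P*(a2*h2 - ha*ha)))

# Integer channel matrix (arbitrary invertible example); pick A = H row by
# row (a_l = h_l), so ||a||^2 ||h||^2 - (h^T a)^2 vanishes in every row.
H = [(2, 1, 0), (1, 3, 1), (0, 1, 1)]
P = 1e8
total = sum(rate_closed(P, h, h) for h in H)
dof = total / (0.5 * math.log(P))
print(round(dof, 2))  # close to K = 3
```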

Performance of Lattice Codes

Two transmitters, one receiver, h = (1, h_2)
Optimize over coefficient vectors a ∈ Z² \ {0}

[Figure: max_a R_L(P, h, a), normalized by (1/2) log(1 + ‖h‖²P), plotted against h_2 for P = 10, 20, 30, 40, and 50 dB]
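The optimization behind these plots can be reproduced by brute force (my own sketch; the search bound Amax and the sample values of h_2 are arbitrary choices of mine). At h_2 = 0.5 the vector a = (2, 1) is proportional to h and the normalized rate stays high, while at the irrational h_2 = √2 every integer a pays a Diophantine penalty:

```python
import math

# Sketch: brute-force search over integer coefficient vectors a for
# h = (1, h2), reproducing the normalized-rate computation behind the
# plots.  Amax and the sampled h2 values are arbitrary choices.
def rate_closed(P, h, a):
    h2 = sum(x*x for x in h)
    a2 = sum(x*x for x in a)
    ha = sum(x*y for x, y in zip(h, a))
    return 0.5 * math.log((1 + P*h2) / (a2 + P*(a2*h2 - ha*ha)))

def normalized_best_rate(P, h2, Amax=50):
    h = (1.0, h2)
    best = 0.0  # starting at 0 implements log^+
    for a1 in range(-Amax, Amax + 1):
        for a2 in range(-Amax, Amax + 1):
            if (a1, a2) != (0, 0):
                best = max(best, rate_closed(P, h, (a1, a2)))
    return best / (0.5 * math.log(1 + (1 + h2*h2) * P))

P = 1e4  # 40 dB
r_rational = normalized_best_rate(P, 0.5)
r_irrational = normalized_best_rate(P, math.sqrt(2))
print(round(r_rational, 2), round(r_irrational, 2))
```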

Performance of Lattice Codes

Theorem 1. For any K ≥ 2 and almost every H ∈ R^{K×K},

    lim_{P→∞} R_L(H, P) / ((1/2) log(P)) ≤ 2 / (1 + 1/K) ≤ 2.

Compare to:
the MIMO upper bound of K on the degrees of freedom of compute-and-forward
the decode-and-forward lower bound of K/2 on the degrees of freedom of compute-and-forward

Degrees of Freedom of Compute-and-Forward

Is compute-and-forward useful at high SNR? Yes!

Theorem 2. For any K ≥ 2 and almost every H ∈ R^{K×K},

    lim_{P→∞} C(H, P) / ((1/2) log(P)) = K.

More precisely,

    (K/2) log(P) − O(log^{K²/(1+K²)}(P)) ≤ C(H, P) ≤ (K/2) log(P) + O(1).

Compute-and-forward achieves the MIMO upper bound of K degrees of freedom
Invertible functions can be encoded/decoded distributedly at the same asymptotic rate as the centralized scheme
Compute-and-forward achieves twice the degrees of freedom of decode-and-forward

Lower Bound in Theorem 2

The channel computes noisy linear combinations with real coefficients
Lattice codes transform this into a system computing noiseless linear combinations with integer coefficients
The achievable scheme in Theorem 2 opts instead to implement these two functions separately:
Use signal alignment to transform real linear combinations into integer linear combinations
Use a linear outer code to transform noisy linear combinations into noiseless linear combinations

Outline:
The channel computes noisy real linear combinations: h_{1,1} m_1 + h_{1,2} m_2 + z_1
Signal alignment transforms these into noisy integer linear combinations a_{1,1} m_1 + a_{1,2} m_2 + z_1 (split each message into several submessages; use tools from Diophantine approximation)
A linear outer code transforms these into noiseless integer linear combinations: a_{1,1} m_1 + a_{1,2} m_2

Lower Bound in Theorem 2: Preliminaries

Groshev's Theorem. For any ε > 0 and almost all (h_1, h_2, ..., h_K) ∈ R^K,

    min_{q_i ∈ Z, not all zero} |h_1 q_1 + ... + h_K q_K| = Ω( (max_i |q_i|)^{1−K−ε} ).

Let x_k = A q_k with q_k ∈ {−Q, −Q+1, ..., Q−1, Q}, and assume we observe

    y = h_1 x_1 + ... + h_K x_K + z.

The minimum distance between distinct inputs (x_1, ..., x_K) ≠ (x'_1, ..., x'_K) is

    |Σ_{k=1}^K h_k (x_k − x'_k)| = A |Σ_{k=1}^K h_k (q_k − q'_k)| ≥ A · Ω(Q^{1−K}).

For A ≈ P^{(K−1)/(2K)} and Q ≈ P^{1/(2K)} we satisfy the power constraint and can remove the noise
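These exponents can be sanity-checked numerically (my own sketch, ignoring the ε in Groshev's Theorem): with A = P^{(K−1)/(2K)} and Q = P^{1/(2K)}, the transmit power (AQ)² equals P exactly, while the distance bound A·Q^{1−K} is constant in P.

```python
import math

# Sketch: with A ~ P^{(K-1)/(2K)} and Q ~ P^{1/(2K)}, the transmit power
# (A*Q)^2 stays on the order of P while the minimum-distance bound
# A * Q^{1-K} stays bounded away from zero (Groshev exponent, epsilon
# ignored).
K = 4
for P in (1e4, 1e8, 1e12):
    A = P ** ((K - 1) / (2 * K))
    Q = P ** (1 / (2 * K))
    power = (A * Q) ** 2          # ~ P
    min_dist = A * Q ** (1 - K)   # ~ constant in P
    print(f"P={P:.0e}  power/P={power/P:.3f}  min_dist={min_dist:.3f}")
```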

Lower Bound in Theorem 2: Signal Alignment

Consider a simple interference channel without noise:

    y_1 = x_1 + x_2
    y_2 = x_1 + h x_2

Set

    x_1 / A = q_{11} + h q_{12} + ...
    x_2 / A = q_{21} + h q_{22} + ...

This is received as

    y_1 / A = (q_{11} + q_{21}) + h (q_{12} + q_{22}) + ...
    y_2 / A = q_{11} + h (q_{21} + q_{12}) + h² q_{22} + ...

Groshev's Theorem separates the equations: each receiver recovers the integer coefficient of each power of h
A linear outer code drives the probability of error to zero
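A toy illustration of why the integer coefficients are recoverable (my own sketch; the gain h = √2 and the coefficient bound Q are hypothetical choices): because 1 and an irrational h are rationally independent, a single noiseless observation (q_{11} + q_{21}) + h (q_{12} + q_{22}) determines both integer coefficients, here found by exhaustive search.

```python
import math

# Sketch: receiver 1 sees y1/A = (q11+q21) + h*(q12+q22) with integer
# coefficients.  Since 1 and an irrational h are rationally independent,
# distinct coefficient pairs give distinct real values, so both integers
# can be recovered from the single number by bounded search.
h = math.sqrt(2)  # hypothetical irrational gain
Q = 5             # hypothetical per-message coefficient bound

def recover(y, tol=1e-9):
    for c0 in range(-2*Q, 2*Q + 1):       # sums of two coefficients
        for c1 in range(-2*Q, 2*Q + 1):
            if abs(y - (c0 + h*c1)) < tol:
                return c0, c1
    return None

q11, q21, q12, q22 = 3, -1, 2, 4
y1 = (q11 + q21) + h*(q12 + q22)
print(recover(y1))  # → (2, 6)
```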

Summary

Compute-and-forward as a communication strategy for wireless networks
Lattice codes achieve at most 2 degrees of freedom over a K × K channel
However, a different implementation of compute-and-forward achieves K degrees of freedom
This matches the MIMO upper bound of K degrees of freedom
Compute-and-forward achieves twice the degrees of freedom of decode-and-forward