Performance Evaluation of Dynamic Bandwidth Allocation Algorithm for XG-PON Using Traffic Monitoring and Intra-Scheduling

Man Soo Han
Optical Engineering Research Institute, Mokpo National University, Republic of Korea
mshan@mokpo.ac.kr

Abstract

An XG-PON (10-Gigabit-capable passive optical network) system is comprised of an OLT (optical line termination) and multiple ONUs (optical network units). In general, the ONUs report their requests to the OLT, and the OLT then performs a dynamic bandwidth allocation operation to assign non-overlapping transmission slots to the ONUs. In this paper, we consider an XG-PON system in which the ONUs do not report their requests to the OLT. Instead, the OLT estimates the status of each ONU by monitoring its upstream bandwidth usage. We propose a dynamic bandwidth allocation method for such a report-free XG-PON system. The proposed method allocates bandwidth to an ONU only if the ONU has fully used its previous upstream slot. In addition, the proposed method periodically allocates a probe bandwidth to each ONU to prevent service starvation. Using simulation, we evaluate the performance of the proposed method under balanced and unbalanced traffic.

Keywords: XG-PON, DBA, traffic monitoring, performance evaluation

1. Introduction

An XG-PON (10-Gigabit-capable passive optical network) system consists of an OLT (optical line termination) and multiple ONUs (optical network units). To support QoS (quality of service), an ONU maintains multiple queues for multiple service classes. In the XG-PON technology, a service class is known as a T-CONT (transmission container) type. When the OLT sends packets to the ONUs, a broadcasting mechanism is used: all ONUs receive the same packet from the OLT, and an ONU accepts the packet only if the packet's destination matches; otherwise, the packet is discarded. When the ONUs send packets to the OLT, only one ONU is allowed to transmit at a time. If two or more ONUs transmit packets to the OLT at the same time, a collision occurs between the packets. To prevent such collisions, the OLT performs a DBA (dynamic bandwidth allocation) operation to allocate a transmission time slot to each ONU [1]-[4].

For the DBA, the OLT has to know the request status of each queue of each ONU. Two methods can be used to learn the queue status. The first is an SR (status report) method, wherein an ONU explicitly reports its queue status using a DBRu (dynamic bandwidth report upstream) field. The second is a TM (traffic monitoring) method, wherein an ONU does not report its queue status and the OLT instead estimates the queue status from the bandwidth usage of the ONU. The SR method consumes upstream bandwidth for the DBRu field: the OLT has to allocate upstream bandwidth not only for data packets but also for the DBRu field of each ONU. In addition, the OLT has to manage the report status of all ONUs for the DBA operation [5]-[9].

In [5], a pipelined DBA scheme was introduced for XG-PON to save energy. The pipelined DBA consists of four pipeline stages that reduce the required DBA operation speed, and the energy saving increases as the DBA operation speed decreases.
The SR method is used, and the reports are forwarded from the first stage to the subsequent stages to increase the DBA efficiency. In [6], the OLT allocates the DBRu field to a queue whenever a grant is allocated to that queue, which increases the upstream bandwidth consumption. In [7] and [8], DBA methods are developed for a TWDM (time and wavelength division multiplexing) PON system that uses multiple upstream wavelengths; these methods also use the SR method to collect the queue status of each ONU.

The TM method is comparatively simple because the OLT does not need to manage the report status. To the best of our knowledge, however, the TM method has not been studied despite its simplicity [10]. In this paper, we propose a new DBA scheme that uses the TM method. The OLT monitors the bandwidth usage of an entire ONU, rather than of individual queues, to reduce the estimation error. A grant is therefore allocated per ONU, and the ONU performs intra-scheduling to distribute the grant slot to its queues according to their T-CONT types.

2. Traffic Monitoring

An XG-PON system consists of a single OLT and N ONUs. To support multiple T-CONT types, each ONU has multiple queues. T-CONT types 2, 3 and 4 are considered in this paper; since static bandwidth allocation is used for T-CONT type 1, we do not consider T-CONT type 1. Figure 1 shows the XG-PON system. R_N is the upstream bandwidth of an ONU and R_U is the user input bandwidth of an ONU. The symbol q_ij denotes the queue in ONU i with T-CONT type j. A packet arriving at an ONU from its users is stored in a queue according to the T-CONT type of the packet. In the upstream direction, only one ONU can transmit a packet to the OLT at a time. If two or more ONUs transmit packets to the OLT at the same time, a collision occurs, and the OLT performs a DBA operation to avoid such collisions. In this paper, each ONU does not report the requests of its queues to the OLT. Instead of receiving requests, the OLT monitors the bandwidth usage of each ONU to estimate the status of each ONU.

Figure 1. XG-PON System

Every operation of an XG-PON system is synchronized with a frame duration (FD) of 125 μs. In each FD, the OLT performs a DBA operation to produce the DBA result and then builds a bandwidth map (BWmap) to notify each ONU of the DBA result. During the DBA operation, the total size of the allocated transmission slots is limited by the size of the FD.
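For reference, the per-FD byte budget that the DBA can distribute follows from the nominal XG-PON upstream line rate of 2.48832 Gbps (rounded to 2.5 Gbps in this paper; the exact rate is taken from the G.987 series and is our assumption here, not stated in the text): 2.48832 Gbps x 125 μs / 8 = 38,880 bytes, which matches the initial value of the scheduling budget R used later in this section. A one-line check in C:

    /* Per-frame upstream byte budget: 2.48832-Gbps line rate, 125-us frame. */
    #include <stdio.h>

    int main(void)
    {
        double bytes = 2.48832e9 * 125e-6 / 8.0;   /* = 38,880 bytes */
        printf("upstream bytes per 125-us frame: %.0f\n", bytes);
        return 0;
    }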
It is obvious that an estimation error exists when we estimate the status of a queue of an ONU by monitoring the bandwidth usage of that queue. The estimation error causes inefficiency in the DBA operation. To reduce the error and to limit its effect, we monitor the bandwidth usage of an entire ONU instead of a single queue. The OLT estimates the total request of all queues of an ONU and produces a single grant for the ONU; the ONU then schedules the grant among its queues.

In the proposed method, ONU i has a service parameter A(i) and a timer T(i). The parameter A(i) is the number of bytes that can be allocated to ONU i in each FD, and the timer T(i) is the probe timer of ONU i in units of the FD. Let G(i) be the grant for ONU i, and let U(i) be the portion of G(i) actually used by ONU i. There is a time gap between G(i) and U(i): the OLT can check the used portion U(i) only when it receives the upstream frames from ONU i after the grant G(i) has been delivered to ONU i. The time gap is about 2 FDs if the distance between the OLT and an ONU is 20 km.

The proposed algorithm is based on two facts: (i) if U(i) = G(i), then the grant was not sufficient to serve all waiting frames and ONU i has remaining frames; (ii) if U(i) < G(i), then the grant was larger than the total size of all waiting frames and ONU i may or may not have newly arrived frames. In fact (i), for simplicity, we ignore the case in which the grant G(i) is exactly equal to the size of the waiting frames; in that case, ONU i may have no remaining frames.

In the first stage of the proposed algorithm, we allocate a grant to each ONU based on the used portion U(i). In the case of fact (i), we allocate A(i) as the grant of ONU i, since ONU i has remaining frames. In the case of fact (ii), we do not allocate any grant to ONU i, since it is not clear whether or not ONU i has a frame.

In the case of fact (i), if U(i) = G(i) = 0, no grant was allocated to ONU i, and we do not allocate any new grant to ONU i because we have no information about it. However, once ONU i gets no grant, i.e., G(i) = 0, ONU i would never get a grant again. To prevent this starvation, we use the timer T(i) and a flag F(i). If F(i) = 1 and G(i) = 0, then we allocate P(i) as the grant of ONU i, where P(i) is the probe bytes used to probe the status of ONU i. In this paper, we assume that P(i) < A(i). Once P(i) is allocated to ONU i, we set F(i) = 0. The timer T(i) is decreased by 1 in every FD. When the timer T(i) expires, the flag F(i) is set to 1 and T(i) is recharged to S(i), where S(i) is the probing interval of ONU i. The pseudo code of the first stage is given in Figure 2, where the variable R denotes the remaining upstream bytes. The initial value of R depends on the upstream bandwidth; for example, the initial value of R is 38,880 bytes if the upstream bandwidth is 2.5 Gbps.

    if (U(i) == G(i) and U(i) > 0) {
        G(i) = A(i);
    } else if (F(i) == 1) {
        G(i) = P(i);
        F(i) = 0;
    } else {
        G(i) = 0;
    }
    R = R - G(i);
    U(i) = 0;
    T(i)--;
    if (T(i) == 0) {
        F(i) = 1;
        T(i) = S(i);
    }

Figure 2. Pseudo code of first stage
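For concreteness, the first-stage decision of Figure 2 can be written as a small C routine. The sketch below is a minimal illustration that follows the pseudo code above; the struct layout, the field names and the example values used in main are our own assumptions for illustration, not taken from the paper.

    /* Minimal sketch of the first-stage grant decision (Figure 2).
     * Per-ONU state is kept in a struct (an assumption); r is the remaining
     * upstream byte budget of the current 125-us frame. */
    #include <stdio.h>

    struct onu_state {
        long g;   /* grant G(i) issued earlier (bytes)       */
        long u;   /* used portion U(i) of that grant (bytes) */
        long a;   /* allocation bytes A(i) per frame         */
        long p;   /* probe bytes P(i)                        */
        int  t;   /* probe timer T(i) in frames              */
        int  s;   /* probe interval S(i) in frames           */
        int  f;   /* probe flag F(i)                         */
    };

    /* Returns the new grant for one ONU and updates the remaining budget *r. */
    static long first_stage(struct onu_state *o, long *r)
    {
        if (o->u == o->g && o->u > 0) {      /* fact (i): grant fully used   */
            o->g = o->a;
        } else if (o->f == 1) {              /* probe to avoid starvation    */
            o->g = o->p;
            o->f = 0;
        } else {                             /* fact (ii): nothing to report */
            o->g = 0;
        }
        *r -= o->g;                          /* consume the frame budget     */
        o->u = 0;                            /* usage will be re-measured    */
        if (--o->t == 0) {                   /* recharge the probe timer     */
            o->f = 1;
            o->t = o->s;
        }
        return o->g;
    }

    int main(void)
    {
        long r = 38880;                      /* 2.5-Gbps upstream, 125-us FD */
        struct onu_state o = { .g = 400, .u = 400, .a = 400,
                               .p = 64, .t = 50, .s = 50, .f = 0 };
        long g = first_stage(&o, &r);
        printf("new grant = %ld bytes, remaining budget = %ld bytes\n", g, r);
        return 0;
    }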
We illustrate an example of the first stage of the proposed method. We consider a system with 4 ONUs and assume A(i) = 400 and P(i) = 64 for all i. Figure 3 depicts the example. For ONU 1, U(1) = G(1) = 400 bytes in Figure 3; since U(1) > 0, the new G(1) is given by A(1) = 400. For ONU 2, we have U(2) < G(2) in Figure 3, which is the case of fact (ii); therefore, the new G(2) is 0 bytes. For ONU 3, U(3) = G(3) = 0 bytes but F(3) = 1, so the new G(3) is P(3) = 64 bytes. Finally, the new G(4) is 0 bytes since U(4) = G(4) = 0 bytes and F(4) = 0.

            New G(i)   U(i)   G(i)   F(i)
    i = 1      400      400    400     0
    i = 2        0       50    400     0
    i = 3       64        0      0     1
    i = 4        0        0      0     0

Figure 3. Example of first stage

We now explain the second stage of the proposed algorithm. The scheduling bytes R remaining after the first stage are used in the second stage, where they are fairly allocated to all ONUs; that is, G(i) = G(i) + R/N for every ONU i, where N is the number of ONUs. To guarantee service fairness among the ONUs, the ONU number at which the scheduling operation starts is updated in each FD. For example, if the scheduling operation starts from ONU 1 in the current FD, then it will start from ONU 2 in the next FD. Figure 4 shows the pseudo code of the entire proposed method, including the second stage and the starting ONU number update. In Figure 4, "first stage for ONU i" represents the pseudo code of Figure 2, and the variable start_onu is the ONU number at which scheduling starts.

    // first stage, starting from start_onu
    for (i = start_onu; i <= N; i++) {
        first stage for ONU i;
    }
    for (i = 1; i <= start_onu - 1; i++) {
        first stage for ONU i;
    }
    // second stage
    for (i = 1; i <= N; i++) {
        G(i) = G(i) + R/N;
    }
    // starting ONU number update
    start_onu = start_onu + 1;
    if (start_onu > N) {
        start_onu = 1;
    }

Figure 4. Pseudo code of entire process of proposed method

When ONU i receives a grant G(i), the grant is served in the priority order of T-CONT types 2, 3 and 4. First, the queue of T-CONT type 2 transmits its packets using the grant G(i). Then the queue of T-CONT type 3 can send its packets using the remainder of the grant, and finally the queue of T-CONT type 4 can dispatch its packets using whatever remains.
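This intra-scheduling rule can be sketched as a strict-priority drain of the three queues. The following minimal C sketch models each queue as a pending-byte counter and ignores packet boundaries for simplicity; the structure and function names (onu_queues, serve_grant) and the example backlogs are illustrative assumptions, not part of the paper or the XG-PON standard.

    /* Minimal sketch of ONU intra-scheduling: a grant is consumed by the
     * T-CONT type 2 queue first, then type 3, then type 4.
     * Queues are modeled as pending-byte counters only (an assumption). */
    #include <stdio.h>

    struct onu_queues {
        long pending[3];   /* pending bytes of T-CONT types 2, 3, 4 */
    };

    /* Spend the grant (in bytes) on the queues in strict priority order
     * and return the number of bytes actually used, i.e. U(i). */
    static long serve_grant(struct onu_queues *q, long grant)
    {
        long used = 0;
        for (int t = 0; t < 3 && grant > 0; t++) {
            long sent = (q->pending[t] < grant) ? q->pending[t] : grant;
            q->pending[t] -= sent;
            grant -= sent;
            used += sent;
        }
        return used;
    }

    int main(void)
    {
        struct onu_queues q = { { 300, 200, 500 } };  /* illustrative backlogs */
        long u = serve_grant(&q, 400);                /* grant of 400 bytes    */
        printf("used %ld bytes, leftover: %ld %ld %ld\n",
               u, q.pending[0], q.pending[1], q.pending[2]);
        return 0;
    }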
3. Performance Evaluation

In this section, we evaluate the performance of the proposed algorithm under balanced and unbalanced traffic. We consider an XG-PON system with N = 16 and assume that the upstream bandwidth R_N is 2.5 Gbps. The maximum distance between the OLT and an ONU is 20 km. In each ONU, the queue size for each T-CONT type is 2 Mbytes, and the input line rate R_U is 2 Mbps. We set A(i) = 396 bytes, P(i) = 64 bytes and S(i) = 5 FDs for every ONU i.

For performance comparison, we compare the proposed method with the immediate allocation with colorless grant (IACG) algorithm of [5], [9]. For the IACG method, we set AB = 7,812 and SI = 5 for T-CONT type 2, which is equivalent to 100 Mbps, where AB denotes the available service bytes during a service interval (SI) and the SI is measured in units of the FD. For T-CONT type 3, we set AB = 15,624 and SI = 10, which again corresponds to 100 Mbps. For T-CONT type 4, we likewise set AB = 15,624 and SI = 10, which is equivalent to 100 Mbps. We do not use the colorless grant scheme of the IACG method, for comparison purposes.

We use the self-similar traffic model of [5], in which each ONU is fed by a number of Pareto-distributed on-off processes. The shape parameters of the on and off periods are set to 1.4 and 1.2, respectively. For the packet size distribution, we use the tri-modal distribution in which the packet sizes are 64, 500 and 1,500 bytes and their load fractions are 60%, 20% and 20%, respectively, as in [5].
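As a concrete illustration of this traffic model, the sketch below samples packet sizes from the tri-modal distribution and on/off period lengths from Pareto distributions with the shape parameters given above. The inverse-transform sampler and the minimum period length of 1.0 are illustrative assumptions; the paper does not specify how the processes are generated.

    /* Minimal sketch of the traffic model used in the simulation:
     * tri-modal packet sizes (64/500/1500 bytes at 60/20/20 %) and
     * Pareto-distributed on/off period lengths (shapes 1.4 and 1.2).
     * The minimum period length x_m = 1.0 is an illustrative assumption. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Inverse-transform sampling of a Pareto(shape, x_m) variate. */
    static double pareto(double shape, double x_m)
    {
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* u in (0,1) */
        return x_m / pow(u, 1.0 / shape);
    }

    /* Tri-modal packet size: 64 B (60 %), 500 B (20 %), 1500 B (20 %). */
    static int packet_size(void)
    {
        double u = rand() / ((double)RAND_MAX + 1.0);
        if (u < 0.6) return 64;
        if (u < 0.8) return 500;
        return 1500;
    }

    int main(void)
    {
        srand(1);
        double on  = pareto(1.4, 1.0);   /* on-period length (arbitrary unit) */
        double off = pareto(1.2, 1.0);   /* off-period length                 */
        printf("on = %.2f, off = %.2f, first packet = %d bytes\n",
               on, off, packet_size());
        return 0;
    }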
First, we consider balanced traffic. Traffic is balanced if all ONUs have an identical traffic load and each T-CONT type has an identical load fraction; that is, the probability that an incoming packet belongs to a given T-CONT type is equal for all T-CONT types. Increasing the input load of each ONU from 0.1 to 0.99, we evaluate the performance of each algorithm. For each plot point, the simulation is run until the total number of packets transmitted by the ONUs exceeds 10^9 for each algorithm.

Figure 5 shows the mean delay for T-CONT type 2; in Figure 5, TM denotes the proposed method. The mean delays for T-CONT types 3 and 4 are shown in Figure 6. Figure 7 shows the delay variance for T-CONT type 2, and the delay variances for T-CONT types 3 and 4 are shown in Figure 8. As the figures show, the proposed method is better than the IACG method for T-CONT type 2 but worse than the IACG method for T-CONT types 3 and 4. The main reason is that the queue of T-CONT type 2 has the highest priority in the intra-scheduling of an ONU. Using traffic monitoring, the OLT can only know whether or not an ONU has waiting packets; it does not know the exact amount of waiting traffic, so the performance of the proposed method for T-CONT types 3 and 4 is worse than that of the IACG method.

Figure 5. Mean delay for T-CONT type 2 under balanced traffic

Figure 6. Mean delay for T-CONT types 3 and 4 under balanced traffic
Figure 7. Delay variance for T-CONT type 2 under balanced traffic

Figure 8. Delay variance for T-CONT types 3 and 4 under balanced traffic

We now consider unbalanced traffic, which is more realistic than balanced traffic. For ONU i with i <= N/2, the input load is fixed at 0.4. Increasing the input load of each ONU i with i > N/2 from 0.1 to 0.99, we evaluate the performance of each algorithm. Note that each T-CONT type of each ONU still has an identical load fraction although the traffic is unbalanced. For each plot point, the simulation is run until the total number of packets transmitted by the ONUs exceeds 10^9 for each algorithm.
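The unbalanced load pattern can be expressed in a few lines. The sketch below builds the per-ONU offered-load vector for a single sweep point; the array layout and the chosen sweep value of 0.7 are illustrative assumptions.

    /* Minimal sketch of the unbalanced load pattern: ONUs 1..N/2 carry a
     * fixed load of 0.4, while ONUs N/2+1..N carry the swept load. */
    #include <stdio.h>

    #define N_ONU 16

    int main(void)
    {
        double load[N_ONU + 1];          /* 1-based, as in the paper          */
        double swept = 0.7;              /* one point of the 0.1..0.99 sweep  */

        for (int i = 1; i <= N_ONU; i++)
            load[i] = (i <= N_ONU / 2) ? 0.4 : swept;

        for (int i = 1; i <= N_ONU; i++)
            printf("ONU %2d: offered load %.2f\n", i, load[i]);
        return 0;
    }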
Figure 9 shows the mean delay for T-CONT type 2 under unbalanced traffic. The mean delays for T-CONT types 3 and 4 are shown in Figure 10. Figure 11 shows the delay variance for T-CONT type 2, and the delay variances for T-CONT types 3 and 4 are shown in Figure 12. As the figures show, the proposed method is better than the IACG method for T-CONT type 2, while it is worse than the IACG method for T-CONT types 3 and 4 under unbalanced traffic.

Figure 9. Mean delay for T-CONT type 2 under unbalanced traffic

Figure 10. Mean delay for T-CONT types 3 and 4 under unbalanced traffic
Figure 11. Delay variance for T-CONT type 2 under unbalanced traffic

Figure 12. Delay variance for T-CONT types 3 and 4 under unbalanced traffic

4. Conclusions

This paper proposes a new dynamic bandwidth allocation scheme for XG-PON. In the proposed scheme, an ONU does not explicitly report its queue status to the OLT. Instead, the OLT monitors the bandwidth usage of each ONU to estimate the queue status of that ONU. The OLT allocates new bandwidth to an ONU when the previously allocated bandwidth has been fully used. In addition, the OLT periodically allocates a probe bandwidth to an ONU to check the queue status of the ONU.
Using simulations, we compare the proposed scheme with the existing IACG method under balanced and unbalanced traffic.

Acknowledgements

This paper is a revised and expanded version of a paper entitled "Dynamic Bandwidth Allocation Algorithm for XG-PON with Traffic Monitoring and Intra-Scheduling" presented at the Seventh Workshop on Networking and Communication, Republic of Korea, 19-21 August 2015.

References

[1] ITU-T Rec. G.984.1, "Gigabit-capable passive optical networks (GPON): General characteristics", (2008).
[2] ITU-T Rec. G.984.3, "Gigabit-capable passive optical networks (G-PON): Transmission convergence layer specification", (2008).
[3] ITU-T Rec. G.987.1, "10-Gigabit-capable passive optical networks (XG-PON): General requirements", (2010).
[4] ITU-T Rec. G.987.3 Rev. 2, "10-Gigabit-capable passive optical networks (XG-PON): Transmission convergence (TC) specifications", (2010).
[5] M. S. Han, "Pipelined Architecture of Dynamic Bandwidth Allocation for Energy Efficiency in XG-PON", Contemporary Engineering Sciences, vol. 7, no. 24, (2014).
[6] M. S. Han, H. Yoo and D. S. Lee, "Development of Efficient Dynamic Bandwidth Allocation Algorithm for XGPON", ETRI Journal, vol. 35, no. 1, (2013).
[7] A. Dixit, B. Lannoo, D. Colle, M. Pickavet and P. Demeester, "Dynamic Bandwidth Allocation with Optimal Wavelength Switching in TWDM-PONs", 15th International Conference on Transparent Optical Networks (ICTON), (2013).
[8] M. S. Han, "Performance Evaluation of Dynamic Bandwidth Allocation Algorithm for TWDM PON", Journal of Convergence Information Technology, vol. 8, no. 13, (2013).
[9] M. S. Han, H. Yoo, B.-Y. Yoon, B. Kim and J.-S. Koh, "Efficient Dynamic Bandwidth Allocation for FSAN-Compliant GPON", OSA Journal of Optical Networking, vol. 7, no. 8, (2008).
[10] M. S. Han, "Dynamic Bandwidth Allocation Algorithm for XG-PON with Traffic Monitoring and Intra-Scheduling", Advanced Science and Technology Letters, vol. 18, (2015).