- Authors : Oladeji F. A, Onadokun I. O, Oyetunji M. O
- Paper ID : IJERTV2IS60233
- Volume & Issue : Volume 02, Issue 06 (June 2013)
- Published (First Online): 10-06-2013
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Evaluating Buffering Schemes for Next-Generation Internet QoS Router
By Oladeji F. A
Department of Computer Sciences, University of Lagos, Lagos
Onadokun I. O
Centre for Information Technology and Management, Yaba College of Technology, Lagos
Oyetunji M. O
Bowen University, Iwo, Oyo State
Abstract
This paper presents a performance evaluation of buffer management schemes for routers in a Differentiated Services (DiffServ) domain. Two variants of Random Early Detection (RED) proposed for multi-queue, traffic-penalizing networks were considered: RED In-profile/Out-of-profile Coupled (RIO-C) and RED In-profile/Out-of-profile Decoupled (RIO-D). To implement these variants, the weighted round robin scheduling discipline was employed. The experiments were conducted in the ns-2 network simulator. Two physical queues were maintained at the core router, holding UDP traffic generated by constant bit rate traffic agents. Simulations were run using different RED thresholds for the virtual queues. Analysis of the traced data revealed that RIO-C has a lower loss rate (0%) than RIO-D (0.08%) for in-profile traffic. On the other hand, in terms of which scheme actually applies a penalty to violating traffic sources, RIO-D proves better. The study therefore concludes that if QoS is to be supported in the next-generation Internet with strict adherence to traffic profiles, RIO-D can be depended upon.
Keywords: Differentiated Service, Random Early Detection, RIO-C, RIO-D, Quality of Service
1.0 Introduction
The Internet can be described as a network of networks linking computers that share the TCP/IP protocol suite. This protocol suite provides an efficient means of data exchange for applications that do not require firm performance guarantees. It offers a best-effort service, doing its best to deliver packets with no guarantee or assurance of Quality of Service (QoS). The service model is simple and low-cost, and it pushes much of the traffic management complexity to the end-systems.
New applications such as multimedia, voice over IP (VoIP), e-business and other services now routed over the Internet have led to an increase in network traffic. Apart from the traffic load, these applications require differentiated treatment of their packets rather than the one-size-fits-all best-effort model of TCP/IP. Efforts by the Internet Engineering Task Force (IETF) to extend TCP/IP to accommodate these requirements led to the introduction of the Differentiated Services (DiffServ) architecture [1], which allows service providers to allocate different service qualities to different users of the Internet, offering different levels of treatment to traffic based on user requirements. These qualities are referred to as QoS, and a TCP/IP stack that incorporates them would run on the next-generation Internet, since current QoS test-beds are still piecemeal experiments.
In order to implement DiffServ, traffic must be measured and buffered into service treatment groups called per-hop behaviours (PHBs) [2]. The IETF allows up to 64 different service groups, and each service group receives the same treatment in transit [3]. The next-generation Internet is expected to follow the conditioning procedure of marking/shaping traffic at the edge systems and buffering/forwarding at the core systems based on the marked code points. DiffServ also requires that a traffic source which violates the agreed profile be penalised, either by dropping the excess traffic or by downgrading it to a lower service treatment class [4]. This calls for active queue management in the future Internet. In [5], Random Early Detection (RED) was recommended as an active queue management scheme and has been used in Internet gateways since 1993. The need to implement differential buffering at the network core, to support QoS and penalty-based forwarding, led to the modification of RED into the RED In-profile/Out-of-profile (RIO) scheme. In RIO, a physical queue of packets consists of packets that are compliant with the service level agreement (called in-profile) and packets that violate the profile (called out-of-profile). Thus, as traffic arrives at a core router, packets are moved to their respective queues based on the code points attached by the traffic conditioning algorithm at the edge system.
This paper surveys and simulates variants of RED that have been proposed to support differentiated services, in order to predict which one would provide better service for the future Internet. The rest of the paper is organized as follows: Section 1 introduces the issues at hand; Section 2 summarizes related work on differential buffering schemes; Section 3 gives the details of the experiments (simulation setup) for the RED variants; Section 4 analyses the traced data from the simulations and compares their performance using the packet drop rate; Section 5 presents conclusions and recommendations.
2.0 Related Works
Over the years, various buffer management schemes have been developed by researchers in order to solve the traffic control and TCP congestion control problems. A buffer is the memory space in a network node used for temporary storage of packets before they are forwarded on a link. A buffer management scheme is the algorithm that determines which items are admitted into the buffer, especially when its size is finite. For instance, when a switching device such as a router is busy, the spaces for keeping incoming packets are buffers. The simplest buffering scheme is drop-tail, in which all further incoming packets are dropped once the buffer is full. While deliberating on how to implement an active buffer management scheme to support differentiated services, [6] suggested that buffers should not be allowed to fill up before admission control is applied. Moreover, implementing QoS on a multi-queue platform calls for differential buffering of traffic packets. On such a platform there is expected to be an agreement between the traffic sources and the network: while the traffic source specifies its desired quality of service from the
network (the service level), the network determines the amount of traffic the source may inject into the network (the service profile). This is termed a service level agreement between the traffic source and the protocol driving the network [4].
In order to implement DiffServ, packets are expected to be buffered into different service treatment queues. Researchers have proposed various buffering mechanisms to achieve this requirement, although they are yet to be widely adopted in the Internet. Among such schemes are RED In/Out profile [7], Weighted RED [8] and the Drop Tail algorithm. RED, according to [5], attempts to avoid the situation in which useful packets are dropped only when the buffer is already full, and the global synchronisation that results. To detect incipient congestion it uses two important thresholds, the minimum (minth) and the maximum (maxth). RED [5] was designed to minimize the loss of important packets, avoid global synchronization of flows, maintain high link utilization and remove bias against bursty sources. When a packet arrives, RED proceeds in this order:
- Calculate the average queue size.
- If the average queue size is below the minth threshold, queue the arriving packet.
- If the average queue size is between the minth and maxth thresholds, the packet is either dropped or queued, depending on the configured maximum drop probability (maxp).
- If the average queue size is greater than the maxth threshold, the packet is dropped.
As shown in Fig. 1 below, all traffic arriving while the minimum threshold has not been reached is accepted; between the minimum and maximum thresholds, admission into the buffer is subject to a drop probability; above the maximum threshold everything is dropped. This algorithm was incorporated into TCP/IP gateways in 1993 and the details are given in [5]. A minimal sketch of this enqueue logic is given after the figure.

[Figure 1: RED buffer regions along the buffer size axis, with the min and max thresholds delimiting the accept-all, accept-with-probability and drop-all regions.]

Figure 1. RED logic for buffer management
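As a rough illustration (not ns-2 source code), the following Tcl sketch implements the enqueue decision just described; the threshold values match the in-profile settings used later in Section 3, and the EWMA weight wq is an assumed example value.

# Minimal sketch of the RED enqueue decision described above.
set minth 20        ;# minimum threshold (packets)
set maxth 40        ;# maximum threshold (packets)
set maxp  0.02      ;# maximum drop probability
set wq    0.002     ;# assumed EWMA weight for the average queue size
set avg   0.0       ;# running average queue size

# Called for every arriving packet with the instantaneous queue length;
# returns "accept" or "drop".
proc red_decide {qlen} {
    global minth maxth maxp wq avg
    # Step 1: update the exponentially weighted average queue size.
    set avg [expr {(1.0 - $wq) * $avg + $wq * $qlen}]
    if {$avg < $minth} {
        return accept                   ;# Step 2: below minth, always queue
    } elseif {$avg < $maxth} {
        # Step 3: between the thresholds, drop with a probability that
        # grows linearly from 0 to maxp as avg approaches maxth.
        set p [expr {$maxp * ($avg - $minth) / ($maxth - $minth)}]
        if {rand() < $p} { return drop } else { return accept }
    } else {
        return drop                     ;# Step 4: above maxth, drop the packet
    }
}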
An attempt to adapt the RED algorithm to the DiffServ platform led to its extension into RED in-profile and RED out-of-profile, simply called the RIO scheme [7]. In RIO, packets marked by the edge routine with the same code point are meant to receive the same treatment from the network, but a marginal penalty exists among these packets when their source exceeds the agreed profile. Packets sent beyond the agreed committed information rate are considered out-of-profile even though they carry the same code point as packets within the committed rate, and such out-of-profile packets are buffered differently. These internal queues for packets with the same code point are called virtual queues. The literature also reveals two approaches to RIO buffering, called RIO-Coupled (RIO-C) and RIO-Decoupled (RIO-D) [9].
In RIO-C, the probability of dropping an out-of-profile packet in a physical queue is based on the average queue lengths of all the virtual queues, while the probability of dropping an in-profile packet is based solely on the weighted average length of its own virtual queue. RIO-C derives its name from this coupled relationship in the average queue calculation. In RIO-D, by contrast, the probability of dropping an out-of-profile packet in a physical queue is based only on the size of its own virtual queue: RIO-D calculates the average queue length for each virtual queue independently. For example, the average queue lengths for green, yellow and red packets are calculated using the numbers of green, yellow and red packets in their respective virtual queues. The strictness or leniency of RED then depends on the parameter settings chosen for each queue. A sketch of the two drop tests is given below.
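The Tcl sketch below is a minimal illustration of the difference, assuming one physical queue with two precedence levels (virtual queue 0 = in-profile, virtual queue 1 = out-of-profile); the procedure names and the example averages are ours, not taken from ns-2.

# avg(i) holds the averaged length of virtual queue i of one physical queue.
array set avg {0 12.0 1 7.0}               ;# illustrative values only

# In both variants, in-profile packets are judged against the average
# of their own virtual queue.
proc avg_for_in_profile {} {
    global avg
    return $avg(0)
}

# Out-of-profile packets: RIO-C couples the averages of all virtual
# queues of the physical queue, RIO-D uses only its own virtual queue.
proc avg_for_out_profile {mode} {
    global avg
    if {$mode eq "RIO-C"} {
        return [expr {$avg(0) + $avg(1)}]  ;# coupled average
    } else {
        return $avg(1)                     ;# decoupled (RIO-D)
    }
}

# The value returned here is then compared against the out-of-profile
# thresholds (minth = 10, maxth = 20 in Section 3) exactly as in the
# RED sketch above.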
Another differentiated services buffering scheme is Weighted RED (WRED). WRED is an extension of RED with congestion avoidance capabilities in which different queues may have different buffer occupancy thresholds before random dropping starts, and different dropping probabilities, all based on a single queue length [8]. The scheduling policy used to pick packets from a multi-queue platform also determines the strength of the buffer management routine. Packet scheduling is the process of choosing which packets stored in a buffer should be transmitted over a given link, and the choice must be made in a very small period of time relative to the packet transmission time. Giving higher priority to one queue comes at the expense of the others, i.e. giving more benefit to one service queue penalizes other queues because their packets wait longer to be serviced. This paper uses Weighted Round Robin (WRR) as the scheduling policy. WRR is designed as an extension of the round robin discipline that differentiates the quantum of bandwidth reserved for each service queue: WRR serves a number of packets from a service queue according to its service quantum or weight.
WRR is widely accepted by researchers because it ensures that every service queue has access to at least some configured amount of network bandwidth, thereby avoiding bandwidth starvation [10, 11, 12]. The sketch below illustrates the idea.
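The following Tcl sketch is ours and purely conceptual: queues are visited in turn and up to "weight" packets are served from each before moving on, so every queue receives at least its configured share.

# Conceptual weighted round robin over two service queues; the queue
# contents and weights are illustrative.
array set queues  {0 {p1 p2 p3 p4 p5} 1 {q1 q2 q3}}
array set weights {0 3 1 1}                ;# queue 0: 3 packets/round, queue 1: 1

proc wrr_round {} {
    global queues weights
    foreach q [lsort [array names queues]] {
        set served 0
        while {$served < $weights($q) && [llength $queues($q)] > 0} {
            set pkt        [lindex $queues($q) 0]
            set queues($q) [lrange $queues($q) 1 end]
            puts "serve queue $q: $pkt"    ;# transmit one packet from queue q
            incr served
        }
    }
}
wrr_round    ;# one round: p1 p2 p3 from queue 0, then q1 from queue 1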
3.0 Experiments with RIO-C and RIO-D

Buffering mechanisms are best assessed by the strictness of their algorithms and the resulting loss rates. In this paper, the RIO-C and RIO-D algorithms are simulated using the differentiated services module of network simulator 2 (ns-2). The topology used is shown in Fig. 2. Two UDP sources (S1 and S2) are configured to send traffic to the same destination D through edge router E1, core router C1 and edge router E2.

[Figure 2: Simulation topology. S1 and S2 connect through edge E1 and core C1 to edge E2 and destination D; links have a capacity of 10 Mbps and a propagation delay of 5 ms, except the C1-E2 link, which is 5 Mbps with a 5 ms delay.]

Figure 2. Simulation topology
According to the DiffServ architecture [1], an edge router conditions and classifies packets using the associated differentiated services code point and the agreed traffic profile, while a core router only buffers and schedules packets based on the markings set by the edge router. In the above setting, queues build up at both the edge E1 and the core C1 because the packet arrival rate at those nodes exceeds the available bandwidth. The links are provisioned with 10 Mbps of bandwidth and a propagation delay of 5 ms, except that from the core router C1 to the edge E2 the bandwidth is set to 5 Mbps, so as to create traffic burstiness and to study the effect of congestion at the core router C1. The UDP flows are generated using a constant bit rate (CBR) traffic generator. A sketch of such a topology declaration in ns-2 is given below.
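This sketch shows how such a topology is commonly declared in an ns-2 DiffServ script; the node variable names and CBR rate are our assumptions, while the bandwidths and delays follow Figure 2.

set ns [new Simulator]
set s1 [$ns node]; set s2 [$ns node]           ;# UDP sources
set e1 [$ns node]; set c1 [$ns node]; set e2 [$ns node]
set d  [$ns node]                              ;# destination

# Access and edge/core links (10 Mbps, 5 ms); DiffServ queues sit on
# the E1-C1 and C1-E2 links.
$ns duplex-link  $s1 $e1 10Mb 5ms DropTail
$ns duplex-link  $s2 $e1 10Mb 5ms DropTail
$ns simplex-link $e1 $c1 10Mb 5ms dsRED/edge
$ns simplex-link $c1 $e1 10Mb 5ms dsRED/core
$ns simplex-link $c1 $e2 5Mb  5ms dsRED/core   ;# bottleneck towards E2
$ns simplex-link $e2 $c1 5Mb  5ms dsRED/edge
$ns duplex-link  $e2 $d  10Mb 5ms DropTail

# CBR traffic over UDP from S1 to D (the rate is an assumed example).
set udp1 [new Agent/UDP]
$ns attach-agent $s1 $udp1
set cbr1 [new Application/Traffic/CBR]
$cbr1 attach-agent $udp1
$cbr1 set rate_ 4Mb
$cbr1 set packetSize_ 1000
set sink1 [new Agent/Null]
$ns attach-agent $d $sink1
$ns connect $udp1 $sink1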
Edge devices in ns-2 classify, police and mark packets based on their associated code points. A token bucket meter/policer is used for traffic conditioning: the meter monitors the sending rate of each source, determines whether a packet is in-profile or out-of-profile, and marks it accordingly. Two experiments were conducted, one for RIO-C and the other for RIO-D. Traffic from edge E1 to core C1 was grouped into two physical queues, each having two virtual queues (precedence levels). The buffer mechanisms used were RIO-C and RIO-D with the queue configuration:
$qE1C configQ 0 0 20 40 0.02
$qE1C configQ 0 1 10 20 0.10
These lines configure physical queue 0, virtual queue 0 (in-profile) with a minimum threshold of 20 packets, a maximum threshold of 40 packets and a maximum drop probability (RED maxp) of 0.02, while traffic in physical queue 0, virtual queue 1 (out-of-profile) uses a minimum threshold of 10 packets, a maximum threshold of 20 packets and a maximum drop probability of 0.10. The same settings were used at the core C1 for RIO-D. The sketch below places these lines in the context of a typical ns-2 DiffServ queue configuration.
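In practice the two configQ lines are part of a fuller dsRED queue setup that also defines the policing rule, the code points and the WRR scheduler. The sketch below assumes the standard ns-2 DiffServ (dsRED) interface; the committed rate, bucket size, packet size, code points (10 in-profile, 11 out-of-profile) and WRR weights are our example values, not taken from the paper.

# Edge queue E1 -> C1: policing and marking.
set qE1C [[$ns link $e1 $c1] queue]
$qE1C meanPktSize 1000
$qE1C set numQueues_ 2
$qE1C setNumPrec 2
# Token bucket policy: code point 10, example CIR 3 Mbps, bucket 10000 bytes;
# out-of-profile packets are downgraded to code point 11.
$qE1C addPolicyEntry [$s1 id] [$d id] TokenBucket 10 3000000 10000
$qE1C addPolicerEntry TokenBucket 10 11
$qE1C addPHBEntry 10 0 0       ;# code point 10 -> physical queue 0, precedence 0
$qE1C addPHBEntry 11 0 1       ;# code point 11 -> physical queue 0, precedence 1
$qE1C configQ 0 0 20 40 0.02   ;# in-profile thresholds (as above)
$qE1C configQ 0 1 10 20 0.10   ;# out-of-profile thresholds (as above)
# Physical queue 1 is configured analogously (values not given in the paper).

# Core queue C1 -> E2: same thresholds, RIO-C or RIO-D, WRR scheduling.
set qC1E2 [[$ns link $c1 $e2] queue]
$qC1E2 meanPktSize 1000
$qC1E2 set numQueues_ 2
$qC1E2 setNumPrec 2
$qC1E2 setMREDMode RIO-C       ;# or RIO-D for the second experiment
$qC1E2 addPHBEntry 10 0 0
$qC1E2 addPHBEntry 11 0 1
$qC1E2 configQ 0 0 20 40 0.02
$qC1E2 configQ 0 1 10 20 0.10
$qC1E2 setSchedularMode WRR    ;# weighted round robin across physical queues
$qC1E2 addQueueWeights 0 3     ;# example weights
$qC1E2 addQueueWeights 1 1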
3.1 Performance Metrics
The only parameter this paper uses to assess the buffering schemes is the packet drop rate, or loss rate. The packet loss rate is the fraction (expressed as a percentage) of the packets offered to a given link or destination during an interval of time that are never delivered [13]. Such packets are said to be lost in transit, i.e.
Packet Loss Rate (%) = (NL / NA) × 100        (1)
where NL and NA are the number of lost packets and the total number of packets that arrived, respectively. TCP/IP uses the fraction of lost packets to regulate its transmission rate: if this fraction becomes large, the transmitting host reduces the rate at which it injects packets into the network [14].
4.0 Results and Discussion
After running the simulations with the experimental setup described in the previous section, the events on the link between core C1 and edge E2 were traced to a file. Analysing this trace yields the packet drop (loss) rate statistics presented in Tables 1 and 2; a sketch of the counting procedure follows.
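As an illustration of how such statistics can be extracted, this short Tcl sketch counts enqueue and drop events for the C1-E2 link in a default-format ns-2 trace and applies Eq. (1); the trace file name and the node ids for C1 and E2 (matching the node creation order in the topology sketch above) are assumptions.

# Count packets offered to and dropped at the C1 -> E2 queue.
set arrived 0
set lost    0
set f [open "out.tr" r]                ;# assumed trace file name
while {[gets $f line] >= 0} {
    set ev   [lindex $line 0]          ;# event: +, -, r or d
    set from [lindex $line 2]          ;# source node of the hop
    set to   [lindex $line 3]          ;# destination node of the hop
    if {$from == 3 && $to == 4} {      ;# assumed ids: 3 = C1, 4 = E2
        if {$ev eq "+"} { incr arrived }
        if {$ev eq "d"} { incr lost }
    }
}
close $f
# Eq. (1): loss rate as a percentage of the packets offered to the link.
if {$arrived > 0} {
    puts [format "loss rate = %.2f%%" [expr {100.0 * $lost / $arrived}]]
}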
Table 1. Overall packet loss rate for RIO-C and RIO-D (WRR scheduler)

| RED Variant | General drop % | RED/Early drop % |
| RIO-C | 30.28 | 7.16 |
| RIO-D | 29.27 | 8.15 |

Table 2. The Strictness Analysis of RIO-C and RIO-D (WRR scheduler, RED early drops; loss rates in %)

| RED Variant | Physical Queue 0: Compliant | Physical Queue 0: Non-Compliant | Physical Queue 1: Compliant | Physical Queue 1: Non-Compliant |
| RIO-C | 0 | 3.08 | 0 | 15.60 |
| RIO-D | 0.08 | 5.22 | 0 | 15.56 |

[Figure 3. The Strictness Analysis of RIO-C and RIO-D (chart of the loss rates in Table 2).]
Examining the results in Table 2 and Figure 3: of the 2508 packets that arrived at physical queue 0 of the core router, no compliant packet was dropped (0% loss rate) while 231 violating packets (3.08% loss rate) were dropped under the RIO-C buffer scheme. Under the RIO-D scheme, the same 2508 arrivals at physical queue 0 produced 2 lost compliant packets (0.08% loss rate) and 391 lost non-compliant packets (5.22% loss rate). Similarly, of the 2508 packets that arrived at physical queue 1 of the core router, no compliant packet was dropped (0% loss rate) while 1168 violating packets (15.60% loss rate) were dropped under RIO-C; under RIO-D, physical queue 1 had 0 lost compliant packets (0% loss rate) and 1165 lost non-compliant packets (15.56% loss rate). These results are plotted in Figure 3 above.
Overall, the study found that in terms of strictness in penalizing violating traffic sources, RIO-D proves better. The study therefore concludes that if QoS is to be supported in the next-generation Internet with strict adherence to traffic profiles, RIO-D offers the greater advantage to law-abiding sources.
5.0 Conclusion
This paper has studied the performance of two variants of RED, RIO-C and RIO-D, proposed for managing limited buffers in a differentiated services network domain so as to make the future TCP/IP protocol a full-fledged quality of service provider. The performance indicators used in the study are the strictness of each scheme and its packet drop rate. Simulation runs were conducted in the ns-2 network simulator, in which the RED in-profile and out-of-profile algorithms were implemented. The analysis of the traced simulation data shows that RIO-D is better at enforcing compliance in traffic management on the network, while in terms of overall dropping rate RIO-C is better. Therefore, if QoS is to be supported in the next-generation Internet, RIO-D would be of advantage to sources that are TCP/IP friendly.
6.0 References

[1] Blake S., Black D., Carlson M., Wang Z. and Weiss W. (1998), An Architecture for Differentiated Services, IETF RFC 2475.
[2] Jacobson V., Nichols K. and Poduri K. (1999), An Expedited Forwarding PHB, IETF RFC 2598.
[3] Nichols K. et al. (1998), Differentiated Services Operational Model and Definitions, Internet draft <draft-nichols-dsopdef-00.txt>, February 1998.
[4] Park L.-T., Baek J.-W. and Hong J. (2001), Management of service level agreements for multimedia Internet service using a utility model, IEEE Communications Magazine.
[5] Floyd S. and Jacobson V. (1993), Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking.
[6] Braden R., Clark D. et al. (1998), Recommendations on Queue Management and Congestion Avoidance in the Internet, IETF RFC 2309.
[7] Clark D. and Wroclawski J. (1997), An Approach to Service Allocation in the Internet, Internet draft <draft-clark-diff-svc-alloc-00.txt>, July 1997.
[8] Cisco Systems, Configuring Weighted Random Early Detection, http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cgcr/qos_c/qcpart3/qcwred.htm.
[9] Elloumi O., Snodder D. and Pauwels K. (1999), Usefulness of three drop precedences in the Assured Forwarding service, Internet draft <draft-elloumi-diffserv-threevstwo-00.txt>, July 1999.
[10] Seddigh N., Nandy B., Pieda P., Hadi Salim J. and Chapman A. (1998), An experimental study of Assured services in a DiffServ IP QoS network, Proceedings of the SPIE Symposium on QoS Issues Related to the Internet, Boston, November 1998.
[11] Luciano L., Enzo M. and Giovanni S. (2004), Tradeoffs between low complexity, low latency, and fairness with deficit round robin schedulers, IEEE/ACM Transactions on Networking, Vol. 12(4), pp. 375-385.
[12] Hideyuki S., Makiko Y., Ruixue F. and Hiroshi S. (1997), An improvement of WRR cell scheduling in ATM networks, IEEE, 1997.
[13] Van Mieghem P. (2006), Performance Analysis of Communications Networks and Systems, Cambridge University Press, New York.
[14] Stevens W. (1997), TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms, IETF RFC 2001, January 1997.