Buffer Overflows of Merging Streams

March 26, 2003
2:50 pm - 4:00 pm
Halligan 111

Abstract

We consider a network merging streams of packets with different quality of service (QoS) levels, where packets are transported from input links to output links via multiple merge stages. Each merge node is equipped with a finite buffer, and since the bandwidth of a link outgoing from a merge node is in general smaller than the sum of the incoming bandwidths, overflows may occur. QoS is modeled by assigning a positive value to each packet, and the goal of the system is to maximize the total value of the packets transmitted on the output links. We assume that each buffer runs an independent local scheduling policy, and we analyze two special cases: FIFO policies, which must deliver packets in the order they were received, and bounded-delay policies, which must deliver each packet by its prescribed deadline. For the FIFO model, we show that a simple local on-line algorithm does essentially as well as the combination of locally optimal (off-line) schedules. We identify the delay of a buffer, defined as the ratio between the size of the buffer and its drain rate, as a key quantity in our analysis. In particular, we prove, for any epsilon > 0, a competitive factor of O(1/epsilon) for tree systems in which buffer delays grow by a factor of 1+epsilon at each level toward the output. This means that if buffers grow toward the output links, or if link bandwidths grow toward the input links, then competitive scheduling algorithms are feasible. For the bounded-delay model, we prove that a simple on-line algorithm is competitive on tree topologies.

Joint work with Alex Kesselman, Zvi Lotker and Yishay Mansour.
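To make the FIFO model concrete, below is a minimal sketch in Python of one plausible local policy: a bounded FIFO queue that, on overflow, discards its least valuable buffered packet. The class FIFOBuffer, its parameters, and the drop rule are illustrative assumptions, not the algorithm analyzed in the talk; the delay property simply computes the buffer-size-to-drain-rate ratio identified above.

    from collections import deque

    class FIFOBuffer:
        """A finite FIFO buffer that drops its least valuable packet on overflow
        (a hypothetical local policy, used here only for illustration)."""

        def __init__(self, capacity, drain_rate):
            self.capacity = capacity      # buffer size B (in packets)
            self.drain_rate = drain_rate  # packets forwarded per time step
            self.queue = deque()          # packet values, in arrival order

        @property
        def delay(self):
            # The key quantity from the abstract: buffer size / drain rate.
            return self.capacity / self.drain_rate

        def arrive(self, value):
            # Accept a packet; on overflow, discard the least valuable one buffered.
            self.queue.append(value)
            if len(self.queue) > self.capacity:
                victim = min(range(len(self.queue)), key=lambda i: self.queue[i])
                del self.queue[victim]

        def drain(self):
            # Forward up to drain_rate packets in FIFO order; return the value sent.
            sent = 0.0
            for _ in range(min(self.drain_rate, len(self.queue))):
                sent += self.queue.popleft()
            return sent

    # Example: capacity 4, drain rate 2, so the buffer's delay is 4 / 2 = 2.
    buf = FIFOBuffer(capacity=4, drain_rate=2)
    for v in [1.0, 5.0, 2.0, 9.0, 3.0, 7.0]:
        buf.arrive(v)
    print(buf.delay)    # 2.0
    print(buf.drain())  # 14.0 -- the two oldest surviving packets (values 5.0 and 9.0)

In a merge tree built from such buffers, the condition in the abstract asks that this delay value grow by a factor of at least 1+epsilon from each level to the next toward the output.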