I worked in the Cornell Network Research Group under the direction of Prof. Srinivasan Keshav. Currently, I am visiting AT&T Labs (Florham Park, NJ), working with Balachander Krishnamurthy. My research interests are computer networking (e.g., network-aware clustering and its applications, Web performance, content distribution, BGP routing, differentiated services, QoS routing, network dynamics and performance measurement, Internet computing, and other Internet-related research topics), distributed systems, multimedia systems, and computer graphics.
Current Work
Previous Work
Abstract Being able to identify the groups of clients that are responsible for a significant portion of a Web site's requests can be helpful to both the Web site and the clients. In a Web application, it is beneficial to move content closer to groups of clients that are responsible for large subsets of requests to an origin server. We introduce clusters---a grouping of clients that are close together topologically and likely to be under common administrative control. We identify clusters using a "network-aware" method, based on information available from BGP routing table snapshots. Experimental results show that our entirely automated approach is able to identify clusters for 99.9% of the clients in a wide variety of Web server logs. Sampled validation results show that the identified clusters meet the proposed validation tests in over 90% of the cases. An efficient self-corrective mechanism increases the applicability and accuracy of our initial approach and makes it adaptive to network dynamics. In addition to being able to detect unusual access patterns made by spiders and (suspected) proxies, our proposed method is useful for content distribution and proxy positioning, and applicable to other problems such as server replication and network management.
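The core of the clustering step is matching each client IP address against prefixes from a BGP routing table snapshot and grouping clients that share the same longest matching prefix. The sketch below illustrates that idea only; the prefix list and function names are hypothetical stand-ins, not the paper's actual implementation or data.

```python
import ipaddress
from collections import defaultdict

# Hypothetical prefix list standing in for a BGP routing table snapshot.
BGP_PREFIXES = [ipaddress.ip_network(p) for p in (
    "192.0.2.0/24", "198.51.100.0/22", "203.0.113.0/25",
)]

def cluster_clients(client_ips):
    """Group client IPs by their longest matching BGP prefix.

    Clients that fall under the same prefix are assumed to be
    topologically close and likely under common administration.
    """
    clusters = defaultdict(list)
    for ip_str in client_ips:
        ip = ipaddress.ip_address(ip_str)
        # Collect all covering prefixes, then keep the most specific one.
        matches = [net for net in BGP_PREFIXES if ip in net]
        if matches:
            best = max(matches, key=lambda net: net.prefixlen)
            clusters[str(best)].append(ip_str)
        else:
            # No covering prefix in the snapshot: left unclustered.
            clusters["unclustered"].append(ip_str)
    return dict(clusters)
```

A production version would use a proper longest-prefix-match trie over the full routing table rather than a linear scan, but the grouping logic is the same.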
The Implication of Network Performance on Service Quality
This is joint work with Yu Zhang and Prof. S. Keshav. A paper has been submitted for publication. The detailed version of the paper is available as Technical Report TR99-1754, Department of Computer Science, Cornell University, June 29, 1999.
Abstract As the Internet infrastructure evolves to include Quality of Service (QoS), it is necessary to map application quality requirements to network performance specifications in terms of delay and loss rate. In this paper, we first propose a new testbed that combines simulation and emulation techniques. Next, we use this testbed to study the effect of network performance metrics on the perceptual quality of various applications, demonstrating the testbed's applicability. We examine two typical applications with the new testbed: the World Wide Web and Internet telephony. In our study of the World Wide Web, we derive a new TCP short-connection model that computes the latency of Web retrieval accurately and efficiently, given only the packet delay and loss rate characteristics a priori. In the study of Internet telephony, we show that packet delay variance is the dominant network characteristic affecting perceptual quality. As a result, service quality drops dramatically when the network offered load reaches 80%. This can serve as a guideline for studies aimed at improving the service quality of Internet telephony.
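For intuition about why short Web transfers are dominated by round-trip delay rather than bandwidth, consider the loss-free case: latency is roughly one RTT for the connection handshake plus one RTT per slow-start round of the exponentially growing congestion window. The sketch below is a textbook-style simplification for illustration, not the TCP short-connection model derived in the paper (which also accounts for loss).

```python
import math

def slow_start_latency(size_bytes, rtt, mss=1460, initial_cwnd=1):
    """Rough latency estimate for a short, loss-free TCP transfer.

    One RTT for the handshake, plus one RTT per slow-start round.
    After k rounds with a doubling window, initial_cwnd * (2**k - 1)
    segments have been delivered, so we solve for the smallest k
    that covers the transfer.
    """
    segments = math.ceil(size_bytes / mss)
    rounds = math.ceil(math.log2(segments / initial_cwnd + 1))
    return rtt + rounds * rtt
```

For example, a 14.6 KB page over a 100 ms RTT path needs 10 segments, i.e. 4 slow-start rounds, so latency is dominated by 5 RTTs regardless of link bandwidth.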
Abstract The Internet is increasingly being called upon to provide different levels of service to different applications and users. A practical problem in doing so is that although Ethernet is one of the hops for nearly all communication on the Internet, it does not provide any QoS guarantees. A natural question, therefore, is the effect of offered load on Ethernet throughput and delay. In this paper, we present several techniques for accurately and efficiently modeling the behavior of a heavily loaded Ethernet link. We first present a distributed approach to exact simulation of Ethernet, which greatly simplifies collision detection. Then, we describe an efficient distributed simulation model, called Fast Ethernet Simulation, that empirically models an Ethernet link to simulate it quickly and accurately. By eliminating the implementation of the CSMA/CD protocol, our approach reduces computational complexity drastically while maintaining the desired accuracy. Performance results show that our techniques not only add very little overhead (less than 5% in our tests) to the basic cost of simulating an Ethernet link, but also closely match real-world measurements. We also present efficient techniques for compressing cumulative distributions using hyperbolic curves and for monitoring the load on a heavily loaded link.
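The idea behind compressing a cumulative distribution with a curve is to replace a long table of empirical CDF points with a few fitted parameters. The sketch below fits a one-parameter hyperbolic form F(x) = x / (x + c); this particular form and the fitting method are illustrative assumptions, not the compression scheme used in the paper.

```python
def fit_hyperbolic_cdf(xs, Fs):
    """Fit F(x) = x / (x + c) to empirical CDF points (xs, Fs).

    Each point with F > 0 yields an exact per-point solution
    c = x * (1 - F) / F; averaging these gives a simple, closed-form
    estimate. A least-squares fit would be more robust to noise.
    """
    cs = [x * (1 - F) / F for x, F in zip(xs, Fs) if F > 0]
    return sum(cs) / len(cs)

def hyperbolic_cdf(x, c):
    """Evaluate the compressed distribution at x."""
    return x / (x + c)
```

The entire empirical distribution is then stored as the single parameter c, and any quantile can be recovered by evaluating or inverting the fitted curve.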
Abstract Automatically detecting and correcting disoriented image frames in a content-related image sequence is a problem that must be addressed in many image and video applications. In this paper, we present a robust feature-based algorithm that efficiently detects and corrects the disorientation of images.
Yesterday is a memory,
Tomorrow is a dream,
Today is the reality,
Make the most of it.