MASCOTS 2011


Keynotes

Challenges of Virtualized System: Performance Point of View

Hai Jin
Professor
Dean, School of Computer Science and Technology
Director, Cluster and Grid Computing Lab
Director, Services Computing Technology and System Lab
Huazhong University of Science and Technology, China
Email: hjin@hust.edu.cn
http://grid.hust.edu.cn/hjin/

Abstract: Virtualization is a rapidly evolving technology that provides a range of benefits to computing systems, such as improved resource utilization and management, application isolation and portability, and system reliability. Among these features, live migration and resource management (including vCPU scheduling and I/O management) are core functions. Live migration of virtual machines (VMs) enables virtual server mobility without disrupting service, and it has become an extremely powerful tool for system management in a variety of key scenarios, such as VM load balancing, fault tolerance, and power management. Experiments and traces show, however, that live migration performance is not yet good enough across different applications. Likewise, the management schemes for CPU and I/O resources in the virtualization architecture need to be reconsidered when supporting applications with different workloads.
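
To make the migration cost concrete, here is a minimal back-of-the-envelope sketch in Python of the standard iterative pre-copy scheme that Xen's live migration uses. It is an illustration under assumed constant page dirty rate and network bandwidth, not the application-oblivious prediction models presented in the talk:

    # Minimal sketch of an iterative pre-copy migration cost estimate.
    # Assumptions (illustrative, not from the talk): constant page dirty
    # rate and constant bandwidth; sizes in MB, rates in MB/s.
    def precopy_cost(mem_mb, dirty_mbps, bw_mbps,
                     stop_threshold_mb=50, max_rounds=30):
        """Return (total_time_s, downtime_s, data_sent_mb)."""
        to_send = mem_mb              # round 0 pushes the whole memory image
        total_time = 0.0
        data_sent = 0.0
        for _ in range(max_rounds):
            t = to_send / bw_mbps     # time to transfer this round
            total_time += t
            data_sent += to_send
            to_send = dirty_mbps * t  # pages dirtied during the transfer
            if to_send <= stop_threshold_mb:
                break                 # small enough to stop-and-copy
        downtime = to_send / bw_mbps  # VM paused for the final copy
        return total_time + downtime, downtime, data_sent + to_send

    # Example: a 4 GB VM, 40 MB/s dirty rate, ~120 MB/s (1 Gb/s) link.
    print(precopy_cost(4096, 40, 120))

With these example numbers the model predicts about 51 seconds of total migration time but only about 0.14 seconds of downtime, illustrating why the dirty rate and the stop condition, rather than memory size alone, dominate migration cost.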
In this talk, some typical issues in virtualized systems will be discussed.

First, to account for migration overhead in migration decision making, we thoroughly analyze the key parameters that affect migration cost, from theory to practice, and construct two application-oblivious models for cost prediction using knowledge about the workloads learned at the hypervisor (also called VMM) level. We evaluate the models using five representative workloads in a Xen virtualized environment. Based on these models, we propose two live migration schemes for different scenarios: a novel approach that adopts checkpointing/recovery and trace/replay technology to provide fast, transparent VM migration for applications requiring high reliability, and a memory-compression-based VM migration system for ordinary applications.

Second, the asynchronous-synchronous disk I/O model in a typical virtualized system exhibits several problems; for example, when the frontend fails abruptly, the unsaved data in the frontend's cache is lost. To address these problems, we introduce a new I/O model in which, rather than performing asynchronous-synchronous operations for an asynchronous I/O write request, the frontend file system handles the request synchronously while the backend file system performs asynchronous operations to write the data to disk. A prototype system called HypeGear has been implemented on the Xen hypervisor.

Third, VMM schedulers have focused on fairly sharing processor resources among domains and rarely consider VCPUs' behaviors. This can result in poor application performance in overcommitted domains that host concurrent programs. We review the properties of Xen's Credit and SEDF schedulers and show how these schedulers can seriously degrade the performance of communication-intensive and I/O-intensive concurrent applications in overcommitted domains. We then propose a novel approach that dynamically scales the context-switch frequency by selecting variable time slices according to VCPUs' behaviors, making the Credit scheduler more adaptive for concurrent applications.
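
As a rough illustration of the time-slice idea, the hypothetical Python sketch below (invented names and thresholds, not the Credit scheduler's actual code) scales each VCPU's next slice with how much of its recent slices it actually consumed before blocking:

    # Hypothetical sketch: adapt a VCPU's time slice to its behavior.
    # VCPUs that block early (I/O- or communication-bound) get short
    # slices and hence more frequent context switches; CPU-bound VCPUs
    # keep long slices and low switching overhead.
    MIN_SLICE_MS = 1
    MAX_SLICE_MS = 30   # 30 ms is the Credit scheduler's default slice

    def next_slice(used_ms, granted_ms):
        """Pick the next slice from recent slice-utilization history."""
        utilization = sum(used_ms) / (sum(granted_ms) or 1)  # 1.0 = CPU-bound
        slice_ms = MIN_SLICE_MS + utilization * (MAX_SLICE_MS - MIN_SLICE_MS)
        return max(MIN_SLICE_MS, min(MAX_SLICE_MS, round(slice_ms)))

    # A VCPU that blocked after ~2 ms of each 30 ms slice gets ~3 ms slices:
    print(next_slice([2, 1, 3], [30, 30, 30]))     # -> 3
    # A VCPU that always exhausted its slice keeps the full 30 ms:
    print(next_slice([30, 30, 30], [30, 30, 30]))  # -> 30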

Biography: Hai Jin is a Cheung Kung Scholars Chair Professor of computer science and engineering at the Huazhong University of Science and Technology (HUST) in China. He is now Dean of the School of Computer Science and Technology at HUST. Jin received his PhD in computer engineering from HUST in 1994. In 1996, he was awarded a German Academic Exchange Service fellowship to visit the Technical University of Chemnitz in Germany. Jin worked at The University of Hong Kong between 1998 and 2000, and as a visiting scholar at the University of Southern California between 1999 and 2000. He was awarded the Excellent Youth Award from the National Science Foundation of China in 2001. Jin is the chief scientist of ChinaGrid, the largest grid computing project in China, and the chief scientist of the National 973 Basic Research Program project on Virtualization Technology of Computing System.
Jin is a senior member of the IEEE, a member of the ACM, and a member of the Grid Forum Steering Group (GFSG). He has co-authored 15 books and published over 400 research papers. His research interests include computer architecture, virtualization technology, cluster computing and grid computing, peer-to-peer computing, network storage, and network security.

Jin is the steering committee chair of the International Conference on Grid and Pervasive Computing (GPC), the Asia-Pacific Services Computing Conference (APSCC), the International Conference on Frontier of Computer Science and Technology (FCST), and the Annual ChinaGrid Conference. He is also a member of the steering committees of the IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid), the IFIP International Conference on Network and Parallel Computing (NPC), the International Conference on Grid and Cooperative Computing (GCC), the International Conference on Autonomic and Trusted Computing (ATC), and the International Conference on Ubiquitous Intelligence and Computing (UIC).

 

Massive-Scale Parallel Network Simulations, Past, Present and Future

George Riley
Associate Professor
Electrical and Computer Engineering
Georgia Institute of Technology, USA
riley@ece.gatech.edu

Abstract: Discrete event simulation tools for analyzing the performance of computer networks have been available for decades, dating back to the early days of the venerable ns-2 and continuing through GTNetS, SSFNet, ROSSNet, and most recently ns-3. At each step along the way, various developers and researchers have reported on "large-scale" simulation experiments using these tools. As the available hardware platforms have grown in scale, the scale of network simulation experiments has grown similarly. In this talk, we will discuss the various reported "large-scale" or "massive-scale" experiments, the approaches used to achieve the larger scale, and the drawbacks of those experiments. Finally, we will try to look a bit into the future to see where this field might be in the coming years.
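
For context, all of the simulators named above share the same discrete event core: a time-ordered event queue drained by a single loop. A minimal Python sketch of that core (illustrative only; none of this code comes from the actual tools) looks like this:

    # Minimal discrete event simulation core: a min-heap of timestamped
    # events, processed in time order by a single loop.
    import heapq

    class Simulator:
        def __init__(self):
            self.now = 0.0
            self.queue = []   # (time, seq, handler, args), ordered by time
            self.seq = 0      # tie-breaker for events at the same time

        def schedule(self, delay, handler, *args):
            heapq.heappush(self.queue,
                           (self.now + delay, self.seq, handler, args))
            self.seq += 1

        def run(self):
            while self.queue:
                self.now, _, handler, args = heapq.heappop(self.queue)
                handler(*args)

    sim = Simulator()
    def packet_arrival(node):
        print(f"t={sim.now:.3f}s: packet arrives at node {node}")
        if node < 3:          # forward over a link with 10 ms delay
            sim.schedule(0.010, packet_arrival, node + 1)

    sim.schedule(0.0, packet_arrival, 0)
    sim.run()

Scaling this to massive experiments means partitioning the topology across many such loops and synchronizing their clocks, which is the parallel/distributed simulation problem that pdns, GTNetS, and ns-3's distributed mode address.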

Biography: George Riley is an Associate Professor of Electrical and Computer Engineering at the Georgia Institute of Technology. He received his Ph.D. in computer science from the Georgia Institute of Technology, College of Computing, in August 2001. His research interests are in large-scale simulation using distributed simulation methods. He is the developer of Parallel/Distributed ns2 (pdns) and the Georgia Tech Network Simulator (GTNetS), and is co-PI on the ns-3 development effort.

Before turning to a career in academia, Dr. Riley spent 20 years as an independent consultant and business owner, primarily working at the Air Force Eastern Test Range, developing and deploying systems for real-time missile launch support.

 
