
CSE Seminar

Network Virtualization for Large Data Centers

Changhoon Kim
Senior Software Design Engineer, Microsoft

Data centers are the digital-era analogue of factories and have become a vital infrastructure for online service providers and enterprises. The golden rule of designing and operating a data center is to maximize the amount of useful work per dollar spent. To meet this goal, the most desirable technical feature is agility: the ability to assign any computing resource to any tenant at any time. Anything less inevitably results in stranded resources and poor performance as perceived by data-center users.

In this talk, I first show why conventional networks, even those specifically designed for large data centers, inhibit rather than facilitate agility. Location-dependent addressing, large and unpredictable performance variance, and poor data- and control-plane scalability are the main culprits. I then put forward network virtualization as a key architectural principle that eliminates all of these constraints in the first place, thereby ensuring agility at scale. The gist of my network-virtualization architecture is a huge-switch abstraction: an imaginary switch that can host as many servers as customers ask for, offers predictably and uniformly high capacity between any servers under any traffic pattern, and yet appears to be dedicated to each individual customer. With this clean, familiar, and yet powerful abstraction, data-center providers and tenants can simply stop worrying about the performance, reachability, isolation, and addressing problems that can arise in a shared network hosting diverse, unpredictable, and even hostile workloads. I then explain how I turn this high-level abstraction into an operational system that virtualizes mega-data-center networks running real-world cloud services. In particular, I show how my designs uniquely exploit a few critical opportunities and recent technical trends in data centers, ranging from the power of the software switch present in every hypervisor, to the principle of separating network state from host state, to the availability of commodity networking chips.
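The interplay of the abstract's three ingredients, location-independent tenant addressing, a directory that separates network state from host state, and encapsulation in the hypervisor's software switch, can be illustrated with a minimal sketch. All class and variable names here (`DirectoryService`, `SoftwareSwitch`, the AA/LA terminology borrowed from VL2) are illustrative assumptions, not the production system's API:

```python
# Hypothetical sketch: VL2-style address virtualization.
# A directory service maps each tenant's application address (AA) to the
# location address (LA) of the physical host currently running that VM.
# The hypervisor's software switch consults the directory and encapsulates
# outbound packets, so tenant addressing stays location-independent.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    tenant: str    # tenant ID keeps overlapping address spaces isolated
    src_aa: str    # application (virtual) address of the sender
    dst_aa: str    # application (virtual) address of the receiver
    payload: bytes

class DirectoryService:
    """Maps (tenant, AA) -> LA; updated when a VM is placed or migrated."""
    def __init__(self):
        self._map = {}

    def register(self, tenant, aa, la):
        self._map[(tenant, aa)] = la

    def lookup(self, tenant, aa):
        return self._map[(tenant, aa)]

class SoftwareSwitch:
    """Hypervisor vswitch: wraps tenant packets in an outer header
    addressed to the destination host's location address (LA)."""
    def __init__(self, directory):
        self.directory = directory

    def encapsulate(self, pkt):
        la = self.directory.lookup(pkt.tenant, pkt.dst_aa)
        # The outer header targets the physical host; the inner packet is
        # untouched, so a VM migration needs only a directory update.
        return {"outer_dst_la": la, "inner": pkt}

# Usage: two tenants reuse the same virtual address without conflict.
directory = DirectoryService()
directory.register("tenantA", "10.0.0.5", "host-17")
directory.register("tenantB", "10.0.0.5", "host-42")
vswitch = SoftwareSwitch(directory)
frame = vswitch.encapsulate(Packet("tenantB", "10.0.0.9", "10.0.0.5", b"hi"))
print(frame["outer_dst_la"])  # -> host-42
```

Because all mapping state lives in the directory rather than in the network fabric, moving a VM to another host changes one directory entry while the tenant's addresses, and the fabric's routing, stay untouched.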
Changhoon Kim works at Windows Azure, Microsoft's cloud-service division, where he leads research and engineering projects on the architecture, performance, management, and operation of data-center and enterprise networks. His research themes span network virtualization, big-data processing platforms, programmable networks, self-configuring networks, and the debugging and diagnosis of large-scale distributed systems. Changhoon received his Ph.D. from Princeton University in 2009, where he worked with Prof. Jennifer Rexford. Many of his research outcomes (including SEATTLE, VL2, VNet, Seawall, EyeQ, and the relay-routing technology for VPNs) have either been directly adopted by production service providers or are under review by standards bodies such as the IETF. In particular, his VL2 work was published as an invited paper in the Research Highlights section of the Communications of the ACM (CACM), which the editors recognize as "one of the most important research results published in CS in recent years". He is the recipient of the 2013 Rockstar Award, an annual recognition for the strongest individual networking contributions Microsoft-wide.

Sponsored by

CSE