Achieving Efficiency through Concurrent Design of Hardware and Facility

Keywords: Design, Hardware, Facebook
Earlier this year, Facebook's first data center built and owned by the company began serving millions of Facebook users from Prineville, Oregon. Unlike conventional data centers, the facility has no chiller plant and instead uses direct evaporative cooling. With a built-in mechanical penthouse housing the evaporative cooling and humidification systems as well as fan walls, the data halls have ductless supply and return. Apart from the chiller-less cooling scheme, Facebook's data center employs an electrical distribution scheme that differs from conventional designs: a PDU-less layout delivers 480/277 V at the cabinet, backed up by a custom DC UPS. The data center houses numerous servers that Facebook designed and custom built in conjunction with the data center itself. The custom server (277 VAC/48 VDC) with front access has helped cut distribution losses and improve serviceability. Facebook has shared the specifications of the hardware and the data center's mechanical/electrical systems with the industry as part of its "Open Compute Project", which aims to create an open environment where servers and data centers can be developed in a way similar to open-source software projects. The presentation will focus on design details of the data center and the custom hardware; different aspects of deployment and operation will also be discussed.
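The distribution-loss argument reduces to multiplying the efficiency of every conversion stage between the utility feed and the server. The following minimal Python sketch of that arithmetic uses purely hypothetical stage efficiencies (the abstract quotes no figures) to contrast a conventional double-conversion UPS/PDU chain with a 277 VAC direct feed backed by a standby 48 VDC UPS:

# Illustrative power-chain efficiency comparison. All stage efficiencies
# below are assumed for the sketch; they are not from the abstract.

# Conventional chain: utility -> double-conversion UPS -> PDU transformer -> 208 VAC PSU
conventional = {
    "double-conversion UPS": 0.92,
    "PDU transformer (480 V -> 208 V)": 0.97,
    "server PSU (208 VAC)": 0.90,
}

# PDU-less chain: utility -> 277 VAC direct to the server PSU,
# with the 48 VDC UPS on standby (no double conversion in the normal path).
direct_277v = {
    "server PSU (277 VAC, standby 48 VDC UPS)": 0.94,
}

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

for name, chain in [("conventional", conventional), ("277 VAC direct", direct_277v)]:
    print(f"{name}: {chain_efficiency(chain):.1%} end-to-end")

Under these assumed numbers the conventional chain loses roughly 20% of input power in distribution versus about 6% for the direct-feed chain, which is the motivation for pairing the 277 VAC server power supply with an offline DC UPS rather than an always-in-path double-conversion unit.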
Veerendra P. Mulay, Ph.D., Thermal Engineer / Hardware Design
Palo Alto, CA