IBM and ETH Zurich Unveil Plan to Build New Kind of Water-cooled Supercomputer
Direct reuse of waste heat. Aims to cut energy consumption by 40% and carbon-dioxide emissions by up to 85%
Making computing systems and data centers energy-efficient is a staggering undertaking. In fact, up to 50 percent of an average air-cooled data center's energy consumption, and hence its carbon footprint, today is caused not by computing but by powering the cooling systems needed to keep the processors from overheating, a situation that is far from optimal when looking at energy efficiency from a holistic perspective.
“Energy is arguably the number one challenge humanity will be facing in the 21st century. We cannot afford anymore to design computer systems based on the criterion of computational speed and performance alone”, explains Prof. D. Poulikakos of ETH Zurich, head of the Laboratory of Thermodynamics in Emerging Technologies and lead investigator of this interdisciplinary project.
“The new target must be high performance and low net power consumption supercomputers and data centers. This means liquid cooling.”
With an innovative water-cooling system and direct heat reuse, Aquasar, the new supercomputer that will be located at ETH Zurich and is planned to start operation in 2010, will reduce overall energy consumption by 40%. The system is based on a long-term research collaboration between ETH and IBM scientists in the field of chip-level water cooling, as well as on a concept for “water-cooled data centers with direct energy re-use” advanced by scientists at IBM's Zurich Lab.
The water-cooled supercomputer will consist of two IBM BladeCenter® servers in one rack and will have a peak performance of about 10 Teraflops.²
Each of the blades will be equipped with a microscale high-performance liquid cooler per processor, as well as input and output pipeline networks and connections, which allow each blade to be connected and disconnected easily to the entire system (see image).
Water as a coolant can capture heat about 4,000 times more efficiently than air, and its heat-transporting properties are also far superior. Chip-level cooling with a water temperature of approximately 60°C is sufficient to keep the chip at operating temperatures well below the maximum allowed 85°C. The high input temperature of the coolant results in an even higher-grade heat as an output, which in this case will be about 65°C.
The pipelines from the individual blades link to the larger network of the server rack, which in turn is connected to the main water transportation network. The water-cooled supercomputer will require about 10 liters of water for cooling, and a pump ensures a flow rate of roughly 30 liters per minute. The entire cooling system is a closed circuit: the cooling water is heated constantly by the chips and then cooled back to the required temperature as it passes through a passive heat exchanger, thus delivering the removed heat directly to the heating system of the university in this experimental phase. This eliminates the need for today's energy-hungry chillers.
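As a rough cross-check of the figures above, the heat the loop carries away follows from the stated flow rate and coolant temperatures. The sketch below is a back-of-the-envelope estimate only; the water density and specific-heat values are standard physical constants assumed by us, not given in the release.

```python
# Estimate of the heat Aquasar's cooling loop removes, using the
# release's figures: ~30 L/min flow, coolant in at ~60 C, out at ~65 C.

FLOW_L_PER_MIN = 30.0      # pump flow rate (from the release)
T_IN_C = 60.0              # coolant inlet temperature (from the release)
T_OUT_C = 65.0             # coolant outlet temperature (from the release)
CP_WATER = 4186.0          # specific heat of water, J/(kg*K) (assumed)
DENSITY_KG_PER_L = 1.0     # density of water (assumed approximation)

# convert volumetric flow to mass flow: ~0.5 kg/s
mass_flow_kg_s = FLOW_L_PER_MIN * DENSITY_KG_PER_L / 60.0

# Q = m_dot * c_p * dT
heat_removed_w = mass_flow_kg_s * CP_WATER * (T_OUT_C - T_IN_C)

print(f"Mass flow: {mass_flow_kg_s:.2f} kg/s")
print(f"Heat removed: {heat_removed_w / 1000:.1f} kW")  # ~10.5 kW
```

On these assumptions the loop delivers on the order of 10 kW of 65°C heat to the university's heating system.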
“Heat is a valuable commodity that we rely on and pay dearly for in our everyday lives. If we capture and transport the waste heat from the active components in a computer system as efficiently as possible, we can reuse it as a resource, thus saving energy and lowering carbon emissions. This project is a significant step towards energy-aware, emission-free computing and data centers,” explains Dr. Bruno Michel, Manager Advanced Thermal Packaging at IBM’s Zurich Research Laboratory.
Three-year collaborative research in emission-free high performance computing
From the industrial side, the project is part of IBM's First-Of-A-Kind program (FOAK), which engages IBM's scientists with clients to explore and pilot emerging technologies that address real-world business problems. It was made possible by the support of IBM Switzerland and the IBM Research and Development Laboratory in Boeblingen, Germany.
This liquid-cooled supercomputer research is planned as a three-year collaborative research program called Direct Re-Use of Waste Heat from Liquid-Cooled Supercomputers: Towards Low Power, High Performance, Zero-Emission Computing and Datacenters, which is funded mainly by IBM, ETH Zurich and the Swiss Competence Center for Energy and Mobility (CCEM). Part of the system will be devoted to further research into cooling technologies and efficiencies by scientists of ETH Zurich, ETH Lausanne, the Swiss Competence Center for Energy and Mobility, and the IBM Zurich Research Lab.
The computational performance of Aquasar is a very important part of the research. Aquasar will be employed by the Computational Science and Engineering Lab of the Computer Science Department at ETH Zurich for multiscale flow simulations pertaining to problems encountered at the interface of nanotechnology and fluid dynamics. Researchers from this laboratory will also optimize the efficiency with which the respective algorithms perform within the system, in collaboration with the IBM Zurich Lab. These activities will be supplemented with algorithms from other research labs participating in the project. With this supercomputer system, scientists intend to demonstrate that the ability to solve important scientific problems efficiently does not need to have an adverse effect on the energy and environmental challenges facing humanity.
1 By making use of a physical carbon offset that fulfills criteria set forth in the Kyoto Protocol. The estimate of 30 tons CO2 is based on the assumptions of average yearly operation of the system and the energy for heating the buildings being produced by fossil fuels.
2 BladeCenter® servers with a mixed population of QS22 IBM PowerXCell 8i processors as well as HS22 with Intel Nehalem processor. In addition, a third air-cooled IBM BladeCenter® server will be implemented to serve as a reference system for measurements. Please note, all numbers provided in the release are estimates and refer to the water-cooled IBM BladeCenter® servers.
IBM Exascale Computing Project
How a million trillion calculations per second will change the world
On June 19, 2009, IBM announced its intent to achieve the next 'moon-shot' in the evolution of the world's most powerful computers – an 'exaflop' system with nearly unimaginable power.
What is Exa-Scale Computing?
An exaflop is the next major speed barrier of supercomputing – the performance of one million trillion calculations in a single second by a single computer. One exaflop equals:
- The combined performance of 50 million laptops¹ – enough to reach 1,000 miles from the ground when stacked, weighing over 100,000 tons.
- 1000 times the power of today's most powerful supercomputer.
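These comparisons are simple arithmetic on the definition of an exaflop (10^18 calculations per second). The sketch below shows the per-laptop speed implied by the 50-million-laptop figure; that implied number is our inference, not a figure stated in the text.

```python
# Sanity-check the exaflop comparisons above.
EXAFLOPS = 1e18    # one exaflop: a million trillion calculations per second
PETAFLOPS = 1e15   # petaflop scale, reached in 2008

# "1000 times the power of today's most powerful supercomputer"
speedup_over_petaflop = EXAFLOPS / PETAFLOPS
print(f"Exaflop / petaflop: {speedup_over_petaflop:.0f}x")

# "combined performance of 50 million laptops" -> implied per-laptop speed
laptops = 50e6
per_laptop_flops = EXAFLOPS / laptops
print(f"Implied per-laptop speed: {per_laptop_flops / 1e9:.0f} gigaflops")
```

The implied 20 gigaflops per laptop is consistent with a circa-2009 dual-core machine.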
How Will Exascale Computing Change the World?
While the magnitude of an exascale supercomputer's power is difficult to imagine, the uses for such a system are already clear to IBM scientists as well as global leaders of businesses and governments.
More than Doubling the World's Oil Reserves
Today's oil recovery techniques – the finding and drilling of oil deposits – have a success rate of only 30 percent. Tomorrow's exascale computing can predict with incredible accuracy the location of oil deposits, increasing those recovery rates to as high as 70 percent.
Predicting and Fighting Pandemics in Real-Time
Today's supercomputers allow scientists to simulate incredibly complex biological functions – identifying the origin of diseases and discovering new treatments. However, these complex tasks can take weeks, even with the help of today's most powerful computers. In the hands of tomorrow's scientists, exascale systems can deliver disease prediction, identification and treatment in real time, allowing doctors to outrun the epidemics of tomorrow.
Modeling 100 Years of Climate Change in a Few Hours
Today's supercomputers can estimate climate change impact to Earth's environment 100 years into the future – performing calculations so intense that they require one month to complete. Tomorrow's exascale systems will reduce this time to only a few hours.
Real-Time Analysis of Oceans of Financial Services Data
Today's supercomputers – like those used by TD Securities – can speed advanced financial calculations by 2000 percent when compared to traditional methods. Tomorrow's exascale financial calculations will include real-time, intelligent analysis of important factors such as investor profile data, live market trading dynamics, RSS news feeds and social networks – helping control financial risk and provide more accurate valuations of assets and investments.
1 Based on typical configuration including Intel dual pro
Additional Information on Supercomputing
The performance of the most powerful supercomputers in the world increases by a factor of 1,000 every 11 years. The first gigaflop system was announced in 1986 (Cray-2), the first teraflop system arrived in 1997 (Intel ASCI Red) and petaflop computing arrived in 2008 (IBM Roadrunner). If these trends continue, and we have every reason to believe they will, we should see the first 100-petaflop system around 2016 and the first exascale system in about 10 years.
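The 1,000x-per-11-years trend can be extrapolated directly. In the minimal sketch below, the milestone years come from the text, while the assumption of smooth exponential growth between milestones is ours.

```python
import math

# Milestone barriers and the year each fell (from the text).
milestones = {"gigaflop": 1986, "teraflop": 1997, "petaflop": 2008}

# 1000x every 11 years -> a constant annual growth factor.
ANNUAL_GROWTH = 1000 ** (1 / 11)

def projected_year(factor_over_petaflop, base_year=2008):
    """Year a system `factor_over_petaflop` times the 2008 petaflop appears,
    assuming the historical growth rate continues unchanged."""
    return base_year + math.log(factor_over_petaflop, ANNUAL_GROWTH)

print(f"100-petaflop system: ~{projected_year(100):.0f}")
print(f"Exaflop system:      ~{projected_year(1000):.0f}")
```

The extrapolation lands the exaflop milestone around 2019, matching the "about 10 years" from the 2009 announcement, and a 100-petaflop system in the mid-2010s, close to the "around 2016" quoted above.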
IBM researchers are already working on exascale computing. We believe there are several key areas critical to reaching exascale computing. These include consistent industry leadership, generally applicable systems that serve a diverse set of clients, and a track record of innovation.
Consistent Industry Leadership
- From November 1999 to June 2009 – over the course of 20 TOP500 lists – IBM systems have accounted for the largest share (upwards of 40% in most cases) of the total computational horsepower of each biannually issued TOP500 list, a record unmatched by any other HPC vendor.
- IBM has held the No. 1 system for five years straight, topping 10 consecutive lists – a record. (The list of companies that have held the No. 1 spot, incidentally, does not include Cray.)
- IBM has had the No.1 system 12 times over the 16-year history of the TOP500 list. A record.
- IBM was the first to announce a petaflop system and is the only vendor to have announced a 20-petaflop system (Sequoia). No other vendor has announced a system, or plans to develop a system, faster than 2 petaflops. Sequoia represents more total compute power than all the systems on November's TOP500 list combined.
Generally Applicable Systems
IBM supercomputers serve the most diverse set of clients, including governments, classified research, academia, weather, aerospace, design, financial services, health care, oil exploration and retail, among others.
IBM has the broadest portfolio of generally applicable technologies on the list to meet the needs of the widest possible customer base (BlueGene, Power6, iDataplex, Cell, Sequoia technology). Cray offers a single system used essentially by a single customer. HP is likewise limited to a single architecture.
IBM has a long history of innovation in supercomputing.
1944 - IBM introduces the Mark I, the first machine that could execute long computations automatically.
1954 - The IBM Naval Ordnance Research Calculator (NORC) was a one-of-a-kind first-generation vacuum-tube computer built for the U.S. Navy Bureau of Ordnance.
1958 - The AN/FSQ-7 was built by IBM with the U.S. Air Force for command and control functions.
1961 - The IBM 7030, also known as Stretch, was used by Los Alamos and was the fastest computer in the world.
1994 - A massively parallel IBM SP2 at the Cornell Theory Center was the fastest general purpose computer in its day.
1997 - "Deep Blue" defeats Garry Kasparov to become the first machine to beat a human chess champion.
1999 - IBM researchers begin "Blue Gene" project.
2000 - IBM delivered ASCI White to the U.S. Energy Department, a system powerful enough at the time to process an Internet transaction for every person on Earth in less than a minute.
2002 - IBM announces plans to build ASCI Purple, the world's first supercomputer capable of up to 100 teraflops, more than twice as fast as the most powerful computer in existence at the time.
2004 - IBM Blue Gene supercomputer officially claimed the top spot as the world's most powerful supercomputer.
2005 - IBM launched the world's most powerful privately owned supercomputer, the Watson Blue Gene system, nicknamed BGW.
2006 - IBM collaborated with the NNSA and the U.S. Department of Energy Office of Science to share a five year, $58M R&D effort to further enhance the capabilities of supercomputing.
2007 - IBM announces Blue Gene/P – three times faster than its predecessor.
2008 - IBM announces a 'hybrid' supercomputer to harness the immense power of the Cell Broadband Engine processor in conjunction with systems based on x86 processors from AMD. Codenamed Roadrunner, the first system will be installed in the U.S. Department of Energy's Los Alamos National Laboratory. This revolutionary supercomputer is capable of a peak performance of over 1 petaflop.
2009 - IBM announces that the Department of Energy's National Nuclear Security Administration has selected Lawrence Livermore National Laboratory as the development site for a new supercomputer. Sequoia will be based on IBM BlueGene technology and will exceed 20 petaflops, or 20 million billion calculations per second – roughly equivalent to the combined computing power of 2,000,000 of today's fastest laptop computers, and more than the combined processing power of today's top 500 supercomputers in the world.
2009 - IBM announces that the fastest supercomputer in Europe is a new Blue Gene system at the Juelich Supercomputing Centre. This system, which will exceed one petaflop, includes a new water-cooling system that uses room-temperature water, reducing the air-conditioning energy needed to cool the Blue Gene by 91% compared with air cooling alone.
- These innovations directly contribute to IBM's ability to offer the most powerful and most energy-efficient supercomputers.
- Take, for example, the two most powerful supercomputers in the world: IBM's Roadrunner and Cray's Jaguar. Both offer similar levels of performance, reaching speeds of just over a petaflop. However, Roadrunner requires less than half the energy of the Cray system. Roadrunner is nearly three times more energy-efficient than the Cray system when measured in megaflops per watt: IBM's No. 1 system delivers 444.9 megaflops per watt of energy, compared with only 154.2 megaflops per watt for the No. 2 system.
- According to last November's Green500 list, IBM had 23 of the top 25 "Green" or energy efficient supercomputers in the world. The report found that the top 20 most energy efficient supercomputers in the world are built on IBM high performance computing technology. The list includes supercomputers from across the globe being used for a variety of applications such as astronomy, climate prediction and pharmaceutical research. IBM also holds 39 of the top 50 positions on this list.
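The "nearly three times" figure in the Roadrunner/Jaguar comparison above follows directly from the two stated megaflops-per-watt numbers:

```python
# Energy-efficiency ratio of the top two systems, using the
# megaflops-per-watt figures quoted in the text.
ROADRUNNER_MFLOPS_PER_WATT = 444.9   # No. 1 system
JAGUAR_MFLOPS_PER_WATT = 154.2       # No. 2 system

efficiency_ratio = ROADRUNNER_MFLOPS_PER_WATT / JAGUAR_MFLOPS_PER_WATT
print(f"Roadrunner is {efficiency_ratio:.1f}x more energy-efficient")  # ~2.9x
```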
Anticipated Future List Trends
- Long-term TOP500 list trends will become more important as the industry moves toward exascale performance.
- The pending reality of exascale computing can already be seen on the June list. Dawn, an IBM BlueGene supercomputer, debuted at number 9 on the list. Dawn is the initial delivery system for IBM's 20-petaflop Sequoia supercomputer, which is at least 10 times more powerful than any system announced by any other vendor. Sequoia represents more total compute power than all the systems on November's TOP500 list combined.
- The industry is experiencing rapid change that is not visible from a quick review of the June 2009 TOP500 list. Former number-one supercomputer provider NEC recently announced that it has abandoned efforts to build its next-generation supercomputer for the Japanese government. Fujitsu has also announced it was abandoning this effort.
- Additional recent announcements from several companies with prominent systems on November's list seriously call into question their potential future supercomputing offerings: SGI was recently bought by Rackable, and Sun is expected to be bought by Oracle.
- Combined, NEC, SGI and Sun accounted for 20 percent of the top 25 systems. Removing these systems dramatically changes the landscape of the top 25.