Google's Green ICT Updates
Google offers frequent updates on its Green ICT progress. Here is the most recent, along with past updates.
Photos, inside and out, of Google's data centers. Note that most locations include a reference to some green initiative.
This has not been detailed on Google's useful blog, but Grist reports that "Google’s new $700 million data centers in Taiwan will make ice at night, when electricity is significantly cheaper, and use it to cool the buildings during the day."
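As a rough illustration of why shifting cooling to off-peak hours pays off, here is a back-of-envelope sketch. Every number below (load, tariffs, storage penalty) is a hypothetical illustration, not a Google figure:

```python
# Back-of-envelope economics of ice-based thermal storage.
# All numbers below are hypothetical illustrations, not Google figures.

cooling_load_kwh_per_day = 24_000   # electricity needed to run the chillers (assumed)
peak_rate = 0.15                    # $/kWh, daytime tariff (assumed)
off_peak_rate = 0.06                # $/kWh, overnight tariff (assumed)
storage_penalty = 1.10              # ~10% extra energy to freeze, store, and melt the ice (assumed)

cost_daytime_chillers = cooling_load_kwh_per_day * peak_rate
cost_ice_storage = cooling_load_kwh_per_day * storage_penalty * off_peak_rate

print(f"Daytime chillers: ${cost_daytime_chillers:,.0f}/day")
print(f"Night-made ice:   ${cost_ice_storage:,.0f}/day")
print(f"Savings:          ${cost_daytime_chillers - cost_ice_storage:,.0f}/day")
```

Even with a modest storage penalty, the off-peak rate dominates the comparison, which is why ice storage is attractive wherever tariffs vary sharply by time of day.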
Google announced in January 2012, "All of our U.S. owned and operated data centers have received ISO 14001…certification. We’re the first major Internet services company to gain external certification for those high standards at all of our U.S. data centers." Here are some of the specifics.
"To reduce the environmental impact of [emergency backup] generators, we’ve done two things: first, we minimized the amount of run time and need for maintenance of those generators. Second, we worked with the oil and generator manufacturers to extend the lifetime between oil changes. So far we’ve managed to reduce our oil consumption in those generators by 67 percent."
"…each of our servers in the data center has a battery on board to eliminate any interruptions to our power supply. To ensure the safety of the environment and our workers, we devised a system to make sure we handle, package, ship and recycle every single battery properly."
Past Updates
Google's Bill Weihl gave an update on ICT sustainability at Green:Net 2010. Livestream has posted the video; here are the items I found useful. (Time location in video for each item is in parentheses in mm:ss format.)
Edge gear (clients, etc.) already represents about half of ICT GHG emissions and will approach three-fifths by 2020 (01:20). This is not new data, but it reminds us that Green ICT has to focus on the edge, as well as the facilities/telecom infrastructure, to have maximum impact. See Vertatique's inventory of 12 billion edge devices.
He repeats the 2007 statistic that typical PUE is 2.0 (04:50), but there is evidence suggesting average PUE is significantly worse. Nonetheless, Weihl argues that "A PUE of 1.5 or less should be achievable in most facilities" without a "purpose designed" facility or "without using any exotic technology." (11:00) Weihl urges us to "spec high efficiency components, from power distribution infrastructure all the way through your computing systems (including client systems)" (12:00). Ironically, this could increase a facility's PUE, so it is important to remember that minimizing total energy consumption, not just achieving good consumption efficiency ratios, is the bottom line.
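To see why more efficient IT gear can push PUE the wrong way, consider this simplified sketch. The loads are hypothetical, not figures from the talk:

```python
# PUE = total facility energy / IT equipment energy.
# Hypothetical loads showing how efficient IT gear can raise PUE
# even as total consumption falls.

def pue(it_kw, overhead_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + overhead_kw) / it_kw

# Before: 1,000 kW of IT load, 800 kW of cooling/power-distribution overhead.
before_it, before_overhead = 1000.0, 800.0
# After: high-efficiency servers cut IT load 30%; overhead falls too, but less steeply.
after_it, after_overhead = 700.0, 700.0

print(f"Before: PUE {pue(before_it, before_overhead):.2f}, total {before_it + before_overhead:.0f} kW")
print(f"After:  PUE {pue(after_it, after_overhead):.2f}, total {after_it + after_overhead:.0f} kW")
# PUE worsens (1.80 -> 2.00) even though total draw drops by 400 kW.
```

The ratio penalizes the facility for shrinking its denominator, which is exactly Weihl's point: track total kilowatt-hours, not just the efficiency ratio.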
Cooling accounts for more than 70% of non-IT data center power consumption. His solutions: keep hot and cold separate, turn up the thermostat, and give chillers a rest. More on innovative cooling. (05:25)
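A quick sanity check on how much power that cooling share represents, assuming a hypothetical 1 MW IT load:

```python
# Rough estimate of cooling power implied by PUE, using a hypothetical 1 MW IT load
# and the ~70% cooling share of non-IT overhead cited above.

it_load_kw = 1000.0
cooling_share_of_overhead = 0.70

for pue in (2.0, 1.5):
    overhead_kw = it_load_kw * (pue - 1.0)          # non-IT power implied by PUE
    cooling_kw = overhead_kw * cooling_share_of_overhead
    print(f"PUE {pue}: overhead {overhead_kw:.0f} kW, of which cooling is roughly {cooling_kw:.0f} kW")
# Dropping PUE from 2.0 to 1.5 roughly halves the cooling draw: ~700 kW -> ~350 kW.
```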
Google's practice: "typically run our data centers at around 80 degrees Fahrenheit." (07:35) But "make sure you've eliminated hot spots." (09:25)
Google is experimenting with simple aisle end caps and meat-locker curtains to separate hot and cold air at one of its corporate data centers. "There are very cheap solutions...you can apply in virtually any data center" for hot/cold-aisle containment. The vinyl curtains appear to be both cheaper and more flexible than hard-wall containment systems. (picture at 08:20)
Google uses economizers (evaporative cooling, cooling towers, free air) to minimize chiller use. (10:00)
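A minimal sketch of the economizer idea: whenever the outside air (or evaporatively cooled water) is cold enough, bypass the chillers entirely. The temperature data and threshold below are made-up illustrations, not data from any Google site:

```python
# Count "free cooling" hours: hours cold enough to bypass the chillers.
# The hourly temperatures and the 18 C threshold are illustrative assumptions.

hourly_temps_c = [12, 13, 15, 17, 19, 22, 24, 23, 21, 18, 16, 14] * 2  # one fictional day
free_cooling_threshold_c = 18.0

free_hours = sum(1 for t in hourly_temps_c if t <= free_cooling_threshold_c)
print(f"Economizer hours: {free_hours} of {len(hourly_temps_c)} "
      f"({100 * free_hours / len(hourly_temps_c):.0f}% chiller bypass)")
```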
One of the most interesting topics came up in the Q&A, when an audience member asked about the authority granted Google by the Federal Energy Regulatory Commission (FERC) in February to buy and sell power. Weihl explained it in terms of being able to quickly stimulate greater investment in renewable power generation. The idea is that Google can commit to large, long-term contracts for its facilities early on, knowing it can resell the excess power until it is fully needed. "We don't want to be an energy trader…the next Enron." (18:45)
In the forefront is Google's participation in the Climate Savers Computing Initiative, which "brings together industry, consumers and conservation organizations to significantly increase the energy efficiency of computers and servers."
For its own use, the company is building a large server farm in The Dalles, Oregon, citing the region's renewable energy (hydroelectric) and temperate climate (reduced cooling). Waste heat is reported to be further addressed by evaporative cooling, using water pumped through the data center from the Columbia River.
Google is also working with computer manufacturers to build servers that require only 12V DC power, an approach more energy-efficient than the current practice of each unit requiring 120V AC and then converting it to DC internally.
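A rough comparison of where the savings come from: every conversion stage loses a little power, so fewer, more efficient stages waste less. The stage efficiencies below are illustrative assumptions, not measured figures:

```python
# Compare power-conversion chains using illustrative (assumed) stage efficiencies.
# Conventional: facility UPS (AC) -> server power supply (AC->DC) -> on-board DC regulators.
# 12V DC approach: one high-efficiency facility-level rectifier, then on-board regulators.

def chain_efficiency(*stages):
    """Overall efficiency of conversion stages in series."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

conventional = chain_efficiency(0.92, 0.85, 0.90)   # assumed stage efficiencies
dc_12v       = chain_efficiency(0.95, 0.92)         # assumed stage efficiencies

it_useful_kw = 1000.0  # hypothetical power actually consumed by the electronics
print(f"Conventional AC chain: {it_useful_kw / conventional:,.0f} kW drawn from the grid")
print(f"12V DC distribution:   {it_useful_kw / dc_12v:,.0f} kW drawn from the grid")
```

Under these assumptions the DC path draws a few hundred kilowatts less for the same useful IT load; the exact gap depends entirely on the real-world efficiencies of each stage.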
Update 2009.02.18
Google has made significant advancements in sustainable data centers and has posted detailed reports at http://www.google.com/corporate/green/datacenters/
Good Info
Saving power in cooling server rooms has become a real challenge for all data centers. Cold-aisle containment would be the best solution; the curtains seem to be cheaper and more effective.