Cooling Data Centers Could Prevent Massive Electrical Waste

Published on June 27th, 2008 | by Timothy B. Hurst

Cables running into servers at a data center

According to the EPA's 2007 report to Congress on data center energy use, the nation's servers and data centers consumed about 61 billion kilowatt-hours (kWh) in 2006, or 1.5% of total U.S. consumption, roughly equivalent to the amount consumed by 5.8 million average U.S. households. These numbers are only expected to grow.

The energy used by the nation's servers and data centers is growing at an unsustainable rate, and the machines themselves are notoriously inefficient. Computer servers are used at only 6% of their capacity on average, while data center facilities operate at roughly 65% to 75% efficiency, meaning that 25% to 35% of all the energy a facility draws is wasted (converted to heat) before it does any computing.
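
To put those percentages in kilowatt-hour terms, here is a quick back-of-the-envelope check (a sketch using only the estimates quoted above, not primary measurements):

```python
# Rough arithmetic on the consumption and waste figures quoted above.
total_kwh = 61e9      # estimated 2006 U.S. data center consumption (kWh)
households = 5.8e6    # average U.S. households with equivalent annual use

print(f"~{total_kwh / households:,.0f} kWh per household per year")  # ~10,517

# If a facility runs at 65-75% efficiency, the complement is shed as heat.
for efficiency in (0.65, 0.75):
    wasted_kwh = total_kwh * (1 - efficiency)
    print(f"at {efficiency:.0%} efficiency, ~{wasted_kwh / 1e9:.1f} billion kWh become heat")
```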

If we are serious about reducing our energy consumption and carbon footprint, the growing demand from our servers and data centers belongs near the top of the list of targets for improvement. And the Department of Energy agrees.

Researchers at DOE's Pacific Northwest National Laboratory (PNNL) in Washington and National Renewable Energy Laboratory (NREL) in Colorado are hard at work on ways to make our data storage infrastructure more efficient by running it at lower temperatures. The technology already exists to achieve efficiencies of 80% to 90% in conventional server power supplies. Moving that heat source, the power supply, away from the server lets cooling efforts focus on the computing elements themselves.
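
To get a sense of what that power supply upgrade is worth per machine, a minimal sketch (only the efficiency values come from the article; the 300 W DC load, the 75% baseline, and the always-on duty cycle are hypothetical):

```python
# Per-server effect of the power supply efficiencies named above. Only the
# 75% and 90% figures trace to the article; the 300 W DC load and the
# always-on duty cycle are assumptions made for illustration.
dc_load_w = 300.0            # power the server's components actually draw
hours_per_year = 24 * 365

for efficiency in (0.75, 0.90):
    wall_w = dc_load_w / efficiency
    waste_kwh = (wall_w - dc_load_w) * hours_per_year / 1000
    print(f"{efficiency:.0%} PSU: {wall_w:.0f} W at the wall, "
          f"~{waste_kwh:.0f} kWh/year dissipated as heat")
```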

Alternative Cooling Approaches

(from PNNL’s Energy Smart Data Center)

  • Evolutionary progress is being made with conventional air cooling techniques, which are prized for their reliability. Current investigation focuses on novel heat sinks and fan technologies aimed at improving contact surface, conductivity, and heat transfer parameters.
  • One of the most effective air cooling options is Air Jet Impingement. The design and manufacture of nozzles and manifolds for jet impingement are relatively simple.
  • The same benefits that apply to Air Jet Impingement are exhibited by Liquid Impingement technologies. In addition, liquid cooling offers much higher heat transfer coefficients, at the cost of greater design and operational complexity (the sketch after this list gives a rough sense of the scale of that difference).
  • One of the most interesting liquid cooling technologies is the microchannel heat sink paired with a micropump, because the channels can be manufactured in the micrometer range with the same process technologies used for electronic devices.
  • Liquid metal cooling, long used in nuclear reactors, is becoming an interesting alternative for high-power-density micro devices. Large heat transfer coefficients are achieved by circulating the liquid with electromagnetic (magnetohydrodynamic) pumps. The pumping circuit is reliable because no moving parts, other than the liquid itself, are involved in the cooling process, and the high conductivity of metals further improves heat transfer. Their low heat capacity leads to less stringent requirements for heat exchangers.
  • Heat extraction with liquids can be increased by several orders of magnitude by exploiting phase changes. Heat pipes and thermosyphons exploit the high latent heat of vaporization to remove large quantities of heat from the evaporator section. The circuits are closed by either capillary action in the case of heat pipes or gravity in the case of thermosyphons. These devices are therefore very efficient but are limited in their temperature range and heat flux capabilities.
  • Thermoelectric Coolers have the ability to provide localized spot cooling, an important capability in modern processor design. Research in this area focuses on improving materials and distributing control of TEC arrays such that the efficiency over the whole chip improves.
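
What separates these approaches quantitatively is the heat transfer coefficient h in Newton's law of cooling, q = h · A · ΔT. A minimal sketch with rough textbook orders of magnitude for h (the contact area and temperature difference are invented for illustration; none of these numbers come from PNNL):

```python
# Newton's law of cooling, q = h * A * dT, with rough textbook ranges for
# the heat transfer coefficient h. The contact area and temperature
# difference are hypothetical; this is not PNNL data.

def heat_removed_w(h_w_m2k: float, area_m2: float, delta_t_k: float) -> float:
    """Heat removed in watts across a surface at steady state."""
    return h_w_m2k * area_m2 * delta_t_k

AREA = 0.01     # 10 cm x 10 cm contact surface (hypothetical)
DELTA_T = 40.0  # coolant-to-chip temperature difference in kelvin (hypothetical)

coolants = {
    "forced air": 100,                # typical orders of magnitude, W/m^2.K
    "single-phase liquid": 5_000,
    "phase change (boiling)": 50_000,
}

for name, h in coolants.items():
    print(f"{name:>24}: ~{heat_removed_w(h, AREA, DELTA_T):,.0f} W")
```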

Capturing Waste Heat

Reusing the waste heat from a data center may not make the server room itself more efficient, but depending on how the heat is reused, it can save a company a significant sum of money. In its report to Congress last year on data center energy consumption, the federal Environmental Protection Agency suggested the practice. And the idea has gained traction, according to Mark Fontecchio of SearchDataCenter.com.

For example, at Quebecor, a media company in Winnipeg, Canada, engineers have taken the heat from the 2,500-square-foot data center on the ground floor and used it to heat other parts of the building.

Winnipeg's cool climate also offered free cooling, so engineers installed air-side economizers that draw in outside air. The economizers include baffles that open to varying degrees depending on the outside temperature and how much cooling the data center needs.
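
How such a baffle might modulate can be sketched as a simple mixing calculation (entirely hypothetical; the exhaust temperature and supply setpoint below are assumptions, not Quebecor's actual figures):

```python
# A toy model of the economizer's mixing baffle, invented for illustration.
# The baffle admits a fraction f of outside air and recirculates (1 - f)
# of warm exhaust so the blend hits a supply-air setpoint:
#   setpoint = f * outside + (1 - f) * exhaust

def outside_air_fraction(outside_c: float,
                         exhaust_c: float = 35.0,
                         setpoint_c: float = 18.0) -> float:
    """Fraction of outside air (0..1) needed to hit the supply setpoint."""
    if outside_c >= setpoint_c:
        return 1.0  # outside air alone can't overcool; run the baffle wide open
    # Solve the mixing equation for f; colder outside air needs less of it.
    return (exhaust_c - setpoint_c) / (exhaust_c - outside_c)

for t in (-25, -10, 0, 10, 17):
    print(f"outside {t:>4} C -> {outside_air_fraction(t):.0%} outside air")
```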

The air warms as it cycles through the roughly 100 eight-way servers, then rises into an overhead plenum, where about 10% of it is recirculated to temper the outside air coming into the data center.

Another duct runs from the exhaust plenum to the intake duct of the editorial offices upstairs; Quebecor also added a second thermostat there (the first controls the traditional heating furnaces). That route uses up another 60% of the waste heat, and the data center dumps the remaining 30% into the adjacent warehouse.
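
Those three shares account for the entire exhaust stream. As simple bookkeeping (the IT load is a hypothetical figure; the article gives only the percentages):

```python
# The waste heat budget described above as simple bookkeeping. The article
# gives only the percentages; the 250 kW IT load is a hypothetical figure.
server_load_kw = 250

budget = {
    "recirculated to temper incoming outside air": 0.10,
    "ducted to the upstairs editorial offices": 0.60,
    "dumped into the adjacent warehouse": 0.30,
}
assert abs(sum(budget.values()) - 1.0) < 1e-9  # shares cover all exhaust heat

for destination, share in budget.items():
    print(f"{share:.0%} (~{share * server_load_kw:.0f} kW) {destination}")
```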



About the Author

Timothy B. Hurst is the founder of ecopolitology and the executive editor at LiveOAK Media, a media network about the politics of energy and the environment, green business, cleantech, and green living. When not reading, writing, thinking or talking about environmental politics with anyone who will listen, Tim spends his time skiing in Colorado's high country, hiking with his dog, and getting dirty in his vegetable garden.



  • Mikito Ohara

    I can sense that heat exchangers and cooling towers have something to do with this idea…

    Cooling Towers Perth

  • Pingback: Yahoo is Crowing Over New Energy Efficient Chicken Coop…Er, Data Center, That Is – CleanTechnica

  • sandrar

    Hi! I was surfing and found your blog post… nice! I love your blog. 🙂 Cheers! Sandra. R.

  • LH

    I’m interested in the liquid metal cooling idea. I know of a company called nanocoolers that proposed a cooling system that used gallium as the coolant. They couldn’t get any traction with the idea and have since gone out of business. Is there someone else that’s working on it again?

    Also, the industry is working very hard to promote and improve energy efficiency. There are two relevant industry agencies: ASHRAE (American Society of Heating, Refrigerating and Air Conditioning Engineers) technical committee 9.9 and The Green Grid. TC 9.9 has published several books: Best Practices for Datacom Facility Energy Efficiency; High Density Data Centers – Case Studies and Best Practices; and Liquid Cooling Guidelines for Datacom Equipment Centers, all of which can be purchased at the ASHRAE.com bookstore. They are also hosting 5 free workshops in NY, http://www.ashrae.org/pressroom/detail/16615. The Green Grid's website is thegreengrid.org.

  • Al

    A couple of quick points:

    1) While industry is using some sub-optimal AC/DC power supplies, power supply efficiency may not be limited to the 75% you referenced.

    “Power-converter efficiencies are in the upper nineties and quickly approaching the magic 100% barrier.” (Lou Pechi, Power-One, Camarillo, CA, in Hearst Electronic Products, Monday, June 30th, 2008, http://www.power-one.com)

    2) A more easily achievable goal would be to increase server utilization from the “6%” figure. Virtualization is available for most server platforms and increases the usage factor significantly. This approach to reducing the heat load in a data center yields true heat savings (planning figures are about 10 to 1; that is, one virtualized server can handle the load of 10 current servers).

    3) New-technology hard drives, i.e. solid-state drives (SSDs), decrease the heat load in a data center by reducing the power needed to spin drives. While still fairly expensive compared to conventional drives, SSDs can quickly pay for themselves through reduced power and heat. If I remember correctly, they require about 10% of the power of a conventional HD.

  • Tom Nats

    This is a great article and it shows how the times are changing.

    We are currently undertaking an air-side economizer project for our data center. You can read more about it here: http://redrocksdatacenter.com/green and we’ve started a blog to cover the construction: http://ae.redrocksdatacenter.com

    In addition, we’ll begin construction on a new building next year that will be 100% DC. We sit on over 85 acres and are planning on putting up more than 30 kW of solar arrays. These are fun projects and hopefully other data center owners will take notice.

    Tom Nats

    Red Rocks Data Center

    rrdc_info @ redrocksdatacenter.com

  • Jimmy Dolittle

    The hospital I work for has spent a LOT of money on HVAC for their server rooms. I am talking a LOT of moola.

    JT

    http://www.FireMe.to/udi

  • Kevin

    Uh, heat sinks, fans, liquid cooling, etc., don’t make things more efficient. They simply provide a way to move heat from one place to another. Once the heat is given up, it is lost with respect to being useful computationally. I think you’re not understanding this important fact.

  • Ov3rTheHill

    …oh and transformers don’t work on DC. So the inside of your server, which needs oh I dunno maybe +5VDC at 100 Amps, +12VDC at 5 Amps, +1.blah VDC at a few Amps all out of one power supply is NOT going to get that magically from ONE DC power bus in the data center. It’s going to get it the way it has for the last 30 years, with an efficient switching power supply. These rectify the AC line, make high freq AC with an oscillator, use a toroidal transformer with multiple secondary windings to supply raw low volt AC, and have rectifiers, filters and regulation after each winding to supply the different regulated DC voltages needed inside the box. Now… the DC power bus proposal would ONLY eliminate the front-side diodes and caps rectifying the AC input power, i.e. converting it to DC. So every switching power supply takes lo freq hi volt lo current AC in, rectifies it to hi volt lo current DC, uses an oscillator to convert that to low current high volt AC, uses a transformer to step that down to various lo volt AC high current outputs, each of which is independently rectified (very efficiently with synchronous rectifiers) to the various lo volt high current DC power levels needed inside the server box. So the proposed HV DC power bus would only eliminate the front side AC to DC rectification in the switch mode power supply, but not the rest of it. No… an idle CPU is still burning watt-hours, and I agree that virtualization will yield far more efficiency gains than going to a non-standard-for-the-next-10-years power distribution bus. We will pay thru the nose for that, until economies of scale bring its cost down, and its… oh i dunno… ?5%? power savings might take 30 years to recoup? Show us what THIS bright idea will cost and what real percentage power savings it will bring. Show us some real numbers, and we will compare that to unplugging 2 or 3 servers because virtualization might let us do that, depending on the application loads.

  • Ov3rTheHill

    Seems to me a WHOLE LOT of context is missing from this article. Going to a DC power bus for distribution to servers and networking gear means the data center’s UPS doesn’t need an inverter on its backside. And the individual pieces of gear (servers, switches) don’t need a rectifier on the front side of their switching power supplies. That’s all. And if it were a relatively high voltage bus, say 350 VDC, the busses and power cables would need 1/3 the copper of standard 115 VAC, again not a biggie in efficiency. A new industry standard, with its own style of power connectors and with a whole lot more careful attention to polarity. Plug in your server to this power bus backwards and it’s toast, not so with an AC power bus. When it comes to heat production in data centers, rectification (AC to DC, in each power supply) and inversion (DC to AC, in the continuously online UPS) aren’t the first things that come to mind… unless one makes power distribution equipment for a living and does not make servers or networking gear. The telcos have used 48 VDC for years, but that’s because the original exchanges really did run on batteries, 24 lead-acid cells in series. When it comes to wasted energy, a lost Watt-hour is a lost Watt-hour, whether it originally came from DC or AC.

  • JP

    Mr. Sinister and Jweezy — I think you doods missed the point about AC/DC. The author points out that the components run on DC already. Why have redundant power supplies in each server when the data center could theoretically switch the power for all devices centrally? Consider one big power supply (and of course a backup) for all devices instead of hundreds or thousands of small power supplies in each chassis.

    http://en.wikipedia.org/wiki/Economies_of_scale

  • Armando

    Many data centers are DC-supplied, replacing the 120-volt power supplies with 48-volt models that connect to a DC bus bar in the system’s rack. This is considerably more efficient and also allows the heat generated by the power supply to be contained separately from the rest of the hardware.

    You are probably connecting to the internet through hardware running on 48-volt DC power. Phone and cable companies almost exclusively use these types of systems, and have you looked into Cisco equipment lately? You can order almost any router from them with a DC power supply instead of AC. So please, don’t post your stupidity unless you have some idea of what you are talking about.

  • Mr. McElwain

    Telco operations had been all DC for quite a while.

    Essentially telephone switching centers running at 48V DC were the original computer data centers.

    With the switch to cheaper generic components Telcos have relaxed their requirement for DC systems and have migrated over the years to AC.

    It would be interesting to see the world switch back to the Telco 48V standard!

  • “The energy used by the nation’s servers and data centers is growing at an unsustainable rate. Not only that, but web servers are notoriously inefficient. For example, computer servers are used at only 6 percent of their capacity on average”

    I believe in the future data centers will deploy more of a virtual private system approach on large scale servers. It would greatly improve efficiency.

  • Ken Rice

    DC Power… OK people, what 99% of you don’t realize is that the components INSIDE your computers (desktops, laptops, servers, whatever) ARE DC… that’s the whole point of the power supply: take 110/220 VAC in and convert it to a couple of different DC voltages…

    Now, having used DC power for years in telecom and telecom-related environments, I don’t see how people can say there is not expertise out there for this stuff in the data center… one just has to visit a place like Level3’s Courtland St data center in Atlanta to see that it is possible…

    Not only is DC possible, but it reduces the complexities of redundant power via elimination of costly and inefficient DC-to-AC inverters…

  • Moltar

    There are already many servers available with DC power. You can set up today an enterprise-class datacenter with all DC powered components, from servers to switches to routers to storage networks. IBM, Sun, and I’m sure many others have commodity servers with DC options. See IBM X3650 T for an example.

  • Mr. Sinister

    It’s important to realize that data centers aren’t composed of top-secret, specialized supercomputers. They’re built from the same servers, routers, disk drives, etc. that are used by businesses and individuals every day…just a whole lot more of them. These components are mass produced for the marketplace. It would be impractical to build components that operate solely on DC power when there is no customer base, and impractical to convert a data center to DC power distribution when there is no equipment available to take advantage of it…simple chicken-and-egg situation.

    Yes, there is a lot of room for improvement in the efficiency of computing equipment. However, it’s not fair to single out data centers as wasteful power hogs. Don’t forget about the millions of personal computers in homes and offices around the world. Every one of them has a little box inside that converts AC to DC (quite inefficiently, if you believe the article), dumping waste heat in the process. And every one of them has a processor chipset inside which also wastes energy. Time and effort are better spent improving the efficiency of the components which make up the data centers than figuring out more elaborate ways to cool an inefficient design. Incidentally, equipment manufacturers are already doing this… multi-core processors that operate at lower speeds, for instance.

    Average server utilization is also an irrelevant statistic. Yes, on average, a server may use only 6 percent of its capacity. However, these systems must be over-designed to handle peak load for the simple reason that Mr. Websurfer won’t tolerate having to wait more than five seconds for his MySpace page to load. For certain, these servers operate near full utilization at peak times of the day, even if they are lightly loaded at other times. Data centers are designed as they are because of the demands we place on our IT infrastructure. We want it cheap, fast, and reliable… it’s hard to get all that and high efficiency too.

  • jweezy-

    Really? Retarded? Ouch.

    I strongly suggest you re-read the post. I would especially urge you to find the portion of the text where I propose the idea of “ALL DC POWER” [your quotes]. If you are able to find that, let me know and I will buy you a beer.

    At no point in the post do I suggest we convert to all DC power and replace every electronic device in every commercial and residential building.

    I mean no disrespect, but I honestly have no idea what you are talking about.

  • jweezy

    timmy – you’re retarded!!! do some research man!?

    YOU’RE IDEA OF “ALL DC POWER” IS COMPLETELY IMAGINARY… DO YOU EXPECT TO REPLACE EVERY ELECTRONIC DEVICE IN EVERY COMMERCIAL AND RESIDENTIAL BUILDING!? DC POWER IS ONLY USEFUL (ECONOMICAL) IN HVDC TRANSMISSION LINES, NOT DISTRIBUTION LINES. “AC OR DC?” WAS DECIDED AND PROVEN HUNDREDS OF YEARS AGO BETWEEN EDISON AND TESLA!
