
Folding@Home on GeForce RTX 3090 Review

Hi everyone, sorry for the delay in blog posts. Electricity in Connecticut has been so expensive lately that except for our winter heating Folding@Home cluster, it wasn’t affordable to keep running all those GPUs (even with our solar panels, which is really saying something). However, I did manage to get some good data on the top-tier Nvidia RTX 3090, which I got during COVID as the GPU in a prebuilt HP Omen gaming desktop. I transplanted the 3090 into my benchmark desktop, so these stats are comparable to previous cards I’ve tested.

Wait, what are we doing here?

For those just joining, this is a blog about optimizing computers for energy efficiency. I’m running Folding@Home, a distributed computing research project that uses your computer to help fight diseases such as cancer, COVID, and a host of other ailments. For more information, check out the project website here: https://foldingathome.org/

Look at this bad boy!

This is the HP OEM version of an RTX 3090. I was impressed that it had lots of copper heat pipes and a metal back plate. Overall this was a very solid card for an OEM offering.

HP OEM Nvidia RTX 3090 installed in my AMD Ryzen 9 3950X benchmark desktop

At the time of my testing, the RTX 3090 was the top-tier card from Nvidia’s new Ampere line. They have since released the 3090 Ti, which is ever so slightly faster. To give you an idea of where the RTX 3090 stacks up compared to the previous cards I have tested, here is a table. Note that 350 watt TDP! That is a lot of power for this air cooler to dissipate.

The Test

I ran Folding@Home on my benchmark desktop in Windows 10, using Folding@Home client 7.6.13. I was immediately blown away by the insane Points Per Day (PPD) that the 3090 can spit out! Here’s a screen shot of the client, where the card was doing a very impressive 6.4 million PPD!

What was really interesting about the 3090, though, was how much variation there was in performance depending on the size of the molecule being worked on. Very large molecules with high atom counts benefited greatly from the number of CUDA cores on this card, and it kicked butt in both raw performance (PPD) and efficiency (PPD/Watt). Smaller molecules, however, did not fully utilize this card’s impressive potential. This resulted in lower efficiency and more wasted power. I would assume that running two smaller Ampere cards, for example the 3080, on small models would be more efficient than using the 3090 for small models, but I don’t have any 3080s to test that assumption (yet!).

In the plots below, you can see that the smaller model (89k atoms) resulted in a peak PPD of about 4 million, as opposed to the 7 million PPD with a 312k atom model. PPD/Watt at 100% card power was also lower for the smaller model, coming in at about 10,000 PPD/Watt vs. 16,500 PPD/Watt for the larger one. These are still great efficiency numbers, which shows how far GPU computing has come since previous generations.

Reduce GPU TDP Power Target to Improve Efficiency

I’ve previously shown how GPUs are set up for maximum performance out of the box, which makes sense for video gaming. However, if you are trying to maximize the energy efficiency of your computational machines, reducing the power target of the GPU can result in massive efficiency gains. The GeForce RTX 3090 is a great example of this. When solving large models, this beast of a card benefits from throttling the power down, gaining 2.35% in energy efficiency with the power target set to 85%. The huge improvement, however, comes when solving smaller models. When running the 89k atom work unit, I got a whopping 29% efficiency improvement by setting the power target to 55%, with only a 14% performance reduction! Since the F@H project gives out a lot of smaller work units in addition to some larger ones, I chose to run my machine at a 75% power target. On average, this splits the difference and gives a noticeable efficiency improvement without sacrificing too much raw PPD. In the RTX 3090’s case, a 75% power target massively reduced the power draw of the computer (wall consumption dropped from 434 to 360 watts), as well as the heat and noise coming out of the chassis. This makes for a happier office environment and a happier computer that will last longer!
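For anyone who wants to do this kind of bookkeeping themselves, here is a minimal Python sketch (my own throwaway helper, not the spreadsheet behind my plots) showing how the efficiency trade-off is computed. The 100% numbers (roughly 4 million PPD and 434 watts at the wall) and the 14% / 29% deltas at a 55% target come from my 89k atom testing; the 55% wall-power value is an assumed number chosen just to illustrate the math.

```python
# Sketch: compare raw output and efficiency at different GPU power targets.
# Values marked "assumed" are illustrative, not measured.

def efficiency(ppd, watts):
    """Folding efficiency in Points Per Day per watt of wall power."""
    return ppd / watts

runs = {
    # power target %: (points per day, wall power in watts)
    100: (4_000_000, 434),         # measured, 89k atom model
    55:  (4_000_000 * 0.86, 289),  # 14% PPD reduction; wall watts assumed
}

base_ppd, base_watts = runs[100]
base_eff = efficiency(base_ppd, base_watts)

for target, (ppd, watts) in sorted(runs.items(), reverse=True):
    eff = efficiency(ppd, watts)
    print(f"{target:>3}% power target: {ppd / 1e6:.2f}M PPD, {watts:.0f} W, "
          f"{eff:,.0f} PPD/W ({(ppd / base_ppd - 1) * 100:+.0f}% PPD, "
          f"{(eff / base_eff - 1) * 100:+.0f}% efficiency)")
```

With those inputs, the 55% row comes out to about a 14% PPD loss for a 29% efficiency gain, which is exactly the trade-off described above.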

Tuning Results: 89K Atoms (Small Model)

Here are the tuning plots for a smaller molecule. In all cases, the X-axis is the power target, set in the Nvidia Driver. 100% corresponds to 350 Watts in the case of the RTX 3090.

Tuning Results: 312K Atoms (Large Model)

And here are the tuning results for a larger molecule.

Overall Results

Here are the comparison results to the previous hardware configurations I have tested. Note that now that the F@H client supports enabling CUDA, I did some tests with CUDA on vs. off with the RTX 2080 Ti and the 3090. Pro Tip: MAKE SURE CUDA IS ON! It really speeds things up and also improves energy efficiency.

The key takeaways from the plots below are that the 3090 offers 50% more performance (PPD) than the 2080 Ti, and is almost 30% more energy efficient while doing it! Note this does not mean this card sips power…it actually uses more watts than any of the other cards I’ve tested. However, it does a lot more computation with those watts, so it is putting the electricity to better use. A data center or workstation can thus get through more work in a shorter amount of time with 3090s than with other cards, and so use less power overall to solve a given amount of work. This is better for the environment!

Nvidia RTX 3090 Folding@Home Performance (green bars) compared to other hardware configurations
Nvidia RTX 3090 Folding@Home Total System Power Consumption (green bars) compared to other hardware configurations
Nvidia RTX 3090 Folding@Home Energy Efficiency (green bars) compared to other hardware configurations.

Conclusion

The flagship Ampere architecture Nvidia GeForce RTX 3090 is an excellent card for compute applications. It does draw a ton of power, but this can be mitigated by reducing the power target in the driver to gain efficiency and reduce heat and noise. In the case of Folding@Home disease research, this card is a step change in both performance and energy efficiency, offering 50% more compute power and 30% more efficiency than the previous generation. I look forward to testing out other Ampere cards, as well as the new 40xx “Lovelace” architecture, if Eversource ever drops the electric rate back to normal levels in CT.


How to Make a Folding@Home Space Heater (and why would you want to?)

My normal posts on this site are all about how to do as much science as possible with Folding@Home, for the least amount of power. This is because I think disease research, while a noble and essential cause, shouldn’t be done without respecting the environment.

With that said, I think there is a use case for a power-hungry, inefficient Folding@Home computer. Namely, as a space heater for those in colder climates.

The logic is this: Running Folding@Home, or any other piece of software, makes your computer do work. Electricity flows through the circuits, flipping tiny silicon switches, and producing heat in the process. Ultimately all of the energy that flows into your computer comes back out as heat (well, a small amount comes out as light, or electromagnetic radiation, or noise, but all of those can and do get converted back into heat as they strike things in the room).

Have you ever noticed how running your gaming computer with the door to your room closed makes your feet nice and toasty in the winter? It’s the same idea. Here, one of my high-performance rigs (dual NVidia 980 Ti GPUs) is silently humming away, putting off about 500 watts of pleasant heat. My son is investigating:

My Folding@Home Space Heater Experiment

Folding@Home uses CPUs and GPUs to run molecular dynamics models to help researchers understand and fight diseases. You get the most points per day (PPD) by using cutting-edge hardware, but the Folding@Home Consortium and Stanford University openly encourage everyone to run the software on whatever they happen to have.

With this in mind, I started thinking about all the old hardware that is out there…CPUs and graphics cards that are destined for landfills because they are no longer fast enough to do any useful gaming or decode 4K video. People describe this type of hardware as “bricks” or “space heaters”–useful for nothing other than wasting power.

That gave me an idea…

It didn’t take me long to find a sweet deal on an nForce 680i-based system on eBay for $60 shipped (an EVGA board with the Nvidia nForce 680i chipset, supporting three full-length PCI-E X16 slots). I swapped out the Core 2 Duo that this machine came with for a Core 2 Quad, and purchased four Fermi-based Nvidia graphics cards, plus a used 1300 Watt Seasonic 80+ Gold power supply. All of this was amazingly cheap; the beautiful Antec case alone was worth the $60 I paid for it and everything that came with it. Because I knew delivering lots of power would be critical here, I spent most of the money on a high-end power supply (also used, from eBay). Later on, I found that I needed to also upgrade the cooling (read: cut a hole in the side panel and strap on some more fans).

  • Antec Mid-Tower Case + Corsair 520 Watt PSU, EVGA 680i motherboard, Core 2 Duo CPU, 4 GB Ram, CD Drives, and 4 Fans = $60
  • 2x EVGA Nvidia GeForce GTX 480 graphics cards: $40
  • 1 x EVGA NVidia GeForce GTX 580 Graphics Card: $50
  • 1 x EVGA Nvidia GeForce GTX 460 Graphics Card: $20
  • 1 x PCI-E X1 to X16 Riser: $10
  • 1 x Core 2 Quad Q6600 CPU (Socket 775) – $6
  • 1 x Seasonic 1300 Watt 80+ Gold Modular Power Supply: $90
  • 2 x Noctua 120 MM fans + custom aluminum bracket (for modifying side panel): $60
  • 1 x Arctic Cooling Freezer Tower Cooler – $10
  • 1 x Western Digital Black 640GB HDD – $10

Total Cost (Estimated): $356

This is the cost before I sold some of the parts I didn’t need (Core 2 Duo, Corsair PSU, etc).

Here is a shot of the final build. It took a bit of tweaking to get it to this point.

The finished Folding@Home space heater with all four GPUs installed

Used Parts Disclaimer!

Note that when dealing with used parts on eBay, it’s always good to do some basic service. For the GPUs in this build, I took them apart, cleaned them, applied fresh thermal paste (Arctic MX-4), and re-assembled. It was good that I did…these cards were pretty gross, and the decade-old thermal paste was dried on from years of use.

 

I mean, come on now, look at the dust cake on the second GTX 480! Clean your graphics cards, random eBay people!

GTX 480 Dust

Here’s how the 3 + 1 GPUs are set up. The two GTX 480s and the GTX 580 are on the mobo in the X16 slots. I remotely mounted the GTX 460 in the drive bay. I used blower-style (slot exhaust) cards on purpose here, because they exhaust 100% of the hot air outside the case. Open-fan style cards would have overheated instantly in this setup.

To keep costs down, I just used Ubuntu Linux as the operating system. I configured the machine for 4-slot GPU folding using the proprietary Nvidia drivers. Although I ultimately control all of my remote Linux machines with TeamViewer, it is helpful to have a portable monitor and a combo wireless keyboard/mouse for initial configuration and testing. In the shot below (of an earlier config), I learned a lot just trying to get the machine stable with 3 cards.


Initial Testing on the Space Heater (3 GPUs installed). This test showed me that I needed better CPU cooling (hence I chucked that stock Intel cooler)

I also did some thermal testing along the way to make sure things weren’t getting too hot. It turns out this testing was a bit misleading, because the system was running a lot cooler with the side panel off than with it on.

Some Thermal Camera Images During Initial Burn-In (3 GPUs, stock CPU cooler):

Now that’s some heat coming out of this beast! Thankfully, the upgraded 14-gauge power plug and my watt meter aren’t at risk of melting, although they are pretty warm.

Once I had the machine up and running with all four GPUs in the final configuration, I found that it produced about 55-95K PPD on average (depending on the work unit), with the following breakdown:

  • GTX 460: 10-20K PPD
  • GTX 480: 20-30K PPD each
  • GTX 580: 25-45K PPD

Power consumption, as measured at the wall, ranged from 900 to 1000 watts with all 4 GPUs engaged. By turning different GPUs on and off, I could get varying levels of power, from about 200 watts at idle on up. I typically ran it with one 580 and one 480 folding, for an average power consumption of about 600 watts.

Space heater power consumption, measured at the wall

After running the machine for a while, my room was nice and toasty, as expected!

One thing that I should mention was the effect of the two additional intake fans that I mounted in the side panel. Originally I did not have these, and the top graphics card in the stack was hitting 97 degrees C according to the onboard monitoring! After modding this custom side-intake into the case (found a nice fan bracket on Amazon, and put my dremel tool to good use), the temps went down quite a lot. I used fan grilles on the inside of the fans to keep internal cables out of them, and mesh filters on the outside to match the intake filters on the rest of the case.

 

The top card stays under 85 degrees C (with the fan at 50%). The middle card stays under 80 degrees C, and the bottom card runs at 60 degrees C. The GTX 460 mounted in the drive bay never goes over 60 degrees C, but it’s a less powerful card and is mounted on the other side of the case.

Here’s some more pictures of the modded side panel, along with a little cooling diagram I threw together:

PPD, Wattage, and Efficiency Comparison

I debated putting these plots in here, because the point of this machine was not primarily to make points (pun intended), or to be efficient from a PPD/Watt perspective. The point of this machine was to replace the 1500 watt space heater I use in the winter to keep a room warm.

As you can see, the scientific production (PPD) on this machine, even with 4 GPUs, is not all that impressive in 2020, since the GPUs being used are ten years old. Similarly, the efficiency (PPD/Watt) is terrible. There’s no surprise there, since it averages just under 1000 watts of power consumption at the wall!

Conclusion

It is totally possible to build a (relatively) inexpensive desktop computer out of old, used parts to use as a space heater. If the primary goal is to make heat, then this might not be a bad idea (although at $350, it still costs way more than a $20 heater from Walmart). The obvious benefit is that this sort of space heater is actually doing something useful besides keeping you warm (in this case, helping scientists learn more about diseases thanks to Folding@Home).

Other benefits that I found were the remote control (TeamViewer), which lets me use my cellphone to turn GPUs on and off to vary the heat output. Also, I think running this machine for extended durations at its medium-high setting (700 watts or so) is much easier on the electrical wiring in my house than the constant cycling on and off of a traditional 1500 watt space heater.

From an environmental standpoint, you can do much worse than using electric heat. In my case, electric space heaters make a lot of sense, especially at night. I can shut off the entire heating zone (my house only has two zones) to the upstairs and just keep the bedroom warm. This drastically reduces my fossil fuel usage (good old New England, where home heating oil is the primary method of keeping warm in the winter). Since my house has an 8.23 KW solar panel array on the roof, a lot of my electricity comes directly from the sun, making this electric heat solution even greener.

Parting Thoughts:

I would not recommend running a machine like this during the warmer months. If warm air is not wanted, all the waste heat from this machine will do nothing but rack up your power bill for relatively little science being done. If you want to run an efficient summer-time F@H rig that uses low power (so as not to fight your AC), check out my article on the GTX 1660 and 1650.

In a future article, I plan to show how I actually saved on heating costs by running Folding@Home space heaters all last winter (with a total of seven Folding@Home desktops placed strategically throughout my house, so that I hardly had to burn any oil).

 

Folding@Home on Turing (NVidia GTX 1660 Super and GTX 1650 Combined Review)

Hey everyone. Sorry for the long delay (I have been working on another writing project, more on that later…). Recently I got a pair of new graphics cards based on Nvidia’s new Turing architecture. This has been advertised as being more efficient than the outgoing Pascal architecture, and is the basis of the popular RTX series Geforce cards (2060, 2070, 2080, etc). It’s time to see how well they do some charitable computing, running the now world-famous disease research distributed computing project Folding@Home.

Since those RTX cards with their ray-tracing cores (which do nothing for Folding) are so expensive, I opted to start testing with two lower-end models: the GeForce GTX 1660 Super and the GeForce GTX 1650.

 

These are really tiny cards, and should be perfect for some low-power summertime folding. Also, today is the first time I’ve tested anything from Zotac (the 1650). The 1660 Super is from EVGA.

GPU Specifications

Here’s a quick table I threw together comparing these latest Turing-based GTX 16xx series cards to the older Pascal lineup.

Turing GPU Specs

It should be immediately apparent that these are very low power cards. The GTX 1650 has a design power of only 75 watts, and doesn’t even need a supplemental PCI-Express power cable. The GTX 1660 Super also has a very low power rating at 125 Watts. Due to their small size and power requirements, these cards are good options for small form factor PCs with non-gaming oriented power supplies.

Test Setup

Testing was done in Windows 10 using Folding@Home Client version 7.5.1. The Nvidia Graphics Card driver version was 445.87. All power measurements were made at the wall (measuring total system power consumption) with my trusty P3 Kill-A-Watt Power Meter. Performance numbers in terms of Points Per Day (PPD) were estimated from the client during individual work units. This is a departure from my normal PPD metric (averaging the time-history results reported by Folding@Home’s servers), but was necessary due to the recent lack of work units caused by the surge in F@H users due to COVID-19.

Note: This will likely be the last test I do with my aging AMD FX-8320e based desktop, since the motherboard only supports PCI Express 2.0. That is not a problem for the cards tested here, but Folding@Home on very fast modern cards (such as the RTX 2080 Ti) shows a modest slowdown if the cards are limited by PCI Express 2.0 x16 (around 10%). Thus, in the next article, expect to see a new benchmark machine!

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: EVGA GTX 1660 Super / Zotac GTX 1650 (cards under test)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

Goal of the Testing

For those of you who have been following along, you know that the point of this blog is to determine not only which hardware configurations can fight the most cancer (or coronavirus), but to determine how to do the most science with the least amount of electrical power. This is important. Just because we have all these diseases (and computers to combat them with) doesn’t mean we should kill the planet by sucking down untold gigawatts of electricity.

To that end, I will be reporting the following:

  • Net Worth of Science Performed: Points Per Day (PPD)
  • System Power Consumption (Watts)
  • Folding Efficiency (PPD/Watt)

As a side note, I used MSI Afterburner to reduce the GPU power limit of the GTX 1660 Super and GTX 1650 to the minimum allowed by the driver / board vendor (in this case, 56% for the 1660 Super and 50% for the 1650). This is because my previous testing, plus results reported by various people in the Folding@Home forums and elsewhere, has shown that reducing the power cap on the card can provide an efficiency boost. Let’s see if that holds true for the Turing architecture!

Performance

The following plots show the two new Turing architecture cards relative to everything else I have tested. As can be seen, these little cards punch well above their weight class, with the GTX 1660 Super and GTX 1650 giving the 1070 Ti and 1060 a run for their money. Also, the power throttling applied to the cards did reduce raw PPD, but not by too much.

Nvidia GTX 1650 and 1660 performance

Power Draw

This is the plot where I was most impressed. In the summer, any Folding@Home I do directly competes with the air conditioning. Running big graphics cards, like the 1080 Ti, makes my power bill go crazy, not only because of the computer itself but also because of the increased air conditioning required.

Thus, for people in hot climates, extra consideration should be given to the overall power consumption of your Folding@Home computer. With the GTX 1660 Super running in reduced power mode, I was able to get a total system power consumption of just over 150 watts while still making over 500K PPD! That’s not half bad. On the super low power end, I was able to beat the GTX 1050’s power consumption level…getting my beastly FX-8320e 8-core rig to draw 125 watts total while folding was quite a feat. The best thing was that it still made almost 300K PPD, which is well above last generation’s small cards.

Nvidia GTX 1650 and 1660 Power Consumption

Efficiency

This is my favorite part. How do these low-power Turing cards do on the efficiency scale? This is simply looking at how many PPD you can get per watt of power draw at the wall.

Nvidia GTX 1650 and 1660 Efficiency

And…wow! Just wow. For about $220 new, you can pick up a GTX 1660 Super and be just as efficient as the previous generation’s top card (GTX 1080 Ti), which still goes for $400-500 used on eBay. Sure, the 1660 Super won’t be as good of a gaming card, and it makes only about two-thirds the PPD of the 1080 Ti, but on an energy efficiency metric it holds its own.

The GTX 1650 did pretty well too, coming in somewhere towards the middle of the pack. It is still much more efficient than the comparable market-segment card of the previous generation (GTX 1050), but it is hampered overall by not being able to return work units to the scientists as quickly, since fast work earns bonus points (the Quick Return Bonus).

Conclusion

NVIDIA’s entry-level Turing architecture graphics cards perform very well in Folding@Home, both from a performance and an efficiency standpoint. They offer significant gains relative to legacy cards, and can be a good option for a budget Folding@Home build.

Join My Team!

Interested in fighting COVID-19, Cancer, Alzheimer’s, Parkinson’s, and many other diseases with your computer? Please consider downloading Folding@Home and joining Team Nuclear Wessels (54345). See my tutorial here.

Interested in Buying a GTX 1660 or GTX 1650?

Please consider supporting my blog by using one of the below Amazon affiliate search links to find your next card! It won’t cost you anything extra, but will provide me with a small part of Amazon’s profit so I can keep paying for this site.

GTX 1660 Amazon Search Affiliate Link!

GTX 1650 Amazon Search Affiliate Link!

Folding@Home Review: NVIDIA GeForce GTX 1080 Ti

Released in March 2017, Nvidia’s GeForce GTX 1080 Ti was the top-tier card of the Pascal line-up. This is the graphics card that super-nerds and gamers drooled over. With an MSRP of $699 for the base model, board partners such as EVGA, Asus, Gigabyte, MSI, and Zotac (among others) all quickly jumped on board (pun intended) with custom designs costing well over the MSRP, as well as their own takes on the reference design.

GTX 1080 Ti Reference EVGA

EVGA GeForce GTX 1080 Ti – Reference

Three years later, with the release of the RTX 2080 Ti, the 1080 Ti still holds its own, and still commands well over $400 on the used market. These are beastly cards, capable of running most games with max settings in 4K resolutions.

But, how does it fold?

Folding@Home

Folding at home is a distributed computing project originally developed by Stanford University, where everyday users can lend their PCs’ computational horsepower to help disease researchers understand and fight things like cancer, Alzheimer’s, and most recently the COVID-19 coronavirus. Users’ computers solve molecular dynamics problems in the background, which help the Folding@Home Consortium understand how proteins “misfold” to cause disease. For computer nerds, this is an awesome way to give (money–>electricity–>computer work–>fighting disease).

Folding at home (or F@H) can be run on both CPUs and GPUs. CPUs provide a good baseline of performance, and certain molecular simulations can only be done there. However, GPUs, with their massively parallel shader cores, can do certain types of single-precision math much faster than CPUs. GPUs provide the majority of the computational performance of F@H.

Geforce GTX 1080 Ti Specs

The 1080 Ti is at the top of Nvidia’s lineup of their 10-series cards.

1080 Ti Specs

With 3584 CUDA Cores, the 1080 Ti is an absolute beast. In benchmarks, it holds its own against the much newer RTX cards, besting even the RTX 2080 and matching the RTX 2080 Super. Only the RTX 2080 Ti is decidedly faster.

Folding@Home Testing

Testing is performed in my old but trusty benchmark machine, running Windows 10 Pro and using Stanford’s V7 Client. The Nvidia graphics driver version was 441.87. Power consumption measurements are taken on the system-level using a P3 Watt Meter at the wall.

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: EVGA 1080 Ti (Reference Design)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

I did extensive testing of the 1080 Ti over many weeks. Folding@Home rewards donors with “Points” for their contributions, based on how much science is done and how quickly it is returned. A typical performance metric is “Points per Day” (PPD). Here, I have averaged my Points Per Day results out over many work units to provide a consistent number. Note that any given work unit can produce more or less PPD than the average, with variation of 10% being very common. For example, here are five screen shots of the client, showing five different instantaneous PPD values for the 1080 Ti.

 

GTX 1080 Ti Folding@Home Performance

The following plot shows just how fast the 1080 Ti is compared to other graphics cards I have tested. As you can see, with nearly 1.1 Million PPD, this card does a lot of science.

1080 Ti Folding Performance

GTX 1080 Ti Power Consumption

With a board power rating of 250 Watts, this is a power hungry graphics card. Thus, it isn’t surprising to see that power consumption is at the top of the pack.

1080 Ti Folding Power

GTX 1080 Ti Efficiency

Power consumption alone isn’t the whole story. Being a blog about doing the most work possible for the least amount of power, I am all about finding Folding@Home hardware that is highly efficient. Here, efficiency is defined as Performance Out / Power In. So, for F@H, it is PPD/Watt. The best F@H hardware is gear that maximizes disease research (performance) done per watt of power consumed.

Here’s the efficiency plot.

1080 Ti Folding Efficiency

Conclusion

The Geforce GTX 1080 Ti is the fastest and most efficient graphics card that I’ve tested so far for Stanford’s Folding@Home distributed computing project. With a raw performance of nearly 1.1 Million PPD in Windows and an efficiency of almost 3500 PPD/Watt, this card is a good choice for doing science effectively.

Stay tuned to see how Nvidia’s latest Turing architecture stacks up.

GTX 460 Graphics Card Review: Is Folding on Ancient Hardware Worth It?

Recently, I picked up an old Core 2 Duo build on eBay for $25 + shipping. It was missing some pieces (graphics card, drives, etc.), but it was a good deal, especially for the all-metal Antec P182 case and the included Corsair PSU + Antec 3-speed case fans. So, I figured what the heck, let’s see if this vintage rig can fold!

Antec 775 Purchase

To complement this old Socket 775 build, I picked up a well loved EVGA GeForce GTX 460 on eBay for a grand total of $26.85. It should be noted that this generation of Nvidia graphics cards (based on the Fermi architecture from back in 2010) is the oldest GPU hardware that is still supported by Stanford. It will be interesting to see how much science one of these old cards can do.

GTX 460 Purchase

I supplied a dusty Western Digital 640 Black Hard Drive that I had kicking around, along with a TP Link USB wireless adapter (about $7 on Amazon). The Operating System was free (go Linux!). So, for under $100 I had this setup:

  • Case: Antec P182 Steel ATX
  • PSU: Corsair HX 520
  • Processor: Intel Core2duo E8300
  • Motherboard: EVGA nForce 680i SLI
  • Ram: 2 x 2 GB DDR2 6400 (800 MHz)
  • HDD: Western Digital Black 640GB
  • GPU: EVGA GeForce GTX 460
  • Operating System: Ubuntu Linux 18.04
  • Folding@Home Client: V7

I fired up folding, and after some fiddling I got it running nice and stable. The first thing I noticed was that the power draw was higher than I had expected. Measured at the wall, this vintage folding rig was consuming a whopping 220 Watts! That’s a good deal more than the 185 watts that my main computer draws when folding on a modern GTX 1060. Some of this is due to differences in hardware configuration between the two boxes, but one thing to note is that the older GTX 460 has a TDP of 160 watts, whereas the GTX 1060 has a TDP of only 120 Watts.

Here’s a quick comparison of the GTX 460 vs the GTX 1060. At the time of their release, both of these cards were Nvidia’s baseline GTX model, offering serious gaming performance for a better price than the more aggressive x70 and x80-series variants. I threw a GTX 1080 into the table for good measure.

GTX 460 Spec Comparison

GTX 460 Specification Comparison

The key takeaways here are that six years later, the equivalent graphics card to the GTX 460 was over three and a half times faster while using forty watts less power.

Power Consumption

I typically don’t report power consumption directly, because I’m more interested in optimizing efficiency (doing more work for less power). However, in this case, there is an interesting point to be made by looking at the wattage numbers directly. Namely, the GTX 460 (a mid-range card) uses almost as much power as a modern high-end GTX 1080, and uses seriously more power than the modern GTX 1060 mid-range card. Note: these power consumption numbers must be taken with a grain of salt, because the GTX 460 was installed in a different host system (the Core 2 Duo rig) than the other cards, but the results are still telling. This is also consistent with the advertised TDP of the GTX 460, which is 40 watts higher than the GTX 1060.

GTX 460 Power Consumption (Wall)

Total System Power Consumption

Folding@Home Results

Folding on the old GTX 460 produced a rough average of 20,000 points per day, with the normal +/- 10% variation in production seen between work units. Back in 2006 when I was making a few hundred PPD on an old Athlon 64 X2 CPU, this would have been a huge amount of points! Nowadays, this is not so impressive. As I mentioned before, the power consumption at the wall for this system was 220 Watts. This yields an efficiency of 20,000 PPD / 220 Watts = 90 PPD/Watt.

Based on the relative performance, one would think the six-year newer GTX 1060 would produce somewhere between 3 and 4 times as many PPD as the older 460 card. This would mean roughly 60-80K PPD. However, my GTX 1060 frequently produces over 300K PPD. This is due to Stanford’s Quick Return Bonus, which essentially rewards donors for doing science quickly. You can read more about this incentive-based points system at Stanford’s website. The gist is, the faster you return a work unit to the scientists, the sooner they can get to developing cures for diseases, so they award you more points for fast work. As the performance plot below shows, this quick return bonus really adds up, so that a card doing 3-4 times more raw work (GTX 1060 vs. GTX 460 in linear benchmark performance) ends up with roughly 15 times the F@H points production.
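To make that superlinear scaling a little more concrete, here is a small Python sketch of the Quick Return Bonus formula as I understand it from the project’s documentation (credit scales with the square root of k times the timeout divided by the completion time). The base credit, k factor, and timeout below are hypothetical values picked purely for illustration, not the parameters of any real work unit.

```python
import math

def wu_credit(base_credit, k_factor, timeout_days, elapsed_days):
    """Quick Return Bonus: credit = base * max(1, sqrt(k * timeout / elapsed))."""
    bonus = math.sqrt(k_factor * timeout_days / elapsed_days)
    return base_credit * max(1.0, bonus)

# Hypothetical work unit: 10,000 base points, k = 0.75, 10-day timeout.
# A card that finishes 4x faster earns a bigger bonus per unit AND completes
# more units per day, so its PPD grows much faster than 4x.
for name, days in [("slow card", 2.0), ("4x faster card", 0.5)]:
    credit = wu_credit(10_000, 0.75, 10, days)
    print(f"{name}: {credit:,.0f} points per WU, {credit / days:,.0f} PPD")
```

Under these made-up numbers, the card that is 4 times faster ends up with roughly 8 times the PPD; stack a few generations of speedup on top of each other and the 15x gap between the GTX 460 and GTX 1060 stops looking surprising.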

GTX 460 Performance and Efficiency

Old vs. New Graphics Card Comparison: Folding@Home Efficiency and PPD

This being a blog about energy-conscious computing, I’d be remiss if I didn’t point out just how inefficient the ancient GTX 460 is compared to the newer cards. Due to its relatively high power consumption for a midrange card, the GTX 460 is eighteen times less efficient than the GTX 1060, and a whopping thirty-three times less efficient than the GTX 1080.

Conclusion

Stanford eventually drops support for old hardware (anyone remember PS3 folding?), and it might not be long before they do the same for Fermi-based GPUs. Compared with relatively modern GPUs, the GTX 460 just doesn’t stack up in 2020. Now that the 10-series cards are almost four years old, you can often get GTX 1060s for less than $200 on eBay, so if you can afford to build a folding rig around one of these cards, it will be 18 times more efficient and make 15 times more points.

Still, I only paid about $100 total to build this vintage folding@home rig for this experiment. One could argue that putting old hardware to use like this keeps it out of landfills and still does some good work. Additionally, if you ignore bonus points and look at pure science done, the GTX 460 is “only” about 4 times slower than its modern equivalent.

Ultimately, for the sake of the environment, I can’t recommend folding on graphics cards that are many years out of date, unless you plan on using the machine as a space heater to offset heating costs in the winter. More on that later…

Addendum

Since doing the initial testing and outline for this article, I picked up a GTX 480 and a few GTX 980 Ti cards. Here are some updated plots showing these cards added to the mix. The GTX 480 was tested in the Core2 build, and the GTX 980 Ti in my standard benchmark rig (AMD FX-based Socket AM3 system).

Various GPU Power Consumption

GTX 980 and 480 Performance

GTX 980 and 480 Efficiency

I think the conclusion holds: even though the GTX 480 is slightly faster and more efficient than its little brother, it is still leaps and bounds worse than the more modern cards. The 980 Ti, being a top-tier card from a few generations back, holds its own nicely, and is almost as efficient as a GTX 1060. I’d say that the 980 Ti is still a relatively efficient card to use in 2020 if you can get one for cheap enough.

AMD Radeon RX 580 8GB Folding@Home Review

Hello again.

Today, I’ll be reviewing the AMD Radeon RX 580 graphics card in terms of its computational performance and power efficiency for Stanford University’s Folding@Home project. For those that don’t know, Folding@Home lets users donate their computer’s computations to support disease research. This consumes electrical power, and the point of my blog is to look at how much scientific work (Points Per Day or PPD) can be computed for the least amount of electrical power consumption. Why? Because in trying to save ourselves from things like cancer, we shouldn’t needlessly pollute the Earth. Also, electricity is expensive!

The Card

AMD released the RX 580 in April 2017 with an MSRP of $229. This is an updated card based on the Polaris architecture. I previously reviewed the RX 480 (also Polaris) here, for those interested. I picked up my MSI-flavored RX 580 in 2019 on eBay for about $120, which is a pretty nice depreciated value. Those who have been following along know that I prefer to buy used video cards that are 2-3 years old, because of the significant initial cost savings, and the fact that I can often sell them for the same as I paid after running Folding for a while.


MSI Radeon RX 580

I ran into an interesting problem installing this card: at 11 inches long, it was about a half inch too long for my old Raidmax Sagitta gaming case. The solution was to take the fan shroud off, since that was the part sticking out ever so slightly. This involved an annoying amount of disassembly, since the fans actually needed to be removed from the heat sink for the plastic shroud to come off. Reattaching the fans was a pain (you need a teeny screwdriver that can fit between the fan blade gaps to get at the screws by the hub).


RX 580 with Fan Shroud Removed. Look at those heat pipes! This card has a 185 Watt TDP (Board Power Rating). 


RX 580 Installed (note the masking tape used to keep the little side LED light plate off of the fan)


Now That’s a Tight Fit (the PCI Express Power Plug on the video card is right up against the case’s hard drive bays)

The Test Setup

Testing was done on my rather aged, yet still able, AMD FX-based system using Stanford’s Folding@Home V7 client. Since this is an AMD graphics card, I made sure to switch the video card mode to “compute” within the driver panel. This optimizes things for Folding@home’s workload (as opposed to games).

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: MSI Radeon RX 580 8GB
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 19.10.1

 

Performance and Power

I ran the RX 580 through its paces for about a week in order to get a good feel for a variety of work units. In general, the card produced as high as 425,000 points per day (PPD), as reported by Stanford’s servers. The average was closer to 375K PPD, so I used that number as my final value for uninterrupted folding. Note that during my testing, I occasionally used the machine for other tasks, so you can see the drops in production on those days.

RX 580 Client

Example of Client View – RX 580

RX580 History

RX 580 Performance – About 375K PPD

I measured total system power consumption at the wall using my P3 Watt Meter. The system averaged about 250 watts. That’s on the higher end of power consumption, but then again this is a big card.

Comparison Plots

RX 580 Performance

AMD Radeon RX 580 Folding@Home Performance Comparison

RX 580 Efficiency

AMD Radeon RX 580 Folding@Home Efficiency Comparison

Conclusion

For $120 used on eBay, I was pretty happy with the RX 580’s performance. When it was released, it was directly competing with Nvidia’s GTX 1060. All the gaming reviews I read showed that Team Red was indeed able to beat Team Green, with the RX 580 scoring 5-10% faster than the 1060 in most games. The same is true for Folding@Home performance.

However, that is not the end of the story. Where the Nvidia GTX 1060 has a 120 Watt TDP (Thermal Design Power), AMD’s RX 580 needs 185 Watts. It is a hungry card, and that shows up in the efficiency plots, which take the raw PPD (performance) and divide it by the power consumption in watts measured at the wall. Here, the RX 580 falls a bit short, although it is still a healthy improvement over the previous generation RX 480.

Thus, if you care about CO2 emissions and the cost of your folding habits on your wallet, I am forced to recommend the GTX 1060 over the RX 580, especially because you can get one used on eBay for about the same price. However, if you can get a good deal on an RX 580 (say, for $80 or less), it would be a good investment until more efficient cards show up on the used market.

Nvidia Geforce GTX 1050 Folding@Home Quick Review

For today’s review, I’m taking a quick look at a little 75 watt graphics card from a few years ago: the Nvidia GTX 1050 2GB. As far as the now somewhat aged Pascal architecture goes, this one is pretty near the bottom of the pile in terms of performance. But, that means you can also get it at a decent discount. I got mine earlier this year for a mere $75 shipped (the card’s MSRP was about $110 back in October 2016).

GeForce GTX 1050 Purchase

GeForce GTX 1050 Purchase Screen Shot


Nvidia GeForce GTX 1050 2GB Stats

As you can see above, I picked up a beefy version…a big dual fan open cooler design by Gigabyte. This is massive overkill for a card with a meager 75 watt TDP (it doesn’t even have a connection for supplemental PCI Express power).

So, you might ask why I would bother reviewing a card of this type, given that all my testing to date has shown that the higher-end cards are also more efficient. Well, in this case I realized that summer in New England means I run a lot of air conditioning, and any extra wattage my computer uses is ultimately dumped into the house as heat. In the winter that’s a good thing, but in the summer it just makes air conditioners do more work removing that waste heat. By running a low-end card, I hoped to continue contributing to Folding@Home while minimizing the electric bill and environmental impact.

Testing was done in the usual manner, using my standard test rig (Windows 10 running the V7 Client) and measuring power at the wall with a watt meter. Normally this is the point where I put up a bunch of links to those previous articles, or sometimes describe in detail the methods and hardware used. But, today I’m feeling lazy so I’m skipping all that. The key is that I am as consistent as I can be (although now I am on Windows 10 due to Windows 7 approaching end of life). If you would like more info on how I run my tests, just go back a few posts to the 1080 and 1070 testing.

Results:


GTX 1050 Point History

The circled region on the plot shows what happened in terms of Points Per Day when I “downgraded” from a 1080 to the 1050. The average PPD of about 150K pales in comparison to the 1080. However, when you consider that this is similar performance to the 300 Watt dual-GPU GTX 690 from 2012 (a $1000 card even back then), things don’t seem so bad.

Here’s how the little vanilla 1050 stacks up against the other cards I have tested:


GTX 1050 Production

From a performance standpoint, this card isn’t going to win any races. However, as I mentioned above, it is actually pretty good compared to high-end old cards (the only one of which I have tested is the Radeon 7970, which cost $550 back in December 2011 and uses nearly three times the power of the 1050).

Efficiency is where things get interesting:


GTX 1050 Efficiency

Here, you can see that the GTX 1050 has leapfrogged up the chart by one place, and is slightly more efficient than my copy of an AMD RX 480, which uses the 2016 Polaris architecture (something supposedly designed for efficiency, according to AMD). Still, for about $10 more on eBay, you can get a used 1050 Ti, which shows a marked efficiency improvement as well as a performance improvement vs the 1050. In my case, I found that both the GTX 1050 and GTX 1050 Ti drew the same amount of power at the wall (140 watts for the whole system). Thus, for my summertime folding, I would have been better off keeping the old GTX 1050 Ti, since it does more work for exactly the same amount of wall power consumption.

Conclusion

Nvidia’s GTX 1050 is a small, cheap card that most people use for casual gaming. As a compute card for molecular dynamics, the best things about it are the price of acquiring one used ($75 or less as of 2019), the small size, and the lack of a need for an external PCI-Express power connector. Thus, people with low-end desktops and small form-factor PCs could probably add one of these without much trouble. I’m going to miss how easy this thing fits in my computer case…

GTX 1050 Easy Fit

At the end of the day, it was a slow card, making only 150K PPD. The efficiency wasn’t that good either. If you’re going to burn the electricity doing charitable science, it would be good to get more science per watt than the 1050 provides. Given the meager price difference on the used market between this card and its big Titanium-edition brother, go with the Ti (exact same environmental impact but more performance), or better yet, a GTX 1060.

Folding@Home: Nvidia GTX 1080 Review Part 3: Memory Speed

In the last article, I investigated how the power limit setting on an Nvidia Geforce GTX 1080 graphics card could affect the card’s performance and efficiency for doing charitable disease research in the Folding@Home distributed computing project. The conclusion was that a power limit of 60% offers only a slight reduction in raw performance (Points Per Day), but a large boost in energy efficiency (PPD/Watt). Two articles ago, I looked at the effect of GPU core clock. In this article, I’m experimenting with a different variable. Namely, the memory clock rate.

The effect of memory clock rate on video games is well defined. Gamers looking for the highest frame rates typically overclock both their graphics GPU and Memory speeds, and see benefits from both. For computation projects like Stanford University’s Folding@Home, the results aren’t as clear. I’ve seen arguments made both ways in the hardware forums. The intent of this article is to simply add another data point, albeit with a bit more scientific rigor.

The Test

To conduct this experiment, I ran the Folding@Home V7 GPU client for a minimum of 3 days continuously on my Windows 10 test computer. Folding@Home points per day (PPD) numbers were taken from Stanford’s Servers via the helpful team at https://folding.extremeoverclocking.com.  I measured total system power consumption at the wall with my P3 Kill A Watt meter. I used the meter’s KWH function to capture the total energy consumed, and divided out by the time the computer was on in order to get an average wattage value (thus eliminating a lot of variability). The test computer specs are as follows:

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Asus GeForce 1080 Turbo
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 372.90

I ran this test with the memory clock rate at the stock clock for the P2 power state (4500 MHz), along with the gaming clock rate of 5000 MHz and a reduced clock rate of 4000 MHz. This gives me three data points of comparison. I left the GPU core clock at +175 MHz (the optimum setting from my first article on the 1080 GTX) and the power limit at 100%, to ensure I had headroom to move the memory clock without affecting the core clock. I verified I wasn’t hitting the power limit in MSI Afterburner.

*Update. Some people may ask why I didn’t go beyond the standard P0 gaming memory clock rate of 5000 MHz (same thing as 10,000 MHz double data rate, which is the card’s advertised memory clock). Basically, I didn’t want to get into the territory where the GDDR5’s error checking comes into play. If you push the memory too hard, there can be errors in the computation but work units can still complete (unlike a GPU core overclock, where work units will fail due to errors). The reason is the built-in error checking on the card memory, which corrects errors as they come up but results in reduced performance. By staying away from 5000+ MHz territory on the memory, I can ensure the relationship between performance and memory clock rate is not affected by memory error correction.

1080 Memory Boost Example

Memory Overclocking Performed in MSI Afterburner

Tabular Results

I put together a table of results in order to show how the averaging was done, and the # of work units backing up my +500 MHz and -500 MHz data points. Having a bunch of work units is key, because there is significant variability in PPD and power consumption numbers between work units. Note that the performance and efficiency numbers for the baseline memory speed (+0 MHz, aka 4500 MHz) come from my extended testing baseline for the 1080 and have even more sample points.

Geforce 1080 PPD Production - Ram Study

Nvidia GTX 1080 Folding@Home Production History: Data shows increased performance with a higher memory speed

Graphic Results

The following graphs show the PPD, Power Consumption, and Efficiency curves as a function of graphics card memory speed. Since I had three points of data, I was able to do a simple three-point-curve linear trendline fit. The R-squared value of the trendline shows how well the data points represent a linear relationship (higher is better, with 1 being ideal). Note that for the power consumption, the card seems to have used more power with a lower memory clock rate than the baseline memory clock. I am not sure why this is…however, the difference is so small that it is likely due to work unit variability or background tasks running on the computer. One could even argue that all of the power consumption results are suspect, since the changes are so small (on the order of 5-10 watts between data points).

Geforce 1080 Performance vs Ram Speed

Geforce 1080 Power vs Ram Speed

Geforce 1080 Efficiency vs Ram Speed
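As a side note, if you want to reproduce this kind of trendline fit and R-squared value outside of Excel, here is a quick Python sketch using numpy. The PPD values are approximate numbers read back off my plots and the summary below, so treat them as illustrative rather than exact.

```python
import numpy as np

# Memory clock offset from the 4500 MHz P2 state, and approximate average PPD
mem_offset = np.array([-500, 0, 500])        # MHz
ppd = np.array([700_000, 730_000, 781_000])  # approximate values

# Linear trendline (least squares) and the R-squared of that fit
slope, intercept = np.polyfit(mem_offset, ppd, 1)
fit = slope * mem_offset + intercept
ss_res = np.sum((ppd - fit) ** 2)
ss_tot = np.sum((ppd - ppd.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"Slope: {slope:.1f} PPD per MHz of memory offset, R^2 = {r_squared:.3f}")
```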

Conclusion

Increasing the memory speed of the Nvidia Geforce GTX 1080 results in a modest increase in PPD and efficiency, and arguably a slight increase in power consumption. The differences between the fastest (+500 MHz) and slowest (-500 MHz) data points I tested are:

  • PPD: +81K PPD (+11.5%)
  • Power: +9.36 Watts (+3.8%)
  • Efficiency: +212.8 PPD/Watt (+7.4%)

Keep in mind that these are for a massive difference in ram speed (5000 MHz vs 4000 MHz).

Another way to look at these results is that underclocking the graphics card ram in hopes of improving efficiency doesn’t work (you’ll actually lose efficiency). I expect this trend will hold true for the rest of the Nvidia Pascal series of cards (GTX 10xx), although so far my testing of this has been limited to this one card, so your mileage may vary. Please post any insights if you have them.

NVIDIA GEFORCE GTX 1080 Folding@Home Review (Part 1)

Intro

It’s hard to believe that the Nvidia GTX 1080 is almost three years old now, and I’m just getting around to writing a Folding@Home review of it. In the realm of graphics cards, this thing is legendary, and only recently displaced from the enthusiast podium by Nvidia’s new RTX series of cards. The 1080 was Nvidia’s top of the line gaming graphics card (next to the Ti edition of course), and has been very popular for both GPU coin mining and cancer-curing (or at least disease research for Stanford University’s charitable distributed computing project: Folding@Home). If you’ve been following along, you know it’s that second thing that I’m interested in. The point of this review is to see just how well the GTX 1080 folds…and by well, I mean not just raw performance, but also energy efficiency.


Quick Stats Comparison

I threw together a quick table to give you an idea of where the GTX 1080 stacks up (I left the newer RTX cards and the older GTX 9-series cards off of here because I’m lazy…).

Nvidia Pascal Cards

Nvidia Pascal Family GPU Comparison

As you can see, the GTX 1080 is pretty fast, eclipsed only by the GTX 1080 Ti (which also has a higher Thermal Design Power, suggesting more electricity usage). From my previous articles, we’ve seen that the more powerful cards tend to do work more efficiently, especially if they are in the same TDP bracket. So, the 1080 should be a better folder (both in PPD and PPD/Watt efficiency) than the 1070 Ti I tested last time.

Test Card: ASUS GeForce GTX 1080 Turbo

As with the 1070 Ti, I picked up a pretty boring flavor of a 1080 in the form of an Asus turbo card. These cards lack back plates (which help with circuit board rigidity and heat dissipation) and use cheap blower coolers, which suck in air from a single centrifugal fan on the underside and blow it out the back of the case (keeping the hot air from building up in the case). These are loud, and tend to run hotter than open-fan coolers, so overclocking and boost clocks are limited compared to aftermarket designs. However, like Nvidia’s own Founder’s Edition reference cards, this reference design provides a good baseline for a 1080’s minimum performance.


ASUS GeForce GTX 1080 Turbo

The new 1080 looks strikingly similar to the 1070 Ti…Asus is obviously reusing the exact same cooler since both cards have a 180 Watt TDP.

Asus GTX 1080 and 1070 Ti

Asus GTX 1080 and 1070 Ti (which one is which?)

Test Environment

Like most of my previous graphics card testing, I put this into my AMD FX-Based Test System. If you are interested in how this test machine does with CPU folding, you can read about it here. Testing was done using Stanford’s Folding@Home V7 Client (version 7.5.1) in Windows 10. Points Per Day (PPD) production was collected from Stanford’s servers. Power measurements were done with a P3 Kill A Watt Meter (taken at the wall, for a total-system power profile).

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Asus GeForce 1080 Turbo
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 372.90

Video Card Configuration – Optimize for Performance

In my previous articles, I’ve shown how Nvidia GPUs don’t always automatically boost their clock rates when running Folding@home (as opposed to video games or benchmarks). The same is true of the GTX 1080. It sometimes needs a little encouragement in order to fold at the maximum performance. I overclocked the core by 175 MHz and increased the power limit* by 20% in MSI afterburner using similar settings to the GTX 1070. These values were shown to be stable after 2+ weeks of testing with no dropped work units.

*I also experimented with the power limit at 100% and I saw no change in card power consumption. This makes sense…folding is not using 100% of the GPU. Inspection of the MSI afterburner plots shows that while folding, the card does not hit the power limit at either 100% or 120%. I will have to reduce the power limit to get the card to throttle back (this will happen in part 2 of this article).

As with previous cards, I did not push the memory into its performance zone, but left it at the default P2 (low-power) state clock rate. The general consensus is that memory clock does not significantly affect Folding@home, and it is better to leave the power headroom for the core clock, which does improve performance. As an interesting side note, the memory clock on this thing jumps up to 5000 MHz (effective) in benchmarks. For example, see the card’s auto-boost settings when running Heaven:

1080 Benchmark Stats

Nvidia GeForce GTX 1080 – Boost Clocks (auto) in Heaven Benchmark

Testing Overview

For most of my tests, I just let the computer run folding@home 24/7 for a couple of days and then average the points per day (PPD) results from Stanford’s stats server. Since the GTX 1080 is such a popular card, I decided to let it run a little longer (a few weeks) to get a really good sampling of results, since PPD can vary a lot from work unit to work unit. Before we get into the duration results, let’s do a quick overview of what the Folding@home environment looks like for a typical work unit.

The following is an example screen shot of the display from the client, showing an instantaneous PPD of about 770K, which is very impressive. Here, it is folding on a core 21 work unit (Project 14124).

F@H Client 1080

Folding@Home V7 Client – GeForce GTX 1080

MSI Afterburner is a handy way to monitor GPU stats. As you can see, the GPU usage is hovering in the low 80% region (this is typical for GPU folding in Windows; Linux can use a bit more of the GPU for a few percentage points more PPD). This Asus card, with its reference blower cooler, is running a bit warm (just shy of 70 degrees C), but that’s well within spec. I had the power limit at 120%, but the card is nowhere near hitting that…the power draw only occasionally peaks above 80% of the limit.

GTX 1080 stats while folding.

Measuring card power consumption with the driver shows that it’s using about 150 watts, which seems about right when compared to the GPU usage and power % graphs. 100% GPU usage would be ideal (and would result in a power consumption of about 180 watts, which is the 1080’s TDP).

In terms of card-level efficiency, this is 770,000 PPD / 150 Watts = 5133 PPD/Watt.
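(If you’re scripting your own logs, the efficiency math is just points divided by power; a throwaway helper like this keeps the card-level and wall-level numbers consistent. This is my own sketch, not anything from the F@H client.)

```python
# Points-per-day divided by watts; used for both card-level and wall-level figures.
def ppd_per_watt(ppd: float, watts: float) -> float:
    return ppd / watts

print(round(ppd_per_watt(770_000, 150)))  # ~5133 PPD/Watt at the card
```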

Nvidia GeForce GTX 1080 – Instantaneous Power Draw @ the Card

Duration Testing

I ran Folding@Home for quite a while on the 1080. As you can see from this plot (courtesy of https://folding.extremeoverclocking.com/), the 1080 is mildly beating the 1070 Ti. It should be noted that the stats for the 1070 Ti are a bit low in the left-hand side of the plot, because folding was interrupted a few times for various reasons (gaming). The 1080 results were uninterrupted.

GeForce GTX 1080 Production History

Another thing I noticed was the amount of variation in the results. Normal work unit variation (at least for less powerful cards) is around 10-20 percent. For the GTX 1080, I saw swings of 200K PPD, which is closer to 30%. Check out that one point at 875K PPD!
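To put a rough number on that swing, you can compare the peak-to-trough spread against the average. A quick sketch with made-up daily values in the same ballpark as my results (the real data lives on the stats server):

```python
# Hypothetical daily PPD samples, roughly matching what I saw on the 1080.
daily_ppd = [690_000, 875_000, 675_000, 730_000, 745_000, 710_000]

avg = sum(daily_ppd) / len(daily_ppd)
swing = max(daily_ppd) - min(daily_ppd)
print(f"average {avg:,.0f} PPD, swing {swing:,.0f} PPD ({swing / avg:.0%} of average)")
```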

Average PPD: 730K PPD

I averaged the PPD over two weeks on the GTX 1080 and got 730K PPD. Previous testing on the GTX 1070 Ti (based on continual testing without interruptions) showed an average PPD of 700K. Here is the plot from that article, reproduced for convenience.

Nvidia GTX 1070 Ti Time History

Nvidia GTX 1070 Ti Folding@Home Production Time History

I had expected my GTX 1080 to do a bit better than that. However, it only has about 5% more CUDA cores than the GTX 1070 Ti (2560 vs. 2432), and the GTX 1080’s faster memory isn’t an advantage in Folding@Home. So a 30K PPD improvement for the 1080, which works out to about 4.3% faster, makes sense.
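The back-of-the-envelope version of that argument, using the core counts from the spec sheets and the averages above:

```python
# Does the observed PPD gain roughly track the extra CUDA cores?
cores_1080, cores_1070ti = 2560, 2432
ppd_1080, ppd_1070ti = 730_000, 700_000

print(f"core-count advantage:   {cores_1080 / cores_1070ti - 1:.1%}")  # ~5.3%
print(f"observed PPD advantage: {ppd_1080 / ppd_1070ti - 1:.1%}")      # ~4.3%
```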

System Average Power Consumption: 240 Watts @ the Wall

I spot-checked the power meter (P3 Kill-A-Watt) many times over the course of folding. Although it varies with the work unit, the system most commonly drew around 230 watts. Peak observed wattage was 257, and the minimum was around 220. This was more variation than I typically see, but I think it corresponds with the variation in PPD in the performance graph. It was very tempting to just call it 230 watts, but I wasn’t confident that this was accurate. There was just too much variation.

In order to get a better number, I reset the Kill-A-Watt meter (I hadn’t reset it in ages) and let it log the computer’s usage over the weekend. The meter keeps track of the total kilowatt-hours (KWH) of energy consumed, as well as the time period (in hours) of the reading. By dividing the energy by time, we get power. Instead of an instantaneous power (the eyeball method), this is an average power over the weekend, and is thus a compatible number with the average PPD.

The end result of this was 17.39 KWH consumed over 72.5 hours. Thus, the average power consumption of the computer is:

17.39/72.5 (KWH/H) * 1000 (Watts/KW) = about 240 Watts (I round a bit for convenience in reporting, but the Excel sheet that backs up all my plots is exact)
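The same arithmetic as a reusable snippet (nothing fancy, just energy over time):

```python
# Average wall power from the Kill-A-Watt's cumulative energy reading.
def average_watts(energy_kwh: float, hours: float) -> float:
    return energy_kwh / hours * 1000.0

print(round(average_watts(17.39, 72.5)))  # ~240 W average over the weekend
```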

This is a bit more power consumed than the GTX 1070 Ti results, which used an average of 225 watts (admittedly computed by the eyeball method over many days, but there was much less variation so I think it is valid). This increased power consumption of the GTX 1080 vs. the 1070 Ti is also consistent with what people have seen in games. This Legit Reviews article shows an EVGA 1080 using about 30 watts more power than an EVGA 1070 Ti during gaming benchmarks. The power consumption figure is reproduced below:

Modern Graphics Card Power Consumption. Source: Legit Reviews

This is a very interesting result. Even though the 1080 and the 1070 Ti have the same 180 Watt TDP, the 1080 draws more power, both in folding@home and in gaming.

System Computational Efficiency: 3044 PPD/Watt

For my Asus GeForce GTX 1080, the folding@home efficiency is:

730,000 PPD / 240 Watts ≈ 3044 PPD/Watt (the exact figure comes from the unrounded values in the spreadsheet behind my plots).

This is an excellent score. Surprisingly, it is slightly lower than my Asus 1070 Ti, which I found to have an efficiency of 3126 PPD/Watt. In practice these are so close that the difference could just be attributed to work-unit variation. The GeForce 1080 and 1070 Ti are both extremely efficient cards, and both are good choices for folding@home.
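Here’s the side-by-side with the rounded averages from both reviews; a straight division lands a couple of PPD/Watt away from my headline figures because the spreadsheet behind the plots uses unrounded values:

```python
# System-level efficiency for the two cards, using the rounded averages above.
for card, ppd, watts in [("GTX 1080", 730_000, 240), ("GTX 1070 Ti", 703_371, 225)]:
    print(f"{card}: {ppd / watts:,.0f} PPD/Watt")
# GTX 1080:    ~3,042 (the article's 3044 comes from the unrounded wall power)
# GTX 1070 Ti: ~3,126
```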

Comparison plots here:

GeForce GTX 1080 Folding@Home PPD Comparison

GeForce GTX 1080 Folding@Home Efficiency Comparison

Final Thoughts

The GTX 1080 is a great card. With that said, I’m a bit annoyed that my GTX 1080 didn’t hit 800K PPD like some folks in the forums say theirs do (I bet a lot of those people getting 800K PPD use Linux, as it is a bit better than Windows for folding). Still, this is a good result.

Similarly, I’m annoyed that the GTX 1080 didn’t thoroughly beat my 1070 Ti in terms of efficiency, though the results are so close that it’s effectively a tie. This is part one of a multi-part review, where I tuned the card for performance. In the next article, I plan to find a better efficiency point for this card by experimenting with reduced power limits. Right now I’m thinking of running the card at an 80% power limit for a week, then at 60% for another week, and reporting the results. So, stay tuned!

Nvidia GeForce GTX 1070 Ti Folding@Home Review

In an effort to make as much use of the colder months in New England as I can, I’m running tons of Stanford University’s Folding@Home on my computer to do charitable science for disease research while heating my house. In the last article, I reviewed a slightly older AMD card, the RX 480, to determine its performance and efficiency running Folding@Home. Today, I’ll be taking a look at one of the favorite cards from Nvidia for both folding and gaming: The 1070 Ti.

The GeForce GTX 1070 Ti was released in November 2017, and sits between the 1070 and 1080 in terms of raw performance. As of February 2019, the 1070 Ti can be had for a deep discount on the used market, now that the RTX 20xx series cards have been released. I got my Asus version on eBay for $250.

Based on Nvidia’s 16nm Pascal architecture, the 1070 Ti has 2432 CUDA cores and 8 GB of GDDR5 memory, with a memory bandwidth of 256 GB/s. The base clock of the GPU is 1607 MHz, although the card automatically boosts well past the advertised boost clock of 1683 MHz. Thermal Design Power (TDP) is 180 Watts.

The 3rd party Asus card I got is nothing special. It appears to be a dual-slot reference design, and uses a blower cooler to exhaust hot air out the back of the case. It requires one supplemental 8-pin PCI-E Power connection.

ASUS GeForce GTX 1070 Ti

One thing I will note about this card is its length. At 10.5 inches (similar to many high-end Nvidia cards), it can be a bit problematic to fit in some cases. I have a Raidmax Sagitta mid-tower case from way back in 2006, and it fits, but barely. I had the same problem with the EVGA GeForce 1070 I reviewed earlier.

ASUS GTX 1070 Ti – Installed.

Test Environment

Testing was done in Windows 10 on my AMD FX-based system, which is old but holds up pretty well, all things considered. You can read more about it here. The system was built for both performance and efficiency, using AMD’s 8320e processor (a bit less power-hungry than the other 8-core FX processors), a Seasonic 650-watt 80+ Gold power supply, and 8 GB of low-voltage DDR3 memory. The real key here, since I take all my power measurements at the wall with a P3 Kill-A-Watt meter, is that the system is the same for all of my tests.

The Folding@Home Client version is 7.5.1, running a single GPU slot with the following settings:

GPU Slot Options for Maximum PPD

These settings tend to result in slightly higher points per day (PPD), because they request large, advanced work units from Stanford.

Initial Test Results

Initial testing was done on one of the oldest drivers I could find that supports the 1070 Ti (driver version 388.13). The thought here was that older drivers would have fewer gaming optimizations, which tend to hurt performance for compute jobs (unlike AMD, Nvidia doesn’t include a compute mode in its graphics driver settings).

Unfortunately, the best Nvidia driver for the non-Ti GTX 10xx cards (372.90) doesn’t work with the 1070 Ti, because the Ti version came out a few months later than the original cards. So, I was stuck with version 388.13.

Nvidia GTX 1070 Ti Monitoring – Baseline Clocks

I ran F@H for three days using the stock clock rate of 1823 MHz core, with the memory at 3802 MHz. Similar to what I found when testing the 1070, Folding@Home does not trigger the card to go into the high power (max performance) P0 state. Instead, it is stuck in the power-saving P2 state, so the core and memory clocks do not boost.
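An easy way to confirm the stuck P-state without opening a monitoring GUI is to query the driver directly; a minimal sketch of my own (not part of the original test procedure), again assuming nvidia-smi is on the PATH:

```python
# Report the current performance state and clocks; handy for spotting a card
# that is folding in the power-saving P2 state instead of P0.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=pstate,clocks.gr,clocks.mem", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # e.g. "P2, 1823 MHz, 3802 MHz" when stuck in the low-power state
```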

The PPD average for three days when folding at this rate was 632,380 PPD. Checking the Kill-A-Watt meter over the course of those days showed an approximate average system power consumption of 220 watts. Interestingly, this is less power draw than the GTX 1070 (which used 227 watts, although that was with overclocking + the more efficient 372.90 driver). The PPD average was also less than the GTX 1070, which had done about 640,000 PPD. Initial efficiency, in PPD/Watt, was thus 2875 (compared to the GTX 1070’s 2820 PPD/Watt).

The lower power consumption number and lower PPD performance score were a bit surprising, since the GTX 1070 Ti has 512 more CUDA cores than the GTX 1070. However, in my previous review of the 1070, I had done a lot of optimization work, both with overclocking and with driver tuning. So, now it was time to do the same to the 1070 Ti.

Tuning the Card

By running UNIGINE’s Heaven video game benchmark in windowed mode, I was able to watch what the card did in MSI Afterburner. The core clock boosted up to 1860 MHz (a modest increase from the 1823 MHz base clock), and the memory went up to 4000 MHz (the default). I tried these overclocking settings and saw only a modest increase in PPD numbers. So, I decided to push it further, despite the Asus card having only a reference-style blower cooler. From my 1070 review, I found I was able to fold nice and stable with a core clock of 2012 MHz and a memory clock of 3802 MHz. So, I set up the GTX 1070 Ti with those same settings. After running it for five days, I pushed the core a little higher to 2050 MHz. A few days later, I upgraded the driver to the latest version (417.71).

Nvidia GTX 1070 Ti Monitoring – Overclocked

With these settings, I did have to increase the fan speed to keep the card below 70 degrees Celsius. Since the Asus card uses a blower cooler, it was a bit loud, but nothing too crazy. Open-air coolers with lots of heat pipes and multiple fans would probably let me push the card higher, but from what I’d read, people start running into stability problems at core clocks over 2100 MHz. Since the goal of Folding@home is to produce reliable science to help Stanford University fight disease, I didn’t want to risk dropping a work unit due to an unstable overclock.

Here’s the production vs. time history from Stanford’s servers, courtesy of https://folding.extremeoverclocking.com/

Nvidia GTX 1070 Ti Folding@Home Production Time History

As you can see, the overclock helped improve the performance of the GTX 1070 Ti. Using the last five days’ worth of data points (with the graphics driver at 417.71 and the 2050 MHz core overclock), I got an average of 703,371 PPD with a power consumption at the wall of 225 Watts. This gives an overall system efficiency of 3126 PPD/Watt.
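For completeness, here’s the before/after math on the tuning, using the baseline and overclocked averages reported above (just a quick calculation, not additional test data):

```python
# Gains from the driver update + overclock, relative to the stock-clock baseline.
baseline_ppd, baseline_watts = 632_380, 220   # stock clocks, 388.13 driver
tuned_ppd, tuned_watts = 703_371, 225         # 2050 MHz core, 417.71 driver

baseline_eff = baseline_ppd / baseline_watts  # ~2875 PPD/Watt
tuned_eff = tuned_ppd / tuned_watts           # ~3126 PPD/Watt
print(f"PPD gain:        {tuned_ppd / baseline_ppd - 1:.1%}")   # ~11.2%
print(f"efficiency gain: {tuned_eff / baseline_eff - 1:.1%}")   # ~8.8%
```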

Finally, these results are starting to make more sense. Now, this card is outpacing the GTX 1070 in terms of both PPD and energy efficiency. However, the gain in performance isn’t enough to confidently say the card is doing better, since there is typically a +/- 10% PPD difference depending on what work unit the computer receives. This is clear from the amount of variability, or “hash”, in the time history plot.

Interestingly, the GTX 1070 Ti is still using about the same amount of power as the base-model GTX 1070, which has a Thermal Design Power of 150 Watts, compared to the GTX 1070 Ti’s TDP of 180 Watts. So, why isn’t my system consuming 30 watts more at the wall than it did when equipped with the base 1070?

I suspect the issue here is that the drivers available for the 1070 Ti are not as good for folding as the 372.90 driver for the non-Ti 10-series Nvidia cards. As you can see from the MSI Afterburner screen shots above, GPU Usage on the GTX 1070 Ti during folding hovers in the 80-90% range, which is lower than the 85-93% range seen when using the non-Ti GTX 1070. In short, folding on the 1070 Ti seems to be a bit handicapped by the drivers available in Windows.

Comparison to Similar Cards

Here are the Production and Efficiency Plots for comparison to other cards I’ve tested.

GTX 1070 Ti Performance Comparison

GTX 1070 Ti Efficiency Comparison

Conclusion

The Nvidia GTX 1070 Ti is a very good graphics card for running Folding@Home. With an average PPD of 703K and a system efficiency of 3126 PPD/Watt, it is the fastest and most efficient graphics card I’ve tested so far. As far as maximizing the amount of science done per electricity consumed, this card continues the trend…higher-end video cards are more efficient, despite the increased power draw.

One side note about the GTX 1070 Ti is that the drivers don’t seem as optimized as they could be. This is a known problem for running Folding@Home in Windows, but since the proven Nvidia driver 372.90 is not available for the Ti flavor of the 1070, the hit here is larger than normal. On the used market in 2019, you can get a GTX 1070 for $200 on eBay, whereas GTX 1070 Tis go for $250. My opinion is that if you’re going to fold in Windows, a tuned GTX 1070 running the 372.90 driver is the way to go.

Future Work

To fully unlock the capability of the GTX 1070 Ti, I realized I’m going to have to switch operating systems. Stay tuned for a follow-up article in Linux.