Tag Archives: PPD

New Folding@Home Benchmark Machine: It’s RYZEN TIME!

Folding@Home, the distributed computing project that fights diseases such as COVID-19 and cancer, has hit an all-time high in popularity. I’m stunned to find that my blog is now getting more views every day than it did every month last year. With that said, this is a perfect opportunity to reach out and see if all the new donors are interested in tuning their computers for efficiency: saving a little on power, lightening the burden on the wallet, and hopefully producing nearly the same amount of science. If this sounds interesting to you, let me know in the comments below!

In my last post, I noted that the latest generation of graphics cards are starting to push the limits of what my primary GPU Folding@Home benchmark rig can do. That computer is based on an 11-year-old chipset (AMD 880), and only supports PCI-Express 2.0. In order for me to keep testing modern fast graphics cards in Windows 10, I wanted to make sure that PCI-Express slot bandwidth wasn’t going to artificially bottleneck me.

So, without further ado, let me present the new, re-built Folding@Home rig, SAGITTA:

Sagitta Desktop

I’ve (re)created a monster!

This build leverages the Raidmax Sagitta case that I’ve had since 2006. This machine has hosted multiple builds (Pentium D 805, Core 2 Duo e8600, Core 2 Quad Q6600, Phenom II X6 1100T, and the most recent FX-8320e Bulldozer). There have been too many graphics cards to count, but the latest one (Nvidia GTX 1650 by Zotac) was carried over for some continuity testing. The case fans and (initially) the power supply were also carried over from the previous FX build (they aren’t the ones from back in 2006…those got loud and died long ago). I also kept my Blu-Ray drive and 3.5 inch card reader. That’s where the similarities end. Here is a specs comparison:

Sagitta Rebuild Benchmark Machine Specs

  • Note I ended up updating the power supply to the one shown in the table. More on that below…

System Power Consumption

Initially, the power consumption at idle of the new Ryzen 9 build, measured with my P3 Kill A Watt Meter, was 86 watts. The power consumption while running GPU Folding was 170 watts (and the all-core CPU folding was over 250 watts, but that’s another article entirely).

Using the same Nvidia GeForce GTX 1650 graphics card, these idle and GPU folding power numbers were unfortunately higher than those of the old benchmark machine, which came in at 70 watts idle and 145 watts load. This is likely due to the overkill hardware in the new rig (X570 motherboards alone are known to draw twice the power of a more typical board). The 25-watt difference in system power consumption while folding was especially problematic for my efficiency testing, since efficiency plots from the new machine would not be directly comparable to those of graphics cards tested on the old benchmark machine.
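To make the comparability problem concrete, here’s a quick back-of-the-envelope sketch in Python, using the wattages above and the GTX 1650’s roughly 310K PPD production (a number that comes up again later in this article). The identical card looks noticeably less efficient on the hungrier rig:

```python
# Same card, same science, different host system power draw.
ppd = 310_000                # approximate GTX 1650 production (PPD)
old_rig_watts = 145          # old FX build, measured at the wall
new_rig_watts = 170          # new Ryzen build, before any tuning

print(f"Old rig: {ppd / old_rig_watts:,.0f} PPD/Watt")  # ~2,138
print(f"New rig: {ppd / new_rig_watts:,.0f} PPD/Watt")  # ~1,824
```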

To solve this, I could either:

A: Use a 25 watt offset to scale the new GPU F@H efficiency plots

B: Do nothing and just have less accurate efficiency comparisons to previous tests

C: Reduce the power consumption of the new build so that it matches the old one

This being a blog about energy efficiency, I decided to go with Option C, since that’s the one that actually helps the environment. Let’s see if we can trim the fat off of this beast of a computer!

Efficiency Boost #1: Power Supply Upgrade

The first thing I tried was to upgrade the power supply. As noted here, the power supply’s efficiency rating is a great place to start when building an energy efficient machine. My old Seasonic X-650 is a very good power supply, and carries an 80+ Gold rating. Still, things have come a long way, and switching to an 80+ Titanium PSU can gain a few efficiency percentage points, especially at low loads.

80+ Table

80+ Efficiency Table

With that 3-5% efficiency boost in mind, I picked up a new Seasonic 750 Watt Prime 80+ Titanium modular power supply. At $200, this PSU isn’t cheap, but it provides a noticeable efficiency improvement at both idle and load. Other nice features were the additional 100 watts of capacity, and the fact that it supported my new motherboard’s dual (8 + 4 pin) CPU aux power connectors. That extra 4-pin isn’t required to make the X570 board work, but it does allow for more overclocking headroom.

Disclaimer: Before we get into it, I should note that these power readings are “eyeball” readings, taken by glancing at the watt meter and trying to judge the average usage. The actual number jumps around a bit (even at idle) as the computer executes various background tasks. I’d say the measurement precision on any eyeball watt meter readings is +/- 5 watts, so take the below with a grain of salt. These are very small efficiency improvements that are difficult to measure, and your mileage may vary. 

After upgrading the power supply, idle power dropped an impressive 10 watts, from 86 watts to 76. This is an awesome 11% efficiency improvement. This might be due to the new 80+ Titanium power supply having an efficiency target at very low loads (90% efficiency at 10% load), whereas the old 80+ Gold spec did not have a low load efficiency requirement. Thus, even though I used a large 750 watt power supply, the machine can still remain relatively efficient at idle.

Under moderate load (GPU folding), the new 80+ Titanium PSU provided a 4% efficiency improvement, dropping the power consumption from 170 watts to 163. This is more in line with expectations.
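For the curious, the math behind these gains is simple: for a fixed DC-side load, wall draw is that load divided by the supply’s efficiency at that load point. Here’s a minimal sketch. The 68-watt DC load and the 80% low-load Gold efficiency are my assumptions for illustration; only the 90%-at-10%-load figure comes from the Titanium spec:

```python
# Wall power for a given DC-side load and PSU efficiency.
def wall_watts(dc_load, efficiency):
    return dc_load / efficiency

idle_dc_load = 68.0  # assumed DC-side idle load in watts (hypothetical)

print(wall_watts(idle_dc_load, 0.80))  # ~85 W: assumed low-load Gold efficiency
print(wall_watts(idle_dc_load, 0.90))  # ~76 W: Titanium's 90% @ 10% load spec
```

Those hypothetical numbers happen to line up nicely with the measured drop from 86 watts to 76.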

Efficiency Boost #2: Processor Underclock / Undervolt

Thanks to video gaming mentality, enthusiast-grade desktop processors and motherboards are tuned out of the box for performance. We’re talking about blistering fast, competition-crushing benchmark scores. For most computing tasks (such as running Folding@Home on a graphics card), this aggressive CPU behavior is wasting electricity while offering no discernible performance benefit. Despite what my kid’s shirt says, we need to reel these power hungry CPUs in for maximum GPU folding efficiency.

Never Slow Down

Kai Says: Never Slow Down

One way to improve processor efficiency is to reduce the clock rate and associated voltage. I’d previously investigated this here. Supporting high frequencies takes disproportionately more voltage (and power scales with the square of voltage), so just by dropping the clock rate by 100 MHz or so, you can lower the voltage a bunch and save on power.

With the advent of processors that up-clock and up-volt themselves (and down-clock and down-volt in the other direction), manual tuning can be a bit more difficult. It’s far easier to first try the automatic settings, to see if some efficiency can be gained.

But wait, isn’t this a GPU folding benchmark rig? Why do the CPU’s frequency and power settings matter?

For GPU folding with an Nvidia graphics card, one CPU core is fully loaded per GPU slot in order to “feed” the card. This is because Nvidia’s OpenCL implementation uses a polling (checking) method. In order to keep the graphics card chugging along, the CPU constantly checks on the GPU to see if it needs any data. This polling loop is not efficient and burns unnecessary power. You can read more about it here: https://foldingforum.org/viewtopic.php?f=80&t=34023. In contrast, AMD’s interrupt-based method is a much more graceful implementation that doesn’t lock up a CPU core.
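Here’s a toy sketch in Python of the difference between the two approaches (this is not real driver code, just an illustration of the concept). The polling version pins a CPU core by spinning in a loop; the interrupt-style version sleeps until the GPU signals for more work:

```python
import threading

gpu_needs_data = threading.Event()  # stand-in for the real GPU signal

def polling_feeder():
    # Nvidia-style polling, as described above: spin forever,
    # constantly "knocking on the GPU's door". Pegs a core at 100%.
    while True:
        if gpu_needs_data.is_set():
            gpu_needs_data.clear()
            pass  # ...send the next chunk of work to the GPU

def interrupt_feeder():
    # AMD-style interrupts: block until the GPU actually asks,
    # using essentially zero CPU time in between.
    while True:
        gpu_needs_data.wait()
        gpu_needs_data.clear()
        pass  # ...send the next chunk of work to the GPU
```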

The constant polling loop drives modern gaming-oriented processors to clock up their cores unnecessarily. For the most part, the GPU does not need work at every waking moment. To save power, we can turn down the frequency, so that the CPU is not constantly knocking on the GPU’s metaphorical door.

To do this, I disabled AMD’s Core Performance Boost (CPB) in the AMD Overclocking section of the BIOS (same thing as Intel’s Turbo Boost). This caps the processor speed at the base maximum clock rate (3.5 GHz for the Ryzen 9 3950x), and also eliminates any high voltage values required to support the boost clocks.

Success! GPU folding total system power consumption is now much lower. With less superfluous power draw from the CPU, the wattage is much more comparable to the old Bulldozer rig.

Ryzen 9 3950x Power Reduction Table

It is interesting that idle power consumption came down as well. That wasn’t expected. When the computer isn’t doing anything, the CPU cores should be down-clocked / slept out. Perhaps my machine was doing something in the background during the earlier tests, thus throwing the results off. More investigation is needed.

GPU Benchmark Consistency Check

I fired up GPU folding on the Nvidia GeForce GTX 1650, a card that I have performance data for from my previous benchmark desktop. After monitoring it for a week, the Folding@Home Points Per Day performance was so similar to the previous results that I ended up using the same value (310K PPD) as the official estimate for the 1650’s production. This shows that the old benchmark rig was not a bottleneck for a budget card like the GeForce GTX 1650.

Using the updated system power consumption of nominally 140 watts (vs 145 watts for the previous benchmark machine), the efficiency plots (PPD/Watt) come out very nearly the same. I typically consider power measurements of +/- 5 watts to be within the measurement accuracy of my eyeball on the watt meter anyway, due to normal variations as the system runs. The good news is that even with this variation, it doesn’t change the conclusion of the figure (in terms of graphics card efficiency ranking).

GTX 1650 Efficiency on Ryzen 9

* Benchmark performed on updated Ryzen 9 build

Conclusion

I have a new 16-core beast of a benchmark machine. This computer wasn’t built exclusively for efficiency, but after a few tweaks, I was able to improve energy efficiency at low CPU loads (such as Windows Idle + GPU Folding).

For most of the graphics cards I have tested so far, the massive upgrade in system hardware will likely not affect performance or efficiency results. Very fast cards, such as the 1080 Ti, might benefit from the new benchmark rig’s faster hardware, especially that PCI-Express 4.0 x16 graphics card slot. Most importantly, future tests of blistering fast graphics cards (2080 Ti, 3080 Ti, etc.) will probably not be limited by the benchmark machine’s supporting hardware.

Oh, and I can now encode backup copies of my Blu-ray movies at 40 fps in H.265 in Handbrake (the old speed was 6.5 fps on the FX-8320e). That’s a nice bonus too.

Efficiency Note (for GPU Folding@Home Users)

Disabling the automatic processor frequency and voltage scaling (Turbo Boost / Core Performance Boost) didn’t have any effect on the PPD being generated by the graphics card. This makes sense; even relatively slow 2.0 GHz CPU cores are still fast enough to feed most GPUs, and my modern Ryzen 9 at 3.5 GHz is no bottleneck for feeding the 1650. By disabling CPB, I shaved 23 watts off of the system’s power consumption for literally no performance impact while running GPU folding. This is a 16 percent boost in PPD/Watt efficiency, for free!
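If you want to sanity-check that 16 percent figure, the arithmetic is just PPD divided by watts at the wall, before and after the change:

```python
ppd = 310_000          # GTX 1650 production, unchanged by disabling CPB
watts_cpb_on = 163     # wall power with Core Performance Boost enabled
watts_cpb_off = 140    # wall power with CPB disabled

eff_on = ppd / watts_cpb_on     # ~1,902 PPD/Watt
eff_off = ppd / watts_cpb_off   # ~2,214 PPD/Watt
print(f"Efficiency gain: {(eff_off / eff_on - 1):.0%}")  # ~16%
```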

This also dropped CPU temps from 70 degrees C to 55, and resulted in a lower CPU cooler fan speed / quieter machine. This should promote longevity of the hardware, and reduce how much my computer fights my air conditioning in the summer, thus having a compounding positive effect on my monthly electric bill.

Future Articles

  • Re-Test the 1080 Ti to see if a fast graphics card makes better use of the faster PCI-Express bus on the AM4 build
  • Investigate CPU folding efficiency on the Ryzen 9 3950x


Shout out to the helpers…Kai and Sam

Folding@Home on Turing (NVidia GTX 1660 Super and GTX 1650 Combined Review)

Hey everyone. Sorry for the long delay (I have been working on another writing project, more on that later…). Recently, I got a pair of new graphics cards based on Nvidia’s Turing architecture. Turing has been advertised as being more efficient than the outgoing Pascal architecture, and is the basis of the popular RTX series GeForce cards (2060, 2070, 2080, etc.). It’s time to see how well they do some charitable computing, running the now world-famous disease research distributed computing project Folding@Home.

Since those RTX cards with their ray-tracing cores (which do nothing for Folding) are so expensive, I opted to start testing with two lower-end models: the GeForce GTX 1660 Super and the GeForce GTX 1650.


These are really tiny cards, and should be perfect for some low-power summertime folding. Also, today is the first time I’ve tested anything from Zotac (the 1650). The GTX 1660 Super is from EVGA.

GPU Specifications

Here’s a quick table I threw together comparing these latest Turing-based GTX 16xx series cards to the older Pascal lineup.

Turing GPU Specs

It should be immediately apparent that these are very low power cards. The GTX 1650 has a design power of only 75 watts, and doesn’t even need a supplemental PCI-Express power cable. The GTX 1660 Super also has a very low power rating at 125 Watts. Due to their small size and power requirements, these cards are good options for small form factor PCs with non-gaming oriented power supplies.

Test Setup

Testing was done in Windows 10 using Folding@Home Client version 7.5.1. The Nvidia Graphics Card driver version was 445.87. All power measurements were made at the wall (measuring total system power consumption) with my trusty P3 Kill-A-Watt Power Meter. Performance numbers in terms of Points Per Day (PPD) were estimated from the client during individual work units. This is a departure from my normal PPD metric (averaging the time-history results reported by Folding@Home’s servers), but was necessary due to the recent lack of work units caused by the surge in F@H users due to COVID-19.

Note: This will likely be the last test I do with my aging AMD FX-8320e based desktop, since the motherboard only supports PCI Express 2.0. That is not a problem for the cards tested here, but Folding@Home on very fast modern cards (such as the RTX 2080 Ti) shows a modest slowdown (around 10%) if the cards are limited by PCI Express 2.0 x16. Thus, in the next article, expect to see a new benchmark machine!

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Zotac GTX 1650 / EVGA GTX 1660 Super (cards under test)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

Goal of the Testing

For those of you who have been following along, you know that the point of this blog is to determine not only which hardware configurations can fight the most cancer (or coronavirus), but to determine how to do the most science with the least amount of electrical power. This is important. Just because we have all these diseases (and computers to combat them with) doesn’t mean we should kill the planet by sucking down untold gigawatts of electricity.

To that end, I will be reporting the following:

  • Net Worth of Science Performed: Points Per Day (PPD)
  • System Power Consumption (Watts)
  • Folding Efficiency (PPD/Watt)

As a side note, I used MSI Afterburner to reduce the GPU power limit of the GTX 1660 Super and GTX 1650 to the minimum allowed by the driver / board vendor (in this case, 56% for the 1660 Super and 50% for the 1650). This is because my previous testing, plus results from various people on the Folding@Home forums and elsewhere, has shown that reducing the power cap on the card yields an efficiency boost. Let’s see if that holds true for the Turing architecture!

Performance

The following plots show the two new Turing architecture cards relative to everything else I have tested. As can be seen, these little cards punch well above their weight class, with the GTX 1660 Super and GTX 1650 giving the 1070 Ti and 1060 a run for their money. Also, the power throttling applied to the cards did reduce raw PPD, but not by too much.

Nvidia GTX 1650 and 1660 performance

Power Draw

This is the plot where I was most impressed. In the summer, any Folding@Home I do directly competes with the air conditioning. Running big graphics cards, like the 1080 Ti, makes my power bill go crazy, not only from the computer itself but also from the increased air conditioning required.

Thus, for people in hot climates, extra consideration should be given to the overall power consumption of your Folding@Home computer. With the GTX 1660 Super running in reduced power mode, I was able to get a total system power consumption of just over 150 watts while still making over 500K PPD! That’s not half bad. On the super low power end, I was able to beat the GTX 1050’s power consumption level…getting my beastly FX-8320e 8-core rig to draw 125 watts total while folding was quite a feat. The best thing was that it still made almost 300K PPD, which is well above last generation’s small cards.

Nvidia GTX 1650 and 1660 Power Consumption

Efficiency

This is my favorite part. How do these low-power Turing cards do on the efficiency scale? This is simply looking at how many PPD you can get per watt of power draw at the wall.

Nvidia GTX 1650 and 1660 Efficiency

And…wow! Just wow. For about $220 new, you can pick up a GTX 1660 Super and be just as efficient as the previous generation’s top card (the GTX 1080 Ti), which still goes for $400-500 used on eBay. Sure, the 1660 Super won’t be as good of a gaming card, and it makes only about two-thirds the PPD of the 1080 Ti, but on an energy efficiency metric it holds its own.

The GTX 1650 did pretty well too, coming in somewhere toward the middle of the pack. It is still much more efficient than the similar market-segment card of the previous generation (the GTX 1050), but it is hampered overall by not being able to return work units as quickly to the scientists, who reward fast work with bonus points (the Quick Return Bonus).

Conclusion

NVIDIA’s entry-level Turing architecture graphics cards perform very well in Folding@Home, both from a performance and an efficiency standpoint. They offer significant gains relative to legacy cards, and can be a good option for a budget Folding@Home build.

Join My Team!

Interested in fighting COVID-19, Cancer, Alzheimer’s, Parkinson’s, and many other diseases with your computer? Please consider downloading Folding@Home and joining Team Nuclear Wessels (54345). See my tutorial here.

Folding@Home Review: NVIDIA GeForce GTX 1080 Ti

Released in March 2017, Nvidia’s GeForce GTX 1080 Ti was the top-tier card of the Pascal line-up. This is the graphics card that super-nerds and gamers drooled over. With an MSRP of $699 for the base model, board partners such as EVGA, Asus, Gigabyte, MSI, and Zotac (among others) all quickly jumped on board (pun intended) with custom designs costing well over the MSRP, as well as their own takes on the reference design.

GTX 1080 Ti Reference EVGA

EVGA GeForce GTX 1080 Ti – Reference

Three years later, with the release of the RTX 2080 Ti, the 1080 Ti still holds its own, and still commands well over $400 on the used market. These are beastly cards, capable of running most games with max settings in 4K resolutions.

But, how does it fold?

Folding@Home

Folding@Home is a distributed computing project originally developed by Stanford University, where everyday users can lend their PC’s computational horsepower to help disease researchers understand and fight things like cancer, Alzheimer’s, and most recently the COVID-19 coronavirus. Users’ computers solve molecular dynamics problems in the background, which help the Folding@Home Consortium understand how proteins “misfold” to cause disease. For computer nerds, this is an awesome way to give (money–>electricity–>computer work–>fighting disease).

Folding@Home (or F@H) can be run on both CPUs and GPUs. CPUs provide a good baseline of performance, and certain molecular simulations can only be done on them. However, GPUs, with their massively parallel shader cores, can do certain types of single-precision math much faster than CPUs. GPUs provide the majority of the computational performance of F@H.

Geforce GTX 1080 Ti Specs

The 1080 Ti sits at the top of Nvidia’s 10-series lineup.

1080 Ti Specs

With 3584 CUDA Cores, the 1080 Ti is an absolute beast. In benchmarks, it holds its own against the much newer RTX cards, besting even the RTX 2080 and matching the RTX 2080 Super. Only the RTX 2080 Ti is decidedly faster.

Folding@Home Testing

Testing was performed on my old but trusty benchmark machine, running Windows 10 Pro and using Stanford’s V7 client. The Nvidia graphics driver version was 441.87. Power consumption measurements were taken at the system level with a P3 Kill A Watt meter at the wall.

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: EVGA 1080 Ti (Reference Design)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

I did extensive testing of the 1080 Ti over many weeks. Folding@Home rewards donors with “Points” for their contributions, based on how much science is done and how quickly it is returned. A typical performance metric is “Points per Day” (PPD). Here, I have averaged my Points Per Day results out over many work units to provide a consistent number. Note that any given work unit can produce more or less PPD than the average, with variation of 10% being very common. For example, here are five screen shots of the client, showing five different instantaneous PPD values for the 1080 Ti.
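For anyone wondering what the averaging looks like, here’s a trivial sketch with five made-up instantaneous estimates (the real values are in the screen shots below). The spread between the best and worst work unit illustrates why a single client snapshot isn’t a trustworthy performance number:

```python
import statistics

# Five hypothetical instantaneous client PPD estimates for one card;
# real work units commonly scatter about +/- 10% around the true average.
samples = [1_050_000, 1_120_000, 980_000, 1_150_000, 1_080_000]

mean_ppd = statistics.mean(samples)
spread = (max(samples) - min(samples)) / mean_ppd
print(f"{mean_ppd:,.0f} PPD average, {spread:.0%} spread")  # ~1.08M PPD, ~16%
```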


GTX 1080 Ti Folding@Home Performance

The following plot shows just how fast the 1080 Ti is compared to other graphics cards I have tested. As you can see, with nearly 1.1 Million PPD, this card does a lot of science.

1080 Ti Folding Performance

GTX 1080 Ti Power Consumption

With a board power rating of 250 Watts, this is a power hungry graphics card. Thus, it isn’t surprising to see that power consumption is at the top of the pack.

1080 Ti Folding Power

GTX 1080 Ti Efficiency

Power consumption alone isn’t the whole story. Being a blog about doing the most work possible for the least amount of power, I am all about finding Folding@Home hardware that is highly efficient. Here, efficiency is defined as Performance Out / Power In. So, for F@H, it is PPD/Watt. The best F@H hardware is gear that maximizes disease research (performance) done per watt of power consumed.

Here’s the efficiency plot.

1080 Ti Folding Efficiency

Conclusion

The GeForce GTX 1080 Ti is the fastest and most efficient graphics card that I’ve tested so far for Stanford’s Folding@Home distributed computing project. With a raw performance of nearly 1.1 million PPD in Windows and an efficiency of almost 3500 PPD/Watt, this card is a good choice for doing science effectively.

Stay tuned to see how Nvidia’s latest Turing architecture stacks up.

GTX 460 Graphics Card Review: Is Folding on Ancient Hardware Worth It?

Recently, I picked up an old Core 2 Duo build on eBay for $25 + shipping. It was missing some pieces (graphics card, drives, etc.), but it was a good deal, especially for the all-metal Antec P182 case and the included Corsair PSU + Antec 3-speed case fans. So, I figured what the heck, let’s see if this vintage rig can fold!

Antec 775 Purchase

To complement this old Socket 775 build, I picked up a well loved EVGA GeForce GTX 460 on eBay for a grand total of $26.85. It should be noted that this generation of Nvidia graphics cards (based on the Fermi architecture from back in 2010) is the oldest GPU hardware that is still supported by Stanford. It will be interesting to see how much science one of these old cards can do.

GTX 460 Purchase

I supplied a dusty Western Digital 640 Black Hard Drive that I had kicking around, along with a TP Link USB wireless adapter (about $7 on Amazon). The Operating System was free (go Linux!). So, for under $100 I had this setup:

  • Case: Antec P182 Steel ATX
  • PSU: Corsair HX 520
  • Processor: Intel Core2duo E8300
  • Motherboard: EVGA nForce 680i SLI
  • Ram: 2 x 2 GB DDR2 6400 (800 MHz)
  • HDD: Western Digital Black 640GB
  • GPU: EVGA GeForce GTX 460
  • Operating System: Ubuntu Linux 18.04
  • Folding@Home Client: V7

I fired up folding, and after some fiddling I got it running nice and stable. The first thing I noticed was that the power draw was higher than I had expected. Measured at the wall, this vintage folding rig was consuming a whopping 220 Watts! That’s a good deal more than the 185 watts that my main computer draws when folding on a modern GTX 1060. Some of this is due to differences in hardware configuration between the two boxes, but one thing to note is that the older GTX 460 has a TDP of 160 watts, whereas the GTX 1060 has a TDP of only 120 Watts.

Here’s a quick comparison of the GTX 460 vs the GTX 1060. At the time of their release, both of these cards were Nvidia’s baseline GTX model, offering serious gaming performance for a better price than the more aggressive x70- and x80-series variants. I threw a GTX 1080 into the table for good measure.

GTX 460 Spec Comparison

GTX 460 Specification Comparison

The key takeaways here are that six years later, the equivalent graphics card to the GTX 460 was over three and a half times faster while using forty watts less power.

Power Consumption

I typically don’t report power consumption directly, because I’m more interested in optimizing efficiency (doing more work for less power). However, in this case, there is an interesting point to be made by looking at the wattage numbers directly. Namely, the GTX 460 (a mid-range card) uses almost as much power as a modern high-end GTX 1080, and uses seriously more power than the modern GTX 1060 mid-range card. Note: these power consumption numbers must be taken with a grain of salt, because the GTX 460 was installed in a different host system (the Core 2 Duo rig) than the other cards, but the results are still telling. This is also consistent with the advertised TDP of the GTX 460, which is 40 watts higher than the GTX 1060’s.

GTX 460 Power Consumption (Wall)

Total System Power Consumption

Folding@Home Results

Folding on the old GTX 460 produced a rough average of 20,000 points per day, with the normal +/- 10% variation in production seen between work units. Back in 2006 when I was making a few hundred PPD on an old Athlon 64 X2 CPU, this would have been a huge amount of points! Nowadays, this is not so impressive. As I mentioned before, the power consumption at the wall for this system was 220 Watts. This yields an efficiency of 20,000 PPD / 220 Watts = 90 PPD/Watt.

Based on relative performance alone, one would expect the six-year-newer GTX 1060 to produce somewhere between 3 and 4 times as many PPD as the older 460, or roughly 60-80K PPD. However, my GTX 1060 frequently produces over 300K PPD. This is due to Stanford’s Quick Return Bonus, which essentially rewards donors for doing science quickly. You can read more about this incentive-based points system at Stanford’s website. The gist is, the faster you return a work unit to the scientists, the sooner they can get to developing cures for diseases. Thus, they award you more points for fast work. As the performance plot below shows, this quick return bonus really adds up, so that a card that is only 3-4 times faster in linear benchmark terms (GTX 1060 vs. GTX 460) ends up delivering 15 times the F@H performance.
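The published bonus formula, as I understand it, is final points = base points * max(1, sqrt(k * deadline / elapsed)), where k is a per-project constant. Here’s a hedged sketch showing how the bonus compounds with speed; every specific number below is hypothetical, chosen only to show the shape of the curve:

```python
import math

def wu_points(base_points, k, deadline_days, elapsed_days):
    # Quick Return Bonus, per the published formula (as I understand it):
    # final = base * max(1, sqrt(k * deadline / elapsed))
    return base_points * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

# Hypothetical work unit: base credit 10,000, k = 0.75, 10-day deadline.
slow = wu_points(10_000, 0.75, 10.0, elapsed_days=4.0)  # ~13,700 points
fast = wu_points(10_000, 0.75, 10.0, elapsed_days=1.0)  # ~27,400 points

# The 4x-faster card also finishes 4x as many work units per day, so the
# advantage compounds: 4x the raw speed yields ~8x the points per day here.
print((fast * 4) / slow)  # ~8.0
```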

GTX 460 Performance and Efficiency

Old vs. New Graphics Card Comparison: Folding@Home Efficiency and PPD

This being a blog about energy-conscious computing, I’d be remiss if I didn’t point out just how inefficient the ancient GTX 460 is compared to the newer cards. Due to its relatively high power consumption for a midrange card, the GTX 460 is eighteen times less efficient than the GTX 1060, and a whopping thirty-three times less efficient than the GTX 1080.

Conclusion

Stanford eventually drops support for old hardware (anyone remember PS3 folding?), and it might not be long before they do the same for Fermi-based GPUs. Compared with relatively modern GPUs, the GTX 460 just doesn’t stack up in 2020. Now that the 10-series cards are almost four years old, you can often get GTX 1060s for less than $200 on eBay, so if you can afford to build a folding rig around one of these cards, it will be 18 times more efficient and make 15 times more points.

Still, I only paid about $100 total to build this vintage folding@home rig for this experiment. One could argue that putting old hardware to use like this keeps it out of landfills and still does some good work. Additionally, if you ignore bonus points and look at pure science done, the GTX 460 is “only” about 4 times slower than its modern equivalent.

Ultimately, for the sake of the environment, I can’t recommend folding on graphics cards that are many years out of date, unless you plan on using the machine as a space heater to offset heating costs in the winter. More on that later…

Addendum

Since doing the initial testing and outline for this article, I picked up a GTX 480 and a few GTX 980 Ti cards. Here are some updated plots showing these cards added to the mix. The GTX 480 was tested in the Core2 build, and the GTX 980 Ti in my standard benchmark rig (AMD FX-based Socket AM3 system).

Various GPU Power Consumption

GTX 980 and 480 Performance

GTX 980 and 480 Efficiency

I think the conclusion holds: even though the GTX 480 is slightly faster and more efficient than its little brother, it is still leaps and bounds worse than the more modern cards. The 980 Ti, being a top-tier card from a few generations back, holds its own nicely, and is almost as efficient as a GTX 1060. I’d say that the 980 Ti is still a relatively efficient card to use in 2020 if you can get one for cheap enough.

AMD Radeon RX 580 8GB Folding@Home Review

Hello again.

Today, I’ll be reviewing the AMD Radeon RX 580 graphics card in terms of its computational performance and power efficiency for Stanford University’s Folding@Home project. For those that don’t know, Folding@Home lets users donate their computer’s computations to support disease research. This consumes electrical power, and the point of my blog is to look at how much scientific work (Points Per Day or PPD) can be computed for the least amount of electrical power consumption. Why? Because in trying to save ourselves from things like cancer, we shouldn’t needlessly pollute the Earth. Also, electricity is expensive!

The Card

AMD released the RX 580 in April 2017 with an MSRP of $229. This is an updated card based on the Polaris architecture. I previously reviewed the RX 480 (also Polaris) here, for those interested. I picked up my MSI-flavored RX 580 in 2019 on eBay for about $120, which is a pretty nice depreciated value. Those who have been following along know that I prefer to buy used video cards that are 2-3 years old, because of the significant initial cost savings, and the fact that I can often sell them for the same as I paid after running Folding for a while.

RX_580

MSI Radeon RX 580

I ran into an interesting problem installing this card: at 11 inches long, it was about a half inch too long for my old Raidmax Sagitta gaming case. The solution was to take the fan shroud off, since that was the part sticking out ever so slightly. This involved an annoying amount of disassembly, since the fans actually needed to be removed from the heat sink for the plastic shroud to come off. Reattaching the fans was a pain (you need a tiny screwdriver that can fit between the fan blade gaps to get at the screws by the hub).

RX_580_noShroud

RX 580 with Fan Shroud Removed. Look at those heat pipes! This card has a 185 Watt TDP (Board Power Rating). 

RX_580_Installed

RX 580 Installed (note the masking tape used to keep the little side LED light plate off of the fan)

RX_580_tightFit

Now That’s a Tight Fit (the PCI Express Power Plug on the video card is right up against the case’s hard drive bays)

The Test Setup

Testing was done on my rather aged, yet still able, AMD FX-based system using Stanford’s Folding@Home V7 client. Since this is an AMD graphics card, I made sure to switch the video card mode to “compute” within the driver panel. This optimizes things for Folding@home’s workload (as opposed to games).

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: MSI Radeon RX 580 8GB
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 19.10.1


Performance and Power

I ran the RX 580 through its paces for about a week in order to get a good feel for a variety of work units. In general, the card produced as high as 425,000 points per day (PPD), as reported by Stanford’s servers. The average was closer to 375K PPD, so I used that number as my final value for uninterrupted folding. Note that during my testing, I occasionally used the machine for other tasks, so you can see the drops in production on those days.

RX 580 Client

Example of Client View – RX 580

RX580 History

RX 580 Performance – About 375K PPD

I measured total system power consumption at the wall using my P3 Watt Meter. The system averaged about 250 watts. That’s on the higher end of power consumption, but then again this is a big card.

Comparison Plots

RX 580 Performance

AMD Radeon RX 580 Folding@Home Performance Comparison

RX 580 Efficiency

AMD Radeon RX 580 Folding@Home Efficiency Comparison

Conclusion

For $120 used on eBay, I was pretty happy with the RX 580’s performance. When it was released, it was directly competing with Nvidia’s GTX 1060. All the gaming reviews I read showed that Team Red was indeed able to beat Team Green, with the RX 580 scoring 5-10% faster than the 1060 in most games. The same is true for Folding@Home performance.

However, that is not the end of the story. Where the Nvidia GTX 1060 has a 120 Watt TDP (Thermal Design Power), AMD’s RX 580 needs 185 Watts. It is a hungry card, and that shows up in the efficiency plots, which take the raw PPD (performance) and divide out the power consumption in watts measured at the wall. Here, the RX 580 falls a bit short, although it is still a healthy improvement over the previous generation RX 480.

Thus, if you care about CO2 emissions and the cost of your folding habits on your wallet, I am forced to recommend the GTX 1060 over the RX 580, especially because you can get one used on eBay for about the same price. However, if you can get a good deal on an RX 580 (say, for $80 or less), it would be a good investment until more efficient cards show up on the used market.

Folding@Home: Nvidia GTX 1080 Review Part 3: Memory Speed

In the last article, I investigated how the power limit setting on an Nvidia Geforce GTX 1080 graphics card could affect the card’s performance and efficiency for doing charitable disease research in the Folding@Home distributed computing project. The conclusion was that a power limit of 60% offers only a slight reduction in raw performance (Points Per Day), but a large boost in energy efficiency (PPD/Watt). Two articles ago, I looked at the effect of GPU core clock. In this article, I’m experimenting with a different variable. Namely, the memory clock rate.

The effect of memory clock rate on video games is well defined. Gamers looking for the highest frame rates typically overclock both their GPU core and memory speeds, and see benefits from both. For computation projects like Stanford University’s Folding@Home, the results aren’t as clear. I’ve seen arguments made both ways in the hardware forums. The intent of this article is to simply add another data point, albeit with a bit more scientific rigor.

The Test

To conduct this experiment, I ran the Folding@Home V7 GPU client for a minimum of 3 days continuously on my Windows 10 test computer. Folding@Home points per day (PPD) numbers were taken from Stanford’s Servers via the helpful team at https://folding.extremeoverclocking.com.  I measured total system power consumption at the wall with my P3 Kill A Watt meter. I used the meter’s KWH function to capture the total energy consumed, and divided out by the time the computer was on in order to get an average wattage value (thus eliminating a lot of variability). The test computer specs are as follows:
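In other words, the averaging is just total energy divided by elapsed time. A one-liner, with hypothetical meter readings for illustration:

```python
# Average wall power from the Kill A Watt's cumulative KWH reading.
kwh_consumed = 18.5    # hypothetical meter reading over the test window
hours_elapsed = 72.0   # e.g., 3 days of continuous folding

average_watts = kwh_consumed * 1000 / hours_elapsed
print(f"{average_watts:.0f} W average")  # ~257 W
```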

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Asus GeForce 1080 Turbo
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 372.90

I ran this test with the memory clock rate at the stock clock for the P2 power state (4500 MHz), along with the gaming clock rate of 5000 MHz and a reduced clock rate of 4000 MHz. This gives me three data points of comparison. I left the GPU core clock at +175 MHz (the optimum setting from my first article on the 1080 GTX) and the power limit at 100%, to ensure I had headroom to move the memory clock without affecting the core clock. I verified I wasn’t hitting the power limit in MSI Afterburner.

*Update. Some people may ask why I didn’t go beyond the standard P0 gaming memory clock rate of 5000 MHz (same thing as 10,000 MHz double data rate, which is the card’s advertised memory clock). Basically, I didn’t want to get into the territory where the GDDR5’s error checking comes into play. If you push the memory too hard, there can be errors in the computation but work units can still complete (unlike a GPU core overclock, where work units will fail due to errors). The reason is the built-in error checking on the card memory, which corrects errors as they come up but results in reduced performance. By staying away from 5000+ MHz territory on the memory, I can ensure the relationship between performance and memory clock rate is not affected by memory error correction.

1080 Memory Boost Example

Memory Overclocking Performed in MSI Afterburner

Tabular Results

I put together a table of results in order to show how the averaging was done, and the number of work units backing up my +500 MHz and -500 MHz data points. Having a bunch of work units is key, because there is significant variability in PPD and power consumption numbers between work units. Note that the performance and efficiency numbers for the baseline memory speed (+0 MHz, aka 4500 MHz) come from my extended testing baseline for the 1080 and have even more sample points.

Geforce 1080 PPD Production - Ram Study

Nvidia GTX 1080 Folding@Home Production History: Data shows increased performance with a higher memory speed

Graphic Results

The following graphs show the PPD, Power Consumption, and Efficiency curves as a function of graphics card memory speed. Since I had three points of data, I was able to do a simple three-point-curve linear trendline fit. The R-squared value of the trendline shows how well the data points represent a linear relationship (higher is better, with 1 being ideal). Note that for the power consumption, the card seems to have used more power with a lower memory clock rate than the baseline memory clock. I am not sure why this is…however, the difference is so small that it is likely due to work unit variability or background tasks running on the computer. One could even argue that all of the power consumption results are suspect, since the changes are so small (on the order of 5-10 watts between data points).

Geforce 1080 Performance vs Ram Speed

Geforce 1080 Power vs Ram Speed

Geforce 1080 Efficiency vs Ram Speed
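For anyone who wants to reproduce the trendline math, here’s a minimal sketch. The PPD values below are placeholders roughly matching my results; the real numbers live in the table and plots above:

```python
import numpy as np

x = np.array([-500.0, 0.0, 500.0])               # memory clock offset (MHz)
y = np.array([704_000.0, 745_000.0, 785_000.0])  # placeholder PPD values

slope, intercept = np.polyfit(x, y, 1)  # least-squares linear trendline
y_fit = slope * x + intercept

# R-squared: how well the straight line explains the three data points.
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
print(slope, intercept, 1 - ss_res / ss_tot)
```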

Conclusion

Increasing the memory speed of the Nvidia Geforce GTX 1080 results in a modest increase in PPD and efficiency, and arguably a slight increase in power consumption. The difference between the fastest (+500 MHz) and slowest (-500 MHz) data points I tested are:

PPD: +81K PPD (11.5%)

Power: +9.36 Watts (3.8%)

Efficiency: +212.8 PPD/Watt (7.4%)

Keep in mind that these are for a massive difference in RAM speed (5000 MHz vs 4000 MHz).

Another way to look at these results is that underclocking the graphics card ram in hopes of improving efficiency doesn’t work (you’ll actually lose efficiency). I expect this trend will hold true for the rest of the Nvidia Pascal series of cards (GTX 10xx), although so far my testing of this has been limited to this one card, so your mileage may vary. Please post any insights if you have them.

NVIDIA GEFORCE GTX 1080 Folding@Home Review (Part 2)

Welcome back. In the last article, I found that the GeForce GTX 1080 is an excellent graphics card for contributing to Stanford University’s charitable distributed computing project Folding@Home. For Part 2 of the review, I did some extended testing to determine the relationship between the card’s power target and Folding@Home performance & efficiency.

Setting the graphics card’s power target to something less than 100% essentially throttles the card back (lowers the core clock) to reduce power consumption and heat. Performance generally drops off, but computational efficiency (performance per watt of power used) can be a different story, especially for Folding@Home. If the amount of power consumed by the card drops off faster than the card’s performance (measured in Points Per Day for Folding@Home), then the efficiency actually goes up!
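A quick illustrative example (numbers roughly in line with the results later in this article, not exact measurements):

```python
# Power drops faster than performance, so PPD/Watt goes up.
ppd_100, watts_100 = 780_000, 265  # illustrative 100% power target
ppd_60, watts_60 = 700_000, 200    # illustrative 60% power target

print(ppd_100 / watts_100)  # ~2,943 PPD/Watt
print(ppd_60 / watts_60)    # ~3,500 PPD/Watt  <- efficiency went UP
```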

Test Methodology

The test computer and environment was the same as in Part 1. Power measurements were made at the wall with a P3 Kill A Watt meter, using the KWH function to track the total energy used by the computer and then dividing by the recorded uptime to get an average power over the test period. Folding@Home PPD Returns were taken from Stanford’s collection servers.

To gain useful statistics, I set the power limit on the graphics card driver via MSI Afterburner and let the card run for a week at each setting. Averaging the results over many days is needed to reduce the variability seen across work units. For example, I used an average of 47 work units to come up with the performance of 715K PPD for the 80% Power Limit case:

Work Unit Averaging

80% Power Limit: Average PPD Calculation over Six Days

The only outliers I tossed were one day when my production was messed up by thunderstorms (unplug your computers if there is lightning!), plus one of the days at the 60% power setting, where for some reason the card did almost 900K PPD (probably a string of high-value work units). Other than that, the data was not massaged.

I tested the card at 100% power target, then at 80%, 70%, 60%, and 50% (90% did not result in any differences vs 100% because folding doesn’t max out the graphics card, so essentially it was folding at around 85% of the card’s power limit even when set to 90% or 100%).

FAH 1080 Power Target Example

Setting the Power Limit in MSI Afterburner

I left the core clock boost setting the same as my final test value from the first part of this review (+175 MHz). Note that this won’t force the card to run at a set faster speed…the power limit constantly being hit causes the core clock to drop. I had to reduce the power limit to 80% to start seeing an effect on the core clock. Further reductions in power limit show further reductions in clock rate, as expected. The approximate relationship between power limit and core clock was this:

Core Clock vs Power Limit

GTX 1080 Core Clock vs. Power Limit

Results

As expected, the card’s raw performance (measured in Points Per Day) drops off as the power target is lowered.

GTX 1080 Performance Part 2

Folding@Home Performance


The system power consumption plot is also very interesting. As you can see, I’ve shaved a good amount of power draw off of this build by downclocking the card via the power limit.

GTX 1080 Power Consumption


By far, the most interesting result is what happens to the efficiency. Basically, I found that efficiency increases (to a point) with decreasing power limit. I got the best system efficiency I’ve ever seen with this card set to 60% power limit (50% power limit essentially produced the same result).

GTX 1080 Efficiency Part 2

Folding@Home Efficiency

Conclusion

For NVIDIA’s GeForce GTX 1080, decreasing the graphics card’s power limit can actually improve the card’s efficiency for computational work in Folding@Home. This is similar to what I found when reviewing the 1060. My recommended setting for the 1080 is a power limit of 60%, because that provides a system efficiency of nearly 3500 PPD/Watt while maintaining a raw performance of almost 700K PPD.