AMD Radeon RX 480 Folding@Home Review

I’ve been reviewing a lot of Nvidia cards lately, so it’s high time I mixed it up a bit. AMD’s RX 400 series of cards was released in June 2016 and featured AMD’s new 14 nm Polaris architecture. The flagship card, the RX 480, was available in 4 GB and 8 GB versions. Polaris, which in the RX 480 provides 2304 stream processors at a base clock of 1120 MHz (1266 MHz boost) with a TDP of 150 watts, was designed to be more efficient than the aging 28 nm GCN parts that powered the R5/R7/R9 300 series and the Fiji-based Fury cards.

These cards can now be obtained relatively inexpensively on eBay, so I picked up a second-hand 8 GB card from XFX for $90. Let’s see how it folds compared to some similar Nvidia graphics cards from that time period, namely the GTX 1050 and GTX 1060.

 

IMG_20190202_165117036

XFX Radeon RX 480 – 8GB – 150 Watt TDP

Folding@Home testing was done in Windows 10 on my AMD FX-based test system, using folding@home client version 7.5.1. The GPU slot options were configured as usual for maximum points per day (PPD):

Name: client-type  Value: advanced

Name: max-packet-size Value: big
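For reference, these options live in the V7 client’s config.xml, inside the GPU slot definition. Here is a minimal sketch of what that stanza looks like; the slot id and the rest of the file will differ per machine, so treat it as illustrative rather than a drop-in config:

    <config>
      <!-- GPU folding slot; the id depends on your machine's slot numbering -->
      <slot id='1' type='GPU'>
        <client-type v='advanced'/>
        <max-packet-size v='big'/>
      </slot>
    </config>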

The video driver used was Crimson ReLive 17.7, which includes an essential option for running compute jobs like Folding@Home: the ‘Compute’ setting for GPU Workload. As previously reported by other folders, this setting can offer a significant performance improvement over the default ‘Graphics’ setting. I tested it both ways.

AMD Compute Mode

Make sure to set GPU Workload to ‘Compute’ for running Folding@Home Work Units!

Monitoring of the card while folding was done with MSI Afterburner. My particular XFX version of the card got up to about 76 degrees C while folding, which is pretty warm but not dangerous. The fan settings were on auto, and the fan spun nice and quietly at just over 50 percent speed. GPU usage was nicely maxed out at 100 percent, which is something you don’t typically see on Nvidia cards in Windows. As expected, Folding@Home doesn’t use the full 150 watt TDP. The power usage reported at the card bounced around but centered on about 110 watts. Although the actual power usage is expected to come in below the TDP, this is a lot less, especially considering the 100 percent GPU usage. I suspected something fishy, since my total system power consumption was pretty high (more on that later).

RX 480 Stock Settings

RX 480 Settings while Folding

Initially, I tested the driver setting to see if there was a difference between ‘Graphics’ and ‘Compute’ mode. Although I didn’t see much change in power consumption (hard to tell, since it bounces around), the PPD as reported by the client did change. Note that for this testing, I just flipped the switch and observed the time-averaged PPD results reported by the client. The key is that the project (14152) was the same in both cases, so the results are directly comparable.

In Graphics Mode:

PPD (Estimated) = 290,592; TPF (Estimated) = 3 minutes 12 seconds

In Compute Mode:

PPD (Estimated) = 304,055; TPF (Estimated) = 2 minutes 59 seconds

That is about a 4.6 percent increase in PPD ((304,055 - 290,592) / 290,592), and a 13-second improvement in TPF, just from flipping a switch. In short, on AMD cards running Folding@Home, always use Compute mode.

Here are the screenshots from the client to back this up:

RX 480 Graphics Mode Client View

AMD RX 480 – Graphics Mode

RX 480 Compute Mode Client View

AMD RX 480 – Compute Mode

If you’ve been following along, you know I don’t like to rely on the client’s estimated values for overall PPD numbers. The reason is that it is just an estimate, and it varies a lot between work units. However, for this quick test of graphics vs. compute mode on the same work unit, the results are consistent with those found by other testers.

Overall Performance and Efficiency

I like to run cards for a few days on a variety of work units in order to get some statistics, which I can average to provide more certain results. In this case, I ran folding@home on my RX 480 for over three days. Here are the stats from Stanford’s server, as reported by the kind folks over at Extreme Overclocking.

RX 480 Stats History

Folding @ Home Server Statistics – AMD RX 480 Over 3 Days

As you can see, the average of about 245K PPD isn’t that impressive, although to be fair the other cards on this plot all sit at higher price points, except possibly the 1060. I also think this card has the potential to churn out over 300K PPD, as estimated by the client. This thread seems to suggest that is possible, although the card in that test was overclocked to 1328 MHz vs. the 1288 MHz mine was running (I didn’t have time to do any overclock testing on mine).

Power consumption measured at the wall varied a bit with the different work units. Spot-checking the numbers with my P3 watt meter gave an approximate average total system power consumption of 243 watts. This is much higher than with my EVGA GTX 1060 (185 watts at the wall). Going by the TDPs of the two cards alone, I would have expected roughly 215 watts at the wall (the RX 480’s TDP is 30 watts higher than the 1060’s).

I ended up selling this card on eBay a lot faster than I had planned, so I wasn’t able to do detailed testing. However, I suspect the actual power consumption at the card was much higher than what MSI Afterburner reported. After doing some research, it turns out the RX 480 is known to overdraw from both the PCI Express slot and the supplemental PCI-E power cable. For a card designed to be efficient, that’s a failure.

Performance Comparison

RX 480 Performance Plot

AMD RX 480 Folding@Home Performance Comparison

Efficiency Comparison

RX 480 Efficiency Plot

AMD RX 480 Folding@Home Efficiency Comparison

Conclusion

The AMD RX 480 produces about 245K PPD while using a surprisingly high 243 watts of system power (measured at the wall). The efficiency is thus about 1000 PPD/Watt. Although better than AMD’s older cards such as the Radeon HD 7970, these numbers aren’t very competitive, especially when compared to Nvidia’s GTX 1060 (a similarly priced card from 2016). As of Feb. 2019, the RX 480 can be obtained used for about $100, and the GTX 1060 for $120. If you’re considering buying one of these older cards to do some charitable science with Folding@Home, I recommend spending the extra $20 on the Nvidia 1060, especially because with a mild overclock and a few driver tweaks (use the 372.90 drivers), the Nvidia 1060 can crank out over 350K PPD.

TL;DR: The AMD RX 480 isn’t a very efficient graphics card for running Folding@Home. However, the XFX Version has Pretty Lights…

RX 580 by XFX

Ahh, pretty lights!


Folding@Home Efficiency vs. GPU Power Limit

Folding@Home: The Need for Efficiency

Distributed computing projects like Stanford University’s Folding@Home sometimes get a bad rap on account of all the power that is consumed in the name of science.  Critics argue that any potential gains that are made in the area of disease research are offset by the environmental damage caused by thousands of computers sucking down electricity.

This blog hopes to find a balance by optimizing the way the computational research is done. In this article, I’m going to show how a simple setting in the graphics card driver can improve Folding@Home’s Energy Efficiency.

This article’s testing uses an Nvidia graphics card, but the general idea should also work with AMD cards. The specific card here is an EVGA GeForce GTX 1060 (6 GB). Green F@H review here: Folding on the NVidia GTX 1060

If you are folding on a CPU, similar efficiency improvements can be achieved by optimizing the clock frequencies and voltages in the BIOS.  For an example on how to do this, see these posts:

F@H Efficiency: AMD Phenom X6 1100T

F@H Efficiency: Overclock or Undervolt?

(at this point in time I really just recommend folding on a GPU for optimum production and efficiency)

GPU Power Limit Overview

The GPU power limit slider is a quick way to control how much power the graphics card is allowed to draw. Typically, graphics cards are optimized for speed, with efficiency as a secondary goal (if it is a goal at all). When a graphics card is pushed harder, it will draw more power (until it runs into the power limit). Today’s graphics cards will also boost their clock rate when loaded, and reduce it when the load goes away. Sometimes a few extra MHz can be had for minimal extra power, but go too far and the power needed to drive the card grows out of all proportion to the extra speed. Sure, the card is doing a bit more work (or playing a game a bit faster), but the heaps of extra power needed to get there make it very inefficient.

What I’m going to quickly show is that going the other way (reducing power) can actually improve efficiency, albeit at a reduction in raw output. For this quick test, I’m just going to look at the default power limit, 100%, vs. 50%. Specific tuning will depend on your actual graphics card, but with a few days at different settings, you should be able to find a happy balance between performance and efficiency.

For these plots, I used my watt meter to obtain actual power consumption at the wall. You can read about my watt meters here.

Changing the Power Limit

A tool such as MSI Afterburner can be used to view the graphics card’s settings, including the power limit. In the screenshot below, I reduced the card’s power limit to 50% midway through taking data. You can clearly see the power consumption and GPU temperature drop. This suggests the entire computer should be drawing less power from the wall, which I confirmed with my watt meter.

Adjust Power Limit MSI Afterburner

MSI Afterburner is used to reduce the graphics card’s power limit.
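If you would rather script this than click through Afterburner, recent Nvidia drivers expose the same limit through the nvidia-smi tool. A sketch is below; driver support is an assumption on my part (some GeForce cards and drivers refuse it), the allowed range is clamped by the driver, and setting the limit requires an administrator prompt:

    :: show the current limit and the instantaneous draw
    nvidia-smi --query-gpu=power.limit,power.draw --format=csv
    :: set a 60 watt limit (50% of this 1060's 120 watt TDP)
    nvidia-smi -pl 60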

Effect on Results

I ran the card for multiple days at each power setting and used Stanford’s actual stats to generate an averaged number for PPD. Reporting an average like this lends more confidence that the results are real, since PPD as reported in the client varies a lot with time, and can bounce around by +/- 10 percent between projects.

Below is the production time history plot, courtesy of https://folding.extremeoverclocking.com/. I marked on the plot the actual power consumption numbers I was seeing from my computer at the wall. As you can see, reducing the power limit on the 1060 from 100% to 50% saved about 40 watts of power at the wall.

GTX 1060 F@H Reduced Power Limit Production

GTX 1060 Folding@Home Performance at 100% and 50% Power

On the efficiency plot, you can see that reducing the power limit actually improved the 1060’s efficiency slightly. This is a great way to fold more efficiently.

Nvidia 1060 PPD per Watt Updated

NVidia GTX 1060 Folding@Home Efficiency Results

There is a downside of course, and that is raw production. The Points Per Day plot below shows a pretty big reduction in PPD for the power-limited 1060, although it is still beating its little brother, the 1050 Ti. One of the reasons PPD falls off so hard is that Stanford awards bonus points tied to how fast your computer can return a work unit, and those points grow steeply the faster the work comes back. So, by slowing the card down, we lose not only base points but the quick return bonus as well (the formula is sketched below).
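To make that concrete: the points FAQ gives the quick return bonus as final credit = base points * max(1, sqrt(k * deadline / elapsed)), which works out to PPD scaling with roughly the 1.5 power of speed. Here is a small Python sketch; the base points, k, and deadline values are made up for illustration:

    import math

    def wu_credit(base_points, k, deadline_days, elapsed_days):
        # quick return bonus formula from the Folding@Home points FAQ
        return base_points * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

    def ppd(base_points, k, deadline_days, elapsed_days):
        # credit per work unit times work units finished per day
        return wu_credit(base_points, k, deadline_days, elapsed_days) / elapsed_days

    # hypothetical work unit: 10,000 base points, k = 0.75, 6 day deadline
    full = ppd(10_000, 0.75, 6.0, 0.25)  # returning a work unit every 6 hours
    half = ppd(10_000, 0.75, 6.0, 0.50)  # the same work unit at half speed
    print(f"full: {full:,.0f} PPD, half: {half:,.0f} PPD, ratio: {half / full:.2f}")
    # half speed keeps only ~35% of the PPD (0.5^1.5), not 50%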

Nvidia 1060 PPD Updated

NVidia GTX 1060 Folding@Home Performance Results

Conclusion

Reducing the power limit on a graphics card can increase its computational energy efficiency in Folding@Home, although at the cost of raw PPD. There is probably a sweet spot for efficiency vs. performance at some power setting between 50% and 100%. This will likely be different for each graphics card. The process outlined above can be used for various power limit settings to find the best efficiency point.

 

Folding on the Nvidia GTX 1070

Overview

Folding@home is Stanford University’s charitable distributed computing project. It’s charitable because you can donate electricity, as converted into work through your home computer, to fight cancer, Alzheimer’s, and a host of other diseases.  It’s distributed, because anyone can run it with almost any desktop PC hardware.  But, not all hardware configurations are created equally.  If you’ve been following along, you know the point of this blog is to do the most work for as little power consumption as possible.  After all, electricity isn’t free, and killing the planet to cure cancer isn’t a very good trade-off.

Today we’re testing out Folding@home on an EVGA NVIDIA GTX 1070 graphics card.  This card offers a big step up in gaming and compute horsepower compared to the 1060 I reviewed previously, and is capable of pushing solid frame rates at 4K resolution. So, how well does it fold?

Card Specifications (Nvidia Reference Specs)

1070 specs

Nvidia GTX 1070 Specifications

evga 1070 acx stock photo

EVGA Nvidia GTX 1070 ACX 3.0 (photo credit: EVGA)

FOLDING@HOME TEST SETUP

For this test I used my normal desktop computer as the benchmark machine.  Testing was done using Stanford’s V7 client on Windows 10 64-bit running FAH Core 21 work units.  The video driver version used was initially 388.59, and subsequently 372.90. Power consumption measurements reported in the charts were taken at the wall and are thus full system power consumption numbers.

If you’re interested in reading about the hardware configuration of my test rig, it is summarized in this post:

https://greenfoldingathome.com/2017/04/21/cpu-folding-revisited-amd-fx-8320e-8-core-cpu/

Information on my watt meter readings can be found here:

I Got a New Watt Meter!

Initial Testing and Troubleshooting

Like the GTX 1060, the 1070 uses Nvidia’s Pascal architecture, which is very efficient and has a reputation for solid compute performance. The 1070 has 50% more CUDA cores than the 1060 (1920 vs. 1280), and with Folding@Home’s superlinear points system (the quick return bonus gives you more points for doing work quickly), we should see roughly double the PPD of the 1060, which does 300-350 thousand PPD depending on the work unit. Based on various people’s experiences, and especially this forum post, I was expecting the 1070 to produce somewhere in the range of 600-700K PPD.
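The arithmetic behind that guess: with the quick return bonus, credit scales with roughly speed^1.5, so if folding speed tracked CUDA core count (an assumption; clocks and the work unit mix matter too), 50% more cores would mean about 1.84x the points. A quick sketch, using the middle of the 1060’s PPD range as the baseline:

    speedup = 1920 / 1280               # GTX 1070 vs. GTX 1060 CUDA cores: 1.5x
    ppd_ratio = speedup ** 1.5          # quick return bonus: credit ~ speed^1.5
    print(round(ppd_ratio, 2))          # ~1.84
    print(round(325_000 * ppd_ratio))   # ~597,000 PPD from a ~325K PPD baseline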

That wasn’t what happened. The card wasn’t exactly slow, but initial testing showed an estimated 450 to 550K PPD, as reported by the client. I ran it for a few days, since PPD can vary a good deal depending on the work unit, but the result was unfortunately the same. 550K PPD was about as much as my card would do.

initial_1070_results

Initial GTX 1070 Results – 544K PPD

At first I thought it might be due to the card running hot. Unlike my test of a brand new 1060, I obtained my 1070 used off of eBay for a great price of $200 plus shipping. It was a little dusty, so I blew it all out and fired up MSI Afterburner to check out the temps. Unfortunately, the fans on the card weren’t even breaking a sweat, and it was nice and cool. Points didn’t increase.

evga 1070 acx 3.0

My Used EVGA GTX 1070 ACX 3.0 – eBay Price: $200

initial 1070 afterburner report

MSI Afterburner Report: NVidia GTX 1070, Stock Clocks, Driver 388.59

After doing some more digging, I ran across a few threads indicating that the 1070 (along with a few other GTX models) doesn’t always boost up to its maximum clock rates for compute loads. Opening a video, or Folding@home’s protein viewer, can sometimes force the card to clock up. I tried this and didn’t have any luck. My card was running at stock clocks, and the memory even appeared to be running 200 MHz below the 4000 MHz reference clock rate. This suggested the card was in a low-power mode.

Thankfully, Nvidia’s System Management Interface tool can be used to see what is going on. This tool, which in Windows 10 lives under C:\Program Files\NVIDIA Corporation, is accessed from the command line. I followed the tutorial here to learn a few things about what my 1070 was doing. Although that write-up is geared toward people mining cryptocurrency, the steps are still relevant.
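Here is the sort of query I ran (a sketch; the exact install path varies by driver version, and nvidia-smi --help-query-gpu lists the available fields):

    cd "C:\Program Files\NVIDIA Corporation\NVSMI"
    :: performance state, clocks, and card-level power draw, refreshed every 5 seconds
    nvidia-smi --query-gpu=name,pstate,clocks.gr,clocks.mem,power.draw --format=csv -l 5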

As can be seen here, my card was in the “P2” state, rather than the high-performance “P0” state. This is why the card wasn’t boosting, and why the memory clock seemed diminished.

1070 performance state

Nvidia 1070 Performance State

Another feature of the Nvidia System Management Interface is the ability to get the power consumption at the card. This is measured by the driver, using the card’s hardware, and is the total instantaneous power the card is consuming (PCI slot power + supplemental power connections). As you can see, in the P2 state, the card is very rarely nearing the 150 watt TDP.

Now, this doesn’t necessarily mean the card would get closer to 150 watts in the P0 state. F@H does not utilize every portion of the graphics card, and it is expected that the power consumption would not be right at the limit. Still, these numbers seemed a bit low to me.

1070 card-level power consumption (before tuning)

1070 card-level power consumption (before tuning)

Overclocking Manually to Approximate P0 State

Unlike what was suggested in that crypto mining article, I wasn’t able to use the NVSMI tool to force a P0 state. For some reason, my NVSMI tool wouldn’t show me the available clock rate settings for my 1070. However, manual overclocking with a program such as MSI Afterburner is really easy. By maxing out the power limit and setting the core clock to a higher value, I can basically make the card run at its boost frequency, or higher.

First, I set the power limit to the maximum allowed (112%). Don’t worry, this won’t hurt anything. It is limited in the driver to not cause any damage. Basically, this will allow the card to sip a bit more electricity (albeit at a reduction of efficiency). For a card that was in the P0 state (say, running a video game), this would allow higher boost clocks.

Next, I started upping the core clock in increments of 100 MHz. I didn’t run into any stability problems, and settled on a core clock of 2000 MHz (factory clock is 1506 MHz, with a 1683 MHz boost). Note that the factory boost number is deceiving, since the latest drivers will crank the GPU core up past 1900 MHz if there is power and voltage headroom. From what I’ve read, many people can run the 1070 stable at 2050 MHz without adding voltage.

I decided not to boost the voltage, and to stay 50 Mhz below that supposedly stable number, because it’s not worth risking the stability of Folding@home. We want accurate, repeatable science! Plus, dropping work units is much worse for PPD than running slightly below a card’s maximum capability.

I experimented with clocking the memory up from 3800 MHz to 4000 MHz (note it’s double data rate so this equates to 8000 MHz as reported by some programs). This didn’t seem to affect results. F@H has historically been fairly insensitive to memory clocks, and boosting memory too much can cause slowdowns due to the error-checking routines having to work harder to ensure clean results. Basically, everyone says it’s not worth it. I ran it at 4000 MHz long enough to confirm this (a day), then throttled it back down to 3800 MHz. The benefit here will be more power available for the GPU cores, which is what really counts for folding.

Here are my final overclock numbers. The card has been running with these clocks for a week and a half non-stop, with no stability issues:

final 1070 afterburner report

Overclocked Settings: +160 MHz Core, 112% Power Limit

Note that the driver version shown in the updated Afterburner screenshot is different…as it turns out, this can have a huge effect on F@H PPD. More on that in a moment.

Overclocking Result: An Extra 50,000 PPD

Running the core at 2012 MHz (+160 MHz boost from the P2 power state) and upping the card’s power limit by 12% made the average PPD, as observed over two days, climb from 500-550K PPD to 550K-600K PPD. So, that’s a 50,000 PPD increase for minimal effort. But, something still seemed off. At the time I was still running driver version 388.59, and one of the things I had discovered when searching around for 1070 tuning tips is that not all drivers are created equal.

Nvidia Driver 372.90: The Best Folding Driver for the GTX 1070

Nvidia has been updating drivers with more and more emphasis on gaming optimizations and less on compute. So, it makes sense that older drivers might actually offer better compute performance. There are many threads in the Folding@Home Hardware Forum discussing this, and one driver version that keeps being mentioned is 372.90. It’s a bit tricky to keep it installed on Windows 10, since Windows is always trying to push a newer version, but for my 24/7 folding rig, I installed it and simply never rebooted it in order to get a week’s worth of data.

This driver change alone seemed to offer another 50,000-point boost. After running various Core 21 work units, the GTX 1070’s PPD has stayed between 630,000 and 660,000. This is normal variation between work units, and I feel confident reporting a final PPD of 640K. As I write this, the client is estimating 660K PPD.

final_1070_results

Nvidia GTX 1070: 660K PPD on Project 13815 (Core 21)

This is an excellent result. It’s twice the PPD of the GTX 1060, although eking out that last 100K PPD took a manual overclock plus a driver “update” to an older version.

Now, for the fun part. Efficiency! This 1070 is rated at 150 watts, which is only 30 watts more than the 1060. So we are supposedly doing 100% more science for Stanford University, and for a meager 25% increase in power consumption. Time to bust out the watt meter and find out!

Power Consumption at the Wall

Using my P3 Kill-A-Watt Power Meter, I measured the total system power consumption. This is the same way I measure all of my graphics cards (as opposed to estimating the card’s power by the TDP or using the video card driver to spit out instantaneous card power). The reason is that I like to have a full-system view, factoring in the power usage of my CPU, main board, and RAM, all essential components to keep the card happy.

While folding with the GTX 1070, my system’s total power draw varied between 225 and 230 watts. I’m going to go with 227 watts as the average power number. 

Efficiency

Computing computational efficiency as Points Per Day (PPD) / Power (Watts) gives:

640,000 PPD / 227 Watts = 2820 PPD/Watt.

Conclusion

The Nvidia GTX 1070 is a very efficient card for running Stanford’s Folding@Home distributed computing project. The trend established in my previous articles seems to be continuing, namely that the more expensive high-end video cards are more efficient, despite their higher power draw. In the case of the 1070, some manual overclocking was needed to unlock the card’s full PPD potential. As many others have found, the newer drivers weren’t very good for compute, but the 372.90 drivers really opened it up.

Base PPD: 550,000

Tuned PPD (drivers + overclock): 640,000

PPD/Watt (at the wall): 2820

1070 ppd plot

Nvidia GTX 1070 Performance Comparison

1070 efficiency plot

Nvidia 1070 Efficiency Comparison

As a final note, this post focused more on PPD than efficiency, since for much of the testing my watt meter was not installed (my kids keep playing with it). At some point in the future, I’ll do an article where I tune one of these cards to find the best efficiency point. This will likely be at a lower power limit than 100%, with perhaps a slight reduction in clock rate.

Update: Where I’ve Been

It’s been a few months (well more than a few), so I figured I should explain why there haven’t been so many articles lately. I’ve always liked writing, be it technical blogs like this one, writing for work, or writing fiction. Back in 2005, I started writing science-fiction for fun, and last year I succeeded in completing my first novel. I’ve still been blogging, although I’ve been writing about that novel-writing project instead of distributed computing. If you’re interested in learning about the realms of self-published science fiction, then please do check out my blog at starfightersf.com.

Another reason for the lack of folding@home articles is that I haven’t been folding! Even with solar panels, the amount of electricity we use in our home is astonishing, and adding a F@H energy burden on top of that didn’t make sense, especially in the warmer months when it increases the load on the air conditioning (talk about an environmental double-whammy!).

Instead, I decided to wait until it is nice and cold (like right now), so that I can turn down the oil heat in my basement and crank up the folding rig. This way, the electricity serves two purposes: first, charitable disease research for Stanford, and second, heating my basement and saving oil.

In terms of being energy efficient, this is the best way to go!

So, consider this the official restart of Green F@H for the new year. I’ll be kicking things off with the 1070 I just picked up from eBay for a surprisingly palatable $200. As you might have noticed, I don’t tend to review the latest cards, and that’s simply because of the price tag. Buying last-generation’s cast-off cards used has turned out to be an immense money saver, so if earning PPD/dollar is also on your list of priorities, I highly recommend this method.

Stay tuned for the Nvidia GTX 1070 review!

 

 

Is Folding@Home a Waste of Electricity?

Folding@home has brought together thousands of people (81 thousand active folders as of this writing, as evidenced by Stanford’s One in a Million contributor drive). This is awesome…tens of thousands of people teaming up to help researchers unravel the mysteries of terrible diseases.

But, there is a cost. If you are reading this blog, then you know the cost of scientific computing projects such as Folding@Home is environmental. In trying to save ourselves from the likes of cancer and Alzheimer’s disease, we are running a piece of software that causes our computers to use more electricity. In the case of dedicated folding@home computers, this can be hundreds of watts of power consumed 24/7. It adds up to a lot of consumed power, that in the end exits your computer as heat (potentially driving up your air conditioning costs as well).

Folding on Graphics Card Thermal

FLIR Thermal Cam – Folding@Home on Graphics Card

If Stanford reaches their goal of 1 million active folders, then we have an order of magnitude more power consumption on our hands. Let’s do some quick math, assuming each folder contributes 200 watts continuous (low compared to the power draw of most dedicated Folding@home machines). In this case, we have 200 watts/computer * 24 hours/day * 365 days/year * 1,000,000 computers * 1 kilowatt-hour/1,000 watt-hours = 1,752,000,000 kilowatt-hours of power consumed in a year, in the name of science!

That’s almost two billion kilowatt-hours, people.  It’s 1.75 terawatt-hours (TWh)! Using the EPA’s free converter can put that into perspective. Basically, this is like driving 279 thousand extra cars for a year, or burning 1.5 billion pounds of coal.  Yikes!

https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator

F@H Energy Equivalence

Potential Folding@Home Environmental Impact
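For anyone who wants to check the math behind those figures, here it is as a few lines of Python (same assumptions as above: 200 watts per machine, one million machines, running 24/7):

    watts_per_machine = 200
    machines = 1_000_000
    hours_per_year = 24 * 365

    kwh_per_year = watts_per_machine * machines * hours_per_year / 1000
    print(f"{kwh_per_year:,.0f} kWh")       # 1,752,000,000 kWh
    print(f"{kwh_per_year / 1e9:.2f} TWh")  # 1.75 TWh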

Is all this disease research really harming the planet? If it is, is it worth it? I don’t know. It depends on the outcome of the research, the potential benefit to humans, and the detriment to humans, animals, and the environment caused by that research. This opens up all sorts of what-if scenarios.

For example: what if Folding@Home does help find a future cure for many diseases, resulting in extended life spans? Then the earth gets even more overpopulated than it already is. Wouldn’t the added environmental stresses negatively impact people’s health? Conversely, what if Folding@Home research results in a cure that allows a little girl or boy to grow to adulthood and become the inventor of some game-changing green technology?

It’s just not that easy to quantify.

Then, there is the topic of Folding@home vs. other distributed computing projects. Digital currency, for example. Bitcoin miners (and all the spinoffs) suck up a ton of power. Current estimates put Bitcoin alone at over 40 TWh a year.

Source: https://www.theguardian.com/technology/2018/jan/17/bitcoin-electricity-usage-huge-climate-cryptocurrency

That’s more power than some countries use, and twenty times more than my admittedly crude future Folding@home estimate. When you consider that the cryptocurrency product has only limited uses (many of which are on the darkweb for shady purposes), it perhaps helps cast Folding@home in a better light.

There is always room for improvement though. That is the point of this entire blog. If we crazies are committed to turning our hard-earned dollars into “points”, we might as well do it in the most efficient way possible. And, while we’re at it, we should consider the environmental cost of our hobby and think of ways to offset it (that goes for the Bitcoin folks too).

I once ran across a rant on another blog about how Folding@home is killing the planet. This was years ago, before the Rise of the Crypto. I wish I could find it now, but it seems to have been lost in the mists of time, long since indexed, ousted, and forgotten by the Google Search Crawler. In it, the author bemoaned how F@H was murdering mother earth in the name of science. I recall thinking to myself, “hey, they’ve got a point”. And then I realized that I had already done a bunch of things to help combat the rising electric bill, and I bet most distributed computing participants have done some of these things too.

These things are covered elsewhere in this blog, and range from optimizing the computer doing the work to going after other non-folding@home related items to help offset the electrical and environmental cost. I started by switching to LED light-bulbs, then went to using space heaters instead of whole house heating methods in the winter. As I upgraded my Folding@home computer, I made it more energy efficient not just for F@H but for all tasks executed on that machine.

In the last two years, my wife and I bought a house, which gave us a whole other level of control over the situation. We had one of those state-subsidized energy audits done. They put in some insulation and air-sealed our attic, thus reducing our yearly heating costs. Eventually, we even decided to put solar panels on the roof and get an electric car (these last two weren’t because I felt guilty about running F@H, but because my wife and I are just into green technologies). We even use our Folding@home computer as a space heater in the winter, offsetting home heating oil use and negating any environmental arguments against F@H in the winter months.

In conclusion, there is no doubt that distributed computing projects have an environmental cost. However, claiming that they are a waste of electricity or that they are killing the planet might be taking it too far. One has to ask whether the cause is worth the environmental impact, and then figure out ways to lessen that impact (or, in some cases, get motivated to offset it completely; solar-powered folding farm, anyone?).

Solar Panel in Basement

LG 320 Solar Panel in my basement, awaiting roof install.

Folding on the NVidia GTX 1060

Overview

Folding@home is Stanford University’s charitable distributed computing project. It’s charitable because you can donate electricity, as converted into work through your home computer, to fight cancer, Alzheimer’s, and a host of other diseases. It’s distributed, because anyone can run it with almost any desktop PC hardware. But, not all hardware configurations are created equally. If you’ve been following along, you know the point of this blog is to do the most work for as little power consumption as possible. After all, electricity isn’t free, and killing the planet to cure cancer isn’t a very good trade-off.

Today we’re testing out Folding@home on EVGA’s single-fan version of the NVIDIA GTX 1060 graphics card.  This is an impressive little card in that it offers a lot of gaming performance in a small package.  This is a very popular graphics card for gamers who don’t want to spend $400+ on GTX 1070s and 1080s.  But, how well does it fold?

Card Specifications

Manufacturer:  EVGA
Model #:  06G-P4-6163
Model Name: EVGA GeForce GTX 1060 SC GAMING (Single Fan)
Max TDP: 120 Watts
Power:  1 x PCI Express 6-pin
GPU: 1280 CUDA Cores @ 1607 MHz (Boost Clock of 1835 MHz)
Memory: 6 GB GDDR5
Bus: PCI-Express X16 3.0
MSRP: $269

06G-P4-6163-KR_XL_4

EVGA Nvidia GeForce GTX 1060 (photo by EVGA)

Folding@Home Test Setup

For this test I used my normal desktop computer as the benchmark machine.  Testing was done using Stanford’s V7 client on Windows 7 64-bit running FAH Core 21 work units.  The video driver version used was 381.65.  All power consumption measurements were taken at the wall and are thus full system power consumption numbers.

If you’re interested in reading about the hardware configuration of my test rig, it is summarized in this post:

https://greenfoldingathome.com/2017/04/21/cpu-folding-revisited-amd-fx-8320e-8-core-cpu/

Information on my watt meter readings can be found here:

I Got a New Watt Meter!

FOLDING@HOME TEST RESULTS – 305K PPD AND 1650 PPD/WATT

The Nvidia GTX 1060 delivers the best Folding@Home performance and efficiency of all the hardware I’ve tested so far. As seen in the screenshot below, the native F@H client has shown up to 330K PPD. I ran the card for over a week and averaged the results as reported to Stanford to come up with the nominal 305K Points Per Day number. I’m going to use 305K PPD in the charts in order to be conservative. The power draw at the wall was 185 watts, which is very reasonable, especially considering this graphics card is in an 8-core gaming rig with 16 GB of RAM. This results in an F@H efficiency of about 1650 PPD/Watt, which is very good.

Screen Shot from F@H V7 Client showing Estimated Points per Day:

1060 TI Client

Nvidia GTX 1060 Folding @ Home Results: Windows V7 Client

Here are the averaged results based on actual returned work units

(Graph courtesy of http://folding.extremeoverclocking.com/)

1060 GTX PPD History

NVidia 1060 GTX Folding PPD History

Note that in this plot, the reported results prior to the circled region are also from the 1060, but I didn’t have it running all the time. The 305K PPD average is generated only from the work units returned within the time frame of the red circle (7/12 through 7/21).

Production and Efficiency Plots

Nvidia 1060 PPD

NVidia GTX 1060 Folding@Home PPD Production Graph

Nvidia 1060 PPD per Watt

Nvidia GTX 1060 Folding@Home Efficiency Graph

Conclusion

For about $250 (or $180 used if you get lucky on eBay), you can do some serious disease research by running Stanford University’s Folding@Home distributed computing project on the Nvidia GTX 1060 graphics card. This card is a good middle ground in terms of price, sitting between the budget GTX 1050 TI and the high-end GTX 1070 and 1080. Stepping up to a 1070 or 1080 will likely continue the trend of increased energy efficiency and performance, but those cards cost between $400 and $800. The GTX 1060 reviewed here was still very impressive, and I’ll also point out that it runs my old video games at absolute max settings (Skyrim, Need for Speed Rivals). Being a relatively small video card, it easily fits in a mid-tower ATX computer case, and it only requires one supplemental PCI-Express power connector. Doing over 300K PPD on only 185 watts, this Folding@home setup is both efficient and fast. For 2017, the NVidia 1060 is an excellent bang-for-the-buck Folding@home graphics card.

Request: Anyone want to loan me a 1070 or 1080 to test?  I’ll return it fully functional (I promise!)

Folding@Home on the Nvidia GeForce GTX 1050 TI: Extended Testing

Hi again.  Last week, I looked at the performance and energy efficiency of using an Nvidia GeForce GTX 1050 TI to run Stanford’s charitable distributed computing project Folding@home.  The conclusion of that study was that the GTX 1050 TI offers very good Points Per Day (PPD) and PPD/Watt energy efficiency.  Now, after some more dedicated testing, I have a few more thoughts on this card.

Average Points Per Day

In the last article, I based the production and efficiency numbers on the estimated completion time of one work unit (Core 21), which resulted in a PPD of 192,000 and an efficiency of 1377 PPD/Watt. To get a better number, I let the card complete four work units and report the results to Stanford’s collection server. The end result was a real-world performance of 185K PPD and 1322 PPD/Watt (power consumption is unchanged at 140 watts at the wall). These are still very good numbers, and I’ve updated the charts accordingly. It should be noted that this still only represents one day of folding, and I suspect this PPD is still on the high end of what this card will produce as a long-term average. So, after this article is complete, I’ll be running more work units to try to get a better average.
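The averaging itself is nothing fancy: total credit returned divided by elapsed time, with efficiency as PPD over wall power. A sketch with this test’s numbers (the per-work-unit credit split is illustrative; only the total matters):

    # four returned Core 21 work units over one day (illustrative credit split)
    credits = [48_000, 45_500, 47_000, 44_500]  # sums to 185,000 points
    elapsed_days = 1.0
    wall_watts = 140

    ppd = sum(credits) / elapsed_days
    print(f"{ppd:,.0f} PPD")                    # 185,000 PPD
    print(f"{ppd / wall_watts:,.0f} PPD/Watt")  # ~1,321 PPD/Watt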

Folding While Doing Other Things

Unlike the AMD Radeon HD 7970 reviewed here, the Nvidia GTX 1050 TI doesn’t like folding while you do anything else on the machine.  To use the computer, we ended up pausing folding on multiple occasions to watch videos and browse the internet.  This results in a pretty big hit in the amount of disease-fighting science you can do, and it is evident in the PPD results.

Folding on a Reduced Power Setting

Finally, we went back to uninterrupted folding on the card, but at a reduced power setting (90%, set using MSI Afterburner). This resulted in a 7 watt reduction in power consumption as measured at the wall (133 watts vs. 140 watts). However, to produce this reduction, the graphics card’s clock speed is lowered, which causes a more-than-proportional performance hit. The power settings can be seen here:

GTX 1050 Throttled

MSI Afterburner is used to reduce GPU Power Limit

Observing the estimated Folding@home PPD in the Windows V7 client shows what appears to be a massive reduction in PPD compared to previous testing.  However, since production is highly dependent on the individual projects and work units, this reduction in PPD should be taken with a grain of salt.

GTX 1050 V7 Throttled Performance

In order to get some more accurate results at the reduced power limit, we let the machine chug along uninterrupted for a week.  Here is the PPD production graph courtesy of http://folding.extremeoverclocking.com/

GTX 1050 Extended Performance Testing

Nvidia GTX 1050 TI Folding@Home Extended Performance Testing

It appears here that the 90% power setting has caused a significant reduction in PPD. However, this is based on having only one day’s worth of results (4 work units) for the 100% power case, as opposed to 19 work units worth of data for the 90% power case. More testing at 100% power should provide a better comparison.

Updated Charts (pending further baseline testing)

GTX 1050 PPD Underpowered

Nvidia GTX 1050 PPD Chart

GTX 1050 Efficiency Underpowered

Nvidia GTX 1050 TI Efficiency

As expected, you can contribute the most to Stanford’s Folding@home scientific disease research with a dedicated computer. Pausing F@H to do other tasks, even for short periods, significantly reduces performance and efficiency. Initial results seem to indicate that reducing the power limit of the graphics card also significantly hurts performance and efficiency. However, there still isn’t enough data for a detailed comparison, since the initial PPD numbers for the GTX 1050 TI were based on only four completed work units. Further testing should help characterize the difference.