Tag Archives: F@H

How to Make a Folding@Home Space Heater (and why would you want to?)

My normal posts on this site are all about how to do as much science as possible with Folding@Home, for the least amount of power. This is because I think disease research, while a noble and essential cause, shouldn’t be done without respecting the environment.

With that said, I think there is a use case for a power-hungry, inefficient Folding@Home computer. Namely, as a space heater for those in colder climates.

The logic is this: Running Folding@Home, or any other piece of software, makes your computer do work. Electricity flows through the circuits, flipping tiny silicon switches, and producing heat in the process. Ultimately all of the energy that flows into your computer comes back out as heat (well, a small amount comes out as light, or electromagnetic radiation, or noise, but all of those can and do get converted back into heat as they strike things in the room).

Have you ever noticed how running your gaming computer with the door to your room closed makes your feet nice and toasty in the winter? It’s the same idea. Here, one of my high-performance rigs (dual NVidia 980 Ti GPUs) is silently humming away, putting off about 500 watts of pleasant heat. My son is investigating:

My Folding@Home Space Heater Experiment

Folding@Home uses CPUs and GPUs to run molecular dynamics models to help researchers understand and fight diseases. You get the most points per day (PPD) by using cutting-edge hardware, but the Folding@Home Consortium and Stanford University openly encourage everyone to run the software on whatever they happen to have.

With this in mind, I started thinking about all the old hardware that is out there…CPUs and graphics cards that are destined for landfills because they are no longer fast enough to do any useful gaming or decode 4K video. People describe this type of hardware as “bricks” or “space heaters”–useful for nothing other than wasting power.

That gave me an idea…

It didn’t take me long to find a sweet deal on an nForce 680i-based system on eBay for $60 shipped (EVGA board with the Nvidia nForce 680i chipset, supporting three full-length PCI-E x16 slots). I swapped out the Core 2 Duo that this machine came with for a Core 2 Quad, and purchased four Fermi-based Nvidia graphics cards, plus a used 1300-watt Seasonic 80+ Gold power supply. All of this was amazingly cheap; the beautiful Antec case alone was worth the $60 I paid for it and the parts that came with it. Because I knew this build would draw a lot of power, I spent most of the money on a high-end power supply (also used, from eBay). Later on, I found that I also needed to upgrade the cooling (read: cut a hole in the side panel and strap on some more fans).

  • Antec Mid-Tower Case + Corsair 520 Watt PSU, EVGA 680i motherboard, Core 2 Duo CPU, 4 GB Ram, CD Drives, and 4 Fans = $60
  • 2x EVGA Nvidia GeForce GTX 480 graphics cards: $40
  • 1 x EVGA NVidia GeForce GTX 580 Graphics Card: $50
  • 1 x EVGA NVidia GeForce GTX 460 Graphics Card: $20
  • 1 x PCI-E X1 to X16 Riser: $10
  • 1 x Core 2 Quad Q6600 CPU (Socket 775) – $6
  • 1 x Seasonic 1300 Watt 80+ Gold Modular Power Supply: $90
  • 2 x Noctua 120 MM fans + custom aluminum bracket (for modifying side panel): $60
  • 1 x Arctic Cooling Freezer Tower Cooler – $10
  • 1 x Western Digital Black 640GB HDD – $10

Total Cost (Estimated): $356

This is the cost before I sold some of the parts I didn’t need (Core 2 Duo, Corsair PSU, etc).

Here is a shot of the final build. It took a bit of tweaking to get it to this point.

F@H_Space_Heater_Quad_GPUs

Used Parts Disclaimer!

Note that when dealing with used parts on eBay, it’s always good to do some basic service. For the GPUs in this build, I took them apart, cleaned them, applied fresh thermal paste (Arctic MX-4), and re-assembled. It was good that I did…these cards were pretty gross, and the decade-old thermal paste was dried on from years of use.

 

I mean, come on now, look at the dust cake on the second GTX 480! Clean your graphics cards, random eBay people!

GTX 480 Dust

Here’s how the 3 + 1 GPUs are set up. The two GTX 480s and the GTX 580 are on the mobo in the X16 slots. I remotely mounted the GTX 460 in the drive bay. I used blower-style (slot exhaust) cards on purpose here, because they exhaust 100% of the hot air outside the case. Open-fan style cards would have overheated instantly in this setup.

To keep costs down, I just used Ubuntu Linux as the operating system. I configured the machine for 4-slot GPU folding using proprietary Nvidia drivers. Although I ultimately control all of my remote Linux machines with TeamViewer, it is helpful to have a portable monitor and combo wireless keyboard/mouse for initial configuration and testing. In the shot below (of an earlier config), I learned a lot just trying to get the machine stable with 3 cards.

Space_Heater_Early_Config_Initial_Fireup_small

Initial Testing on the Space Heater (3 GPUs installed). This test showed me that I needed better CPU cooling (hence I chucked that stock Intel cooler)

I also did some thermal testing along the way to make sure things weren’t getting too hot. It turns out this testing was a bit misleading, because the system was running a lot cooler with the side panel off than with it on.

Some Thermal Camera Images During Initial Burn-In (3 GPUs, stock CPU cooler):

Now that’s some heat coming out of this beast! Thankfully, the upgraded 14-gauge power plug and my watt meter aren’t at risk of melting, although they are pretty warm.

Once I had the machine up and running with all four GPUs in the final configuration, I found that it produced about 55-95K PPD on average (depending on the work unit), with the following breakdown:

  • GTX 460: 10-20K PPD
  • GTX 480: 20-30K PPD each
  • GTX 580: 25-45K PPD

Power consumption, as measured at the wall, ranged from 900 to 1000 watts with all 4 GPUs engaged. By turning different GPUs on and off, I could get varying levels of power (about 200 watts at idle). I typically ran it with one 580 and one 480 folding, for an average power consumption of about 600 watts.
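If you're curious what those wattage levels mean in terms of heat output and running cost, here's a minimal Python sketch. The electricity rate is a placeholder assumption; the wattages are the rough figures from above.

```python
# Rough heat/cost estimate for a folding space heater.
WATTS_TO_BTU_PER_HR = 3.412  # 1 watt of electrical draw ends up as ~3.412 BTU/hr of heat

def heater_estimate(watts, hours_per_day=24, rate_per_kwh=0.20):  # rate is an assumption
    """Return (BTU/hr of heat, kWh per day, cost per day) for a given power draw."""
    btu_per_hr = watts * WATTS_TO_BTU_PER_HR
    kwh_per_day = watts / 1000 * hours_per_day
    return btu_per_hr, kwh_per_day, kwh_per_day * rate_per_kwh

for label, watts in [("idle", 200), ("one 580 + one 480", 600), ("all four GPUs", 950)]:
    btu, kwh, cost = heater_estimate(watts)
    print(f"{label}: {watts} W -> {btu:.0f} BTU/hr, {kwh:.1f} kWh/day, ${cost:.2f}/day")
```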

Space_Heater_Power_Consumption

After running the machine for a while, my room was nice and toasty, as expected!

One thing that I should mention was the effect of the two additional intake fans that I mounted in the side panel. Originally I did not have these, and the top graphics card in the stack was hitting 97 degrees C according to the onboard monitoring! After modding this custom side-intake into the case (found a nice fan bracket on Amazon, and put my dremel tool to good use), the temps went down quite a lot. I used fan grilles on the inside of the fans to keep internal cables out of them, and mesh filters on the outside to match the intake filters on the rest of the case.

 

The top card stays under 85 degrees C (with the fan at 50%). The middle card stays under 80 degrees C, and the bottom card runs at 60 degrees C. The GTX 460 mounted in the drive bay never goes over 60 degrees C, but it’s a less powerful card and is mounted on the other side of the case.

Here’s some more pictures of the modded side panel, along with a little cooling diagram I threw together:

PPD, Wattage, and Efficiency Comparison

I debated about putting these plots in here, because the point of this machine was not primarily to make points (pun intended), or to be efficient from a PPD/Watt perspective. The point of this machine was to replace the 1500 watt space heater I use in the winter to keep a room warm.

As you can see, the scientific production (PPD) on this machine, even with 4 GPUs, is not all that impressive in 2020, since the GPUs being used are ten years old. Similarly, the efficiency (PPD/Watt) is terrible. There’s no surprise there, since it averages just under 1000 watts of power consumption at the wall!

Conclusion

It is totally possible to build a (relatively) inexpensive desktop computer out of old, used parts to use as a space heater. If the primary goal is to make heat, then this might not be a bad idea (although at $350, it still costs way more than a $20 heater from Walmart). The obvious benefit is that this sort of space heater is actually doing something useful besides keeping you warm (in this case, helping scientists learn more about diseases thanks to Folding@Home).

Another benefit I found was remote control (via TeamViewer), which lets me use my cellphone to turn GPUs on and off to vary the heat output. Also, I think running this machine for extended durations at its medium-high setting (700 watts or so) is much easier on the electrical wiring in my house than the constant on-off cycling of a traditional 1500-watt space heater.

From an environmental standpoint, you can do much worse than using electric heat. In my case, electric space heaters make a lot of sense, especially at night. I can shut off the entire heating zone (my house only has two zones) to the upstairs and just keep the bedroom warm. This drastically reduces my fossil fuel usage (good old New England, where home heating oil is the primary method of keeping warm in the winter). Since my house has an 8.23 KW solar panel array on the roof, a lot of my electricity comes directly from the sun, making this electric heat solution even greener.

Parting Thoughts:

I would not recommend running a machine like this during the warmer months. If warm air is not wanted, all the waste heat from this machine will do nothing but rack up your power bill for relatively little science. If you want to run an efficient summer-time F@H rig that uses low power (so as to not fight your AC), check out my article on the GTX 1660 and 1650.

In a future article, I plan to show how I actually saved on heating costs by running Folding@Home space heaters all last winter (with a total of seven Folding@Home desktops placed strategically throughout my house, so that I hardly had to burn any oil).

 

Folding@Home on Turing (NVidia GTX 1660 Super and GTX 1650 Combined Review)

Hey everyone. Sorry for the long delay (I have been working on another writing project, more on that later…). Recently I got a pair of new graphics cards based on Nvidia’s new Turing architecture. This has been advertised as being more efficient than the outgoing Pascal architecture, and is the basis of the popular RTX series Geforce cards (2060, 2070, 2080, etc). It’s time to see how well they do some charitable computing, running the now world-famous disease research distributed computing project Folding@Home.

Since those RTX cards with their ray-tracing cores (which do nothing for Folding) are so expensive, I opted to start testing with two lower-end models: the GeForce GTX 1660 Super and the GeForce GTX 1650.

 

These are really tiny cards, and should be perfect for some low-power summertime folding. Also, today is the first time I’ve tested anything from Zotac (the 1650). The 1660 Super is from EVGA.

GPU Specifications

Here’s a quick table I threw together comparing these latest Turing-based GTX 16xx series cards to the older Pascal lineup.

Turing GPU Specs

It should be immediately apparent that these are very low power cards. The GTX 1650 has a design power of only 75 watts, and doesn’t even need a supplemental PCI-Express power cable. The GTX 1660 Super also has a very low power rating at 125 Watts. Due to their small size and power requirements, these cards are good options for small form factor PCs with non-gaming oriented power supplies.

Test Setup

Testing was done in Windows 10 using Folding@Home Client version 7.5.1. The Nvidia Graphics Card driver version was 445.87. All power measurements were made at the wall (measuring total system power consumption) with my trusty P3 Kill-A-Watt Power Meter. Performance numbers in terms of Points Per Day (PPD) were estimated from the client during individual work units. This is a departure from my normal PPD metric (averaging the time-history results reported by Folding@Home’s servers), but was necessary due to the recent lack of work units caused by the surge in F@H users due to COVID-19.

Note: This will likely be the last test I do with my aging AMD FX-8320e based desktop, since the motherboard only supports PCI Express 2.0. That is not a problem for the cards tested here, but Folding@Home on very fast modern cards (such as the RTX 2080 Ti) shows a modest slowdown if the cards are limited by PCI Express 2.0 x16 (around 10%). Thus, in the next article, expect to see a new benchmark machine!

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Zotac GTX 1650 / EVGA GTX 1660 Super (cards under test)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

Goal of the Testing

For those of you who have been following along, you know that the point of this blog is to determine not only which hardware configurations can fight the most cancer (or coronavirus), but to determine how to do the most science with the least amount of electrical power. This is important. Just because we have all these diseases (and computers to combat them with) doesn’t mean we should kill the planet by sucking down untold gigawatts of electricity.

To that end, I will be reporting the following:

Net Worth of Science Performed: Points Per Day (PPD)

System Power Consumption (Watts)

Folding Efficiency (PPD/Watt)

As a side note, I used MSI Afterburner to reduce the GPU power limit of the GTX 1660 Super and GTX 1650 to the minimum allowed by the driver / board vendor (in this case, 56% for the 1660 Super and 50% for the 1650). This is because my previous testing, plus results various people have reported in the Folding@Home forums and elsewhere, has shown that reducing the power cap on a card can give an efficiency boost. Let’s see if that holds true for the Turing architecture!
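For reference, here's a rough Python sketch of what those minimum power limits work out to in watts, assuming the percentage is applied against the rated TDP from the spec table above (board vendors sometimes use a slightly different reference power, so treat these as approximations).

```python
# Approximate board power caps implied by the minimum power-limit percentages.
cards = {
    "GTX 1660 Super": {"tdp_watts": 125, "min_power_limit": 0.56},
    "GTX 1650":       {"tdp_watts": 75,  "min_power_limit": 0.50},
}

for name, c in cards.items():
    cap = c["tdp_watts"] * c["min_power_limit"]
    print(f"{name}: ~{cap:.0f} W cap ({c['min_power_limit']:.0%} of {c['tdp_watts']} W TDP)")
```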

Performance

The following plots show the two new Turing architecture cards relative to everything else I have tested. As can be seen, these little cards punch well above their weight class, with the GTX 1660 Super and GTX 1650 giving the 1070 Ti and 1060 a run for their money. Also, the power throttling applied to the cards did reduce raw PPD, but not by too much.

Nvidia GTX 1650 and 1660 performance

Power Draw

This is the plot where I was most impressed. In the summer, any Folding@Home I do directly competes with the air conditioning. Running big graphics cards, like the 1080 Ti, makes my power bill go crazy, not only because of the computer itself but also because of the extra air conditioning required.

Thus, for people in hot climates, extra consideration should be given to the overall power consumption of your Folding@Home computer. With the GTX 1660 running in reduced power mode, I was able to get a total system power consumption of just over 150 watts while still making over 500K PPD! That’s not half bad. On the super low power end, I was able to beat the GTX 1050’s power consumption level…getting my beastly FX-8320e 8-core rig to draw 125 watts total while folding was quite a feat. The best thing was that it still made almost 300K PPD, which is well above last generation’s small cards.

Nvidia GTX 1650 and 1660 Power Consumption

Efficiency

This is my favorite part. How do these low-power Turing cards do on the efficiency scale? This is simply looking at how many PPD you can get per watt of power draw at the wall.

Nvidia GTX 1650 and 1660 Efficiency

And…wow! Just wow. For about $220 new, you can pick up a GTX 1660 Super and be just as efficient as the previous generation’s top card (GTX 1080 Ti), which still goes for $400-500 used on eBay. Sure, the 1660 Super won’t be as good of a gaming card, and it makes only about two-thirds the PPD of the 1080 Ti, but on an energy efficiency basis it holds its own.

The GTX 1650 did pretty well too, coming in somewhere towards the middle of the pack. It is still much more efficient than the similar market-segment card of the previous generation (GTX 1050), but it is hampered overall by not being able to return work units as quickly to the scientists, who reward fast turnaround with bonus points (the Quick Return Bonus).

Conclusion

NVIDIA’s entry-level Turing architecture graphics cards perform very well in Folding@Home, both from a performance and an efficiency standpoint. They offer significant gains relative to legacy cards, and can be a good option for a budget Folding@Home build.

Join My Team!

Interested in fighting COVID-19, Cancer, Alzheimer’s, Parkinson’s, and many other diseases with your computer? Please consider downloading Folding@Home and joining Team Nuclear Wessels (54345). See my tutorial here.

How to Run Folding@Home on a Graphics Card in Windows 10

(A Folding at Home Unofficial Configuration Guide for GPU, Multi-GPU, and CPU/GPU Folding)

Folding@Home is a distributed computing project for fighting diseases. If you’re reading this post, then you are probably looking for some help getting Folding@Home running on your graphics card. GPU folding, when configured properly, is one of the best ways to do tons of science efficiently. I hope this Folding@Home GPU Guide helps you start kicking butt against cancer and other diseases. So, let’s get started.

Note: for people who already have the Folding@Home client up and running and you want to switch from CPU folding to GPU folding, skip right to Step 3. Please note that if you are changing your hardware configuration on a machine that is already folding, it is courteous to let the existing work units finish by using the “finish” option on the client prior to re-arranging hardware. This keeps work units from being lost.

Step 0: System Requirements

Yes, we’re starting at zero, because computer indexing starts here too. Plus, before you even try this, you need the right stuff in the box.

Operating System

While Folding@Home supports many operating systems, this guide is aimed at Windows users. I’ll be using Windows 10, but the steps are the same for Windows 7.

Overall Computer

CPU
Give Me Efficiency or Give Me an Empty CPU Socket!

You do need to think about what goes in this socket, even if you’re GPU folding

Even though this is a guide about graphics card folding, the rest of your computer needs to be up to snuff to keep the card fed. Ideally, you want one dedicated CPU core for your overall Windows environment, plus one CPU core for each graphics card you want to run F@H on. So, for a 1-GPU computer, having two CPU cores available is optimal. A dual-GPU computer should have 3 cores available, a three-GPU computer should have four cores available, etc. In terms of clock rate, almost all modern processors with clock rates above 2.0 GHz will work. Remember, we aren’t doing CPU folding here; the CPU just needs to be fast enough to keep the GPU fed.

Motherboard
Circuit City

Circuit City

Motherboards don’t matter too much, except that you should have a full-width PCI-Express x16 slot for each graphics card you want to fold on. When you get into really fast, new graphics cards like the RTX 2080, a PCI-E 3.0 x16 slot will ensure the data flows fast enough to the card. PCI-Express 2.0 bandwidth will work with these ultra-fast cards, but there will be a slight bottleneck. Note that I have never seen any bottlenecks with my GTX 1080 Ti on PCI-Express 2.0 x16 in Windows, but when adding a second card (using an x1 riser), I did see a slowdown on my Gigabyte 880-series socket AM3 board.

Memory

You should also aim to have 8 GB of RAM (16 ideally), just because Windows tends to be a resource hog. Some people can fold just fine with 4 GB, but for this guide I am assuming you want to be able to use the machine as well. Memory channel configuration and speed don’t matter very much for Folding@Home, especially for GPU folding.

Hard Drives

Any old hard drive with 60 GB or so of free space will do. The F@H client takes up almost no space. The 60 GB of free space is really just what you need for Windows 10 to run smoothly, regardless of what the machine is being used for.

Internet Connection

Almost anything works, as long as it doesn’t drop out.

Power Supply
PC P&C PSU

PC Power & Cooling SILENCER PSU

This is a critical and often overlooked component for any computer doing heavy computational work. I’ve written many articles on power supplies, so feel free to browse through my site to learn more. In short, make sure your system has enough PSU wattage to drive the video card, based on the video card’s recommendation. You’ll also need to make sure your power supply has the correct auxiliary power cables (PCI-Express 6-pin and/or 8-pin) to supply enough current to cards requiring supplemental power.

For multiple cards, you’ll need more nameplate PSU wattage. Power supplies should be 80+ Bronze certified or better to help deliver power efficiently, because no one likes wasting money on misused electricity (and this hurts the environment). Also, you should try and stick with major manufacturers, such as (but not limited to) Corsair, Antec, Seasonic, Cooler Master, PC Power & Cooling, Thermaltake, etc.

Here are some common computer configurations and a reasonable power supply wattage to drive them:

  • 1 x Low-End GPU (GTX 1050, RX 560, etc.) –> 380 Watt PSU
  • 1 x Mid-Range GPU (GTX 1060, RX 570, etc.) –> 450 Watt PSU
  • 1 x High-End GPU (GTX 1080, Vega 64, etc.) –> 550 Watt PSU
  • 2 x Mid-Range GPUs or 3 x Low-End GPUs –> 600 Watt PSU
  • 2 x High-End GPUs or 3 x Mid-Range GPUs –> 800 Watt PSU
  • 3 x High-End GPUs or 4 x Mid-Range GPUs –> 1000 Watt PSU
  • 4 x High-End GPUs (you’re crazy!) –> 1200+ Watt PSU

Saving the Planet Tip: Any PSU supplying an active load of 600 Watts or more should be 80+ Gold certified or better. This will minimize waste heat due to efficiency losses, which really start to add up for high power-draw computers.
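If you'd rather script this rule of thumb than eyeball the list, here's a minimal Python sketch that just encodes the table above as a lookup (the tiers are the same rough guidance, not a substitute for checking your card's actual requirements):

```python
# The PSU sizing table above, encoded as (GPU class, count) -> suggested nameplate wattage.
PSU_TABLE = {
    ("low", 1): 380, ("mid", 1): 450, ("high", 1): 550,
    ("mid", 2): 600, ("low", 3): 600,
    ("high", 2): 800, ("mid", 3): 800,
    ("high", 3): 1000, ("mid", 4): 1000,
    ("high", 4): 1200,
}

def recommend_psu_watts(gpu_class, count):
    """Look up a suggested PSU wattage for `count` identical GPUs of a given class."""
    return PSU_TABLE.get((gpu_class, count), "not listed -- size it yourself")

print(recommend_psu_watts("high", 2))  # 800 (also a good point to step up to 80+ Gold)
```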

Cooling

This is another overlooked requirement. Any computer doing 24/7 computations on a graphics card is going to get pretty toasty. Thankfully, most modern PC cases come with enough space and fans to deal with this. You’ll want at least one dedicated 120 mm exhaust fan (not including the PSU fan) and one 120 mm intake fan to keep the air flowing. If you have dual graphics cards, having an intake fan right on the side panel blowing on the cards is one of the best ways to keep a hot pocket of air from forming between the cards. Consider reference-style video cards (centrifugal 2-slot blower coolers) for multi-card setups to help dump the heat, since open-fan cards tend to just drown in their own heat if there isn’t enough airflow. I also recommend aftermarket coolers on CPUs, since your processor will be actively spooled up and feeding your graphics card. Still, CPU cooling doesn’t need to be overkill.

Icy Opteron 4184

NOCTUA OVERKILL!

Graphics Card
Graphics Card Showdown: EVGA Nvidia Geforce GTX 1050 TI vs. Gigabyte AMD Radeon HD 7970 GHz Edition

Graphics Cards: You’ll Need One

First off, you should actually have a discrete graphics card. While F@H might run on some onboard / APU graphics solutions, the performance won’t be worth it, and you might as well just run CPU folding.

Folding@Home works on many discrete graphics cards that support OpenCL, but not all cards are supported. AMD Radeon HD 5xxx cards and Nvidia GeForce 4xx cards and newer are currently supported, but that can always change. See the project’s system requirements for a complete list. I personally recommend using Nvidia GTX 9xx series cards or AMD RX 5xx cards or newer, since these are more efficient than older hardware. My review of the GeForce 1080 Ti has some plots on efficiency and performance that might be helpful if you are selecting a card specifically for folding. Make sure you have the latest drivers for your card from either AMD or Nvidia.

Step 1: System Prep

Before even downloading Folding@Home, you should do a few basic things just to make sure the system is going to be stable for heavy computations. On the software side, this means updating drivers, making sure virus definitions and Windows updates are up to date, etc. On the hardware side, I recommend fully air-canning the dust out of your machine to optimize cooling. If the computer is older and the GPU you plan to use has been installed for a while, it’s worth taking the graphics card out and hitting it with some compressed air from all angles to clean out the heat sinks.

Step 2: Download and Install V7 Client

The Folding@Home V7 client can be found here:

The current client version is 7.5.1. Go ahead and install it. For this part, it’s basically just following the prompts. F@H’s default Windows install guide works well enough, and you can read that here. All of this can be configured later within the client (and this will be required for GPU folding). So, I’m linking to the standard install guide instead of regurgitating the steps, partly because I’m lazy and partly because I want this to be done identically to how Stanford / the F@H Consortium recommends it be done. If you don’t want to fold anonymously, select the “Set up an identity” button. You’ll want to pick a user name and enter a team number if you have one you’d like to join.

For example, if you wanted to join our team, you’d enter number 54345 in the team number field to join team Nuclear Wessels!

A note about Passkeys: you want one of these if you want to get lots of points and compete on the F@H leaderboards. A passkey is a secure key that makes sure your points are your own (i.e. no one is using your username to generate points elsewhere). You need to have a passkey if you want to be eligible for the Quick Return Bonus (more points given to users who do science quickly). You become eligible for the bonus once you have successfully completed ten work units and you have a valid passkey. You can get a passkey here (but you don’t have to do this right away. Just like configuring your GPUs, it can be done later).

Step 3: Configure the Client for GPU Folding

FAH_Molecule

Now we are going to edit settings within the Advanced Control section of the Folding@Home client. To get here, look at your Windows task bar (next to the clock). Once F@H is installed, there should be a little molecule there. Right-click that bad boy and select “Advanced Control” to open the local client window.

Right-Click FAH

This opens up the client view. Here is what mine currently looks like (with GPU slots configured). Depending on how you got here, you might or might not have a team name and user identity displayed, and you might or might not have CPU folding enabled.

F@H Control V7

Go ahead and click the “Configure” button in the top-left of the window. Go to the “Identity” tab first.

Identity

Here, you can change any of the user info and team name info you entered when you installed the V7 client. You can also enter a Passkey if you have one (for those sweet, sweet Quick Return Bonus Points!).

Pitch: I’d be honored if you joined team # 54345 (Nuclear Wessels). We are currently doing everything we can to fight the COVID-19 coronavirus.

Nuclear Wessels Meme

Next, go one tab over to “Slots”. Here, you can see what devices Folding@Home plans to use (either CPU or GPU). For my setup, I have removed all CPU slots and added two GPU slots (one slot for my 980 Ti and one for my 1080 Ti). If you originally started folding on the CPU and want to switch to GPU folding, you can delete your CPU slot here and add GPU slot(s) for your graphics card(s).

Note: If you want to do mixed hardware folding (CPU + GPU), I will talk about that in Step 4.

Slots

The slot configuration window opens up when you add or edit a slot. Here are the options.

Slot Config

Selecting the GPU button and leaving all the index settings at -1 is a good place to start. Nine times out of ten, the client will properly detect graphics cards this way. For my computer, adding two GPU slots with settings like this resulted in it properly detecting and folding on my installed GTX 980 Ti and GTX 1080 Ti cards.

In rare cases, the client might get confused. This happens in systems with onboard graphics (such as with AMD APUs). What happens is you are trying to fold on your discrete graphics card, and instead the F@H client is running the GPU slot on the APU. When this happens, I’ve found the easiest thing to do is reboot the computer, go into the BIOS, and disable the APU graphics from there, so that the client can’t even see the APU. Thus, the GPU slot with a -1 index defaults to the discrete graphics card.

Alternatively, you can use the gpu-index, opencl-index, and cuda-index boxes to try and get the slot to run on the correct graphics card. This is a trial and error process that is beyond the scope of this guide (leave me a comment if you need help with this, or ask someone in the Folding@Home Forums).

Advanced Slot Options

The Extra Slot Options (expert only) box on the bottom can sometimes help you eke a bit more performance out of the GPU slots. However, your mileage may vary. You can add or remove slot options with the + and – buttons on the bottom-right.

The settings I tend to add are these:

Advanced Options

Here, client-type advanced lets me get “late stage beta” work units, which might be a bit more unstable than normal work units, yet this helps the Folding@Home Consortium get new projects tested sooner. Max-Packet-Size Big (other options are “normal” and “small”) lets me download large molecules that will push the system a bit harder (more VRAM needed, more internet bandwidth, etc). Pause-on-start (value of “true” or “false”) tells the system to pause the folding slot when the computer boots (instead of automatically folding as soon as the machine is on). This is nice for when I want to kick folding off manually. Set this to “false” or leave it blank if you want the computer to fold automatically after a restart.

For a detailed list of these slot options, see the config guide here. Note: some of this is out of date.

Step 4 (Optional): Configure a CPU Slot as well

If you have CPU cores to spare, you can add a CPU folding slot in addition to the GPU slots. I recommend leaving 1 CPU core free for Windows background tasks (unless you are making a dedicated folding rig and don’t mind it being a bit slow to use). You should also keep 1 CPU core free for feeding each GPU that you have in your system. So, for my 8-core AMD FX-8320e with my two graphics cards, I could do something like this:

Total CPU Cores: 8

Cores needed for Windows: 1

Cores Needed for GPU Slots: 2 (one for each GPU)

Cores Remaining: = 8-1-2 = 5

So, theoretically, I can set my CPU folding slot to use 5 CPU cores. Now, an interesting fact is that in multi-core computing, prime numbers like 3, 5, and 7 do not work so well. Folding at home also doesn’t do well with high prime numbers, or multiples thereof (such as 14 threads, which is a multiple of prime number 7). It has to do with how all the data threads are stitched together.

For example, you get similar performance folding with 4 CPU cores as with 5 (4 is a nice power of two that computers like). In my case, for a non-dedicated folding rig, I set up a CPU slot with 4 CPU cores enabled, leaving two cores to handle whatever else the computer is doing and two cores to feed the graphics cards. Incidentally, if this were a guide about just setting up CPU folding, I would leave this box at “-1”.
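Here's a little Python sketch of that bookkeeping: reserve a core for Windows and one per GPU, then back the CPU slot down past awkward thread counts. The prime-avoidance step is just the rule of thumb above expressed in code, not anything official from the F@H client.

```python
# CPU slot sizing heuristic: reserve cores for the OS and for feeding each GPU,
# then avoid thread counts F@H reportedly handles poorly (small primes >= 3, multiples of 7).
def is_awkward(n):
    is_prime = n >= 3 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return is_prime or (n > 0 and n % 7 == 0)

def cpu_slot_threads(total_cores, num_gpus, reserve_for_os=1):
    threads = total_cores - reserve_for_os - num_gpus
    while threads > 1 and is_awkward(threads):
        threads -= 1
    return max(threads, 0)

print(cpu_slot_threads(8, 2))   # 8 - 1 - 2 = 5 (prime), so drop to 4
```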

4 CPU Core Config

Now, just hit the OK button and then save the slot configuration.

Save Slot Config

Step 5: Observe Slots Descriptions in the Client

Now, I can see that I have three slots (two GPU and one CPU) listed in the client window.

Ready Slots

Here, you should see that the CPU slot is using the number of threads you told it to use (4, in my case), and that the graphics cards are correctly identified. This all looks good.

Step 6: Watch it run!

Once you have your slots configured, you should be able to sit back and watch your computer fight disease with everything it’s got. One last thing: A helpful tool for graphics card monitoring is something like MSI Afterburner, or AMD’s built-in tool Wattman. It’s good to use these to make sure your card has enough thermal headroom to perform (keep it under 80 degrees C if you can!). If your card is thermally throttling, you’ll see an impact to folding@home PPD. I find that setting custom fan curves, or just setting the fan to run a bit faster than it normally would, is often enough to eliminate this.
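If you'd rather watch temperatures from a script than keep Afterburner open, here's a minimal sketch using nvidia-smi (Nvidia cards only; the query flags below are standard, but check nvidia-smi --help-query-gpu on your driver version to be sure):

```python
# Minimal GPU temperature/power watcher built on nvidia-smi.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=index,name,temperature.gpu,utilization.gpu,power.draw",
         "--format=csv,noheader"]

while True:
    out = subprocess.run(QUERY, capture_output=True, text=True).stdout.strip()
    for line in out.splitlines():
        idx, name, temp, util, power = [field.strip() for field in line.split(",")]
        warn = "  <-- running hot, check the fan curve!" if int(temp) > 80 else ""
        print(f"GPU{idx} {name}: {temp} C, {util} load, {power}{warn}")
    time.sleep(30)
```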

Troubleshooting

The V7 client installer does the best job at detecting your specific graphics hardware during initial software installation. If you added a new graphics card that is not recognized, you should do a clean re-install of the V7 client. Write down your Name, Team Number, and Passkey, uninstall the client completely (including data), reinstall, and see if the new card is detected.

Some new graphics cards are also not immediately supported upon release. For example, the Radeon 5700 XT is only recently gaining support with advanced beta work units, but work is progressing to get this card fully supported (as of 3/2020). You can read up on which cards are supported and which aren’t yet on the GPU Whitelist Thread.

Leave me a comment if…

Did this guide help you? Did I miss something? Let me know how I can help and make this better by leaving a comment. Thanks!

-Chris

Addendum: Helpful Links to Other Tutorials

HFM.net – A remote monitoring program for F@H Clients

HFM.net monitoring tutorial (Youtube) – Video Tutorial by Frax1006

Teamviewer Guide – A remote desktop solution to let you log into folding machines and monitor / configure them. This is an excellent write-up by Pyroball.

Official F@H Advanced User Custom Installation Guide

Official F@H Configuration Guide

Overclocker’s Club F@H Guide

 

Folding@Home Review: NVIDIA GeForce GTX 1080 Ti

Released in March 2017, Nvidia’s GeForce GTX 1080 Ti was the top-tier card of the Pascal line-up. This is the graphics card that super-nerds and gamers drooled over. With an MSRP of $699 for the base model, board partners such as EVGA, Asus, Gigabyte, MSI, and Zotac (among others) all quickly jumped on board (pun intended) with custom designs costing well over the MSRP, as well as their own takes on the reference design.

GTX 1080 Ti Reference EVGA

EVGA GeForce GTX 1080 Ti – Reference

Three years later, with the release of the RTX 2080 Ti, the 1080 Ti still holds its own, and still commands well over $400 on the used market. These are beastly cards, capable of running most games with max settings in 4K resolutions.

But, how does it fold?

Folding@Home

Folding at home is a distributed computing project originally developed by Stanford University, where everyday users can lend their PC’s computational horsepower to help disease researchers understand and fight things like cancer, Alzheimer’s, and most recently the COVID-19 Coronavirus. Users’ computers solve molecular dynamics problems in the background, which help the Folding@Home Consortium understand how proteins “misfold” to cause disease. For computer nerds, this is an awesome way to give (money –> electricity –> computer work –> fighting disease).

Folding at home (or F@H) can be run on both CPUs and GPUs. CPUs provide a good baseline of performance, and certain molecular simulations can only be done here. However, GPUs, with their massively parallel shader cores, can do certain types of single-precision math much faster than CPUs. GPUs provide the majority of the computational performance of F@H.

Geforce GTX 1080 Ti Specs

The 1080 Ti is at the top of Nvidia’s lineup of their 10-series cards.

1080 Ti Specs

With 3584 CUDA Cores, the 1080 Ti is an absolute beast. In benchmarks, it holds its own against the much newer RTX cards, besting even the RTX 2080 and matching the RTX 2080 Super. Only the RTX 2080 Ti is decidedly faster.

Folding@Home Testing

Testing was performed on my old but trusty benchmark machine, running Windows 10 Pro and using Stanford’s V7 Client. The Nvidia graphics driver version was 441.87. Power consumption measurements were taken at the system level with a P3 Kill A Watt meter at the wall.

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: EVGA 1080 Ti (Reference Design)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

I did extensive testing of the 1080 Ti over many weeks. Folding@Home rewards donors with “Points” for their contributions, based on how much science is done and how quickly it is returned. A typical performance metric is “Points per Day” (PPD). Here, I have averaged my Points Per Day results out over many work units to provide a consistent number. Note that any given work unit can produce more or less PPD than the average, with variation of 10% being very common. For example, here are five screen shots of the client, showing five different instantaneous PPD values for the 1080 Ti.

 

GTX 1080 Ti Folding@Home Performance

The following plot shows just how fast the 1080 Ti is compared to other graphics cards I have tested. As you can see, with nearly 1.1 Million PPD, this card does a lot of science.

1080 Ti Folding Performance

GTX 1080 Ti Power Consumption

With a board power rating of 250 Watts, this is a power hungry graphics card. Thus, it isn’t surprising to see that power consumption is at the top of the pack.

1080 Ti Folding Power

GTX 1080 Ti Efficiency

Power consumption alone isn’t the whole story. Since this is a blog about doing the most work possible for the least amount of power, I am all about finding Folding@Home hardware that is highly efficient. Here, efficiency is defined as Performance Out / Power In. So, for F@H, it is PPD/Watt. The best F@H hardware is gear that maximizes disease research (performance) done per watt of power consumed.
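Expressed as code, the metric is trivial, but it makes ranking cards easy. Here's a tiny sketch; the numbers in it are round placeholder values in the same ballpark as cards reviewed on this site, not measurements.

```python
# Folding efficiency as defined here: science output per unit of wall power (PPD per watt).
def ppd_per_watt(ppd, watts_at_wall):
    return ppd / watts_at_wall

measurements = {              # {card: (average PPD, average system watts)} -- placeholders
    "Card A": (1_100_000, 315),
    "Card B": (730_000, 240),
    "Card C": (300_000, 125),
}

for card, (ppd, watts) in sorted(measurements.items(),
                                 key=lambda kv: ppd_per_watt(*kv[1]), reverse=True):
    print(f"{card}: {ppd_per_watt(ppd, watts):.0f} PPD/Watt")
```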

Here’s the efficiency plot.

1080 Ti Folding Efficiency

Conclusion

The Geforce GTX 1080 Ti is the fastest and most efficient graphics card that I’ve tested so far for Stanford’s Folding@Home distributed computing project. With a raw performance of nearly 1.1 million PPD in Windows and an efficiency of almost 3500 PPD/Watt, this card is a good choice for doing science effectively.

Stay tuned to see how Nvidia’s latest Turing architecture stacks up.

Folding@Home: Nvidia GTX 1080 Review Part 3: Memory Speed

In the last article, I investigated how the power limit setting on an Nvidia Geforce GTX 1080 graphics card could affect the card’s performance and efficiency for doing charitable disease research in the Folding@Home distributed computing project. The conclusion was that a power limit of 60% offers only a slight reduction in raw performance (Points Per Day), but a large boost in energy efficiency (PPD/Watt). Two articles ago, I looked at the effect of GPU core clock. In this article, I’m experimenting with a different variable. Namely, the memory clock rate.

The effect of memory clock rate on video games is well defined. Gamers looking for the highest frame rates typically overclock both their graphics GPU and Memory speeds, and see benefits from both. For computation projects like Stanford University’s Folding@Home, the results aren’t as clear. I’ve seen arguments made both ways in the hardware forums. The intent of this article is to simply add another data point, albeit with a bit more scientific rigor.

The Test

To conduct this experiment, I ran the Folding@Home V7 GPU client for a minimum of 3 days continuously on my Windows 10 test computer. Folding@Home points per day (PPD) numbers were taken from Stanford’s Servers via the helpful team at https://folding.extremeoverclocking.com.  I measured total system power consumption at the wall with my P3 Kill A Watt meter. I used the meter’s KWH function to capture the total energy consumed, and divided out by the time the computer was on in order to get an average wattage value (thus eliminating a lot of variability). The test computer specs are as follows:

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Asus GeForce 1080 Turbo
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 372.90

I ran this test with the memory clock rate at the stock clock for the P2 power state (4500 MHz), along with the gaming clock rate of 5000 MHz and a reduced clock rate of 4000 MHz. This gives me three data points of comparison. I left the GPU core clock at +175 MHz (the optimum setting from my first article on the 1080 GTX) and the power limit at 100%, to ensure I had headroom to move the memory clock without affecting the core clock. I verified I wasn’t hitting the power limit in MSI Afterburner.

*Update. Some people may ask why I didn’t go beyond the standard P0 gaming memory clock rate of 5000 MHz (same thing as 10,000 MHz double data rate, which is the card’s advertised memory clock). Basically, I didn’t want to get into the territory where the GDDR5’s error checking comes into play. If you push the memory too hard, there can be errors in the computation but work units can still complete (unlike a GPU core overclock, where work units will fail due to errors). The reason is the built-in error checking on the card memory, which corrects errors as they come up but results in reduced performance. By staying away from 5000+ MHz territory on the memory, I can ensure the relationship between performance and memory clock rate is not affected by memory error correction.

1080 Memory Boost Example

Memory Overclocking Performed in MSI Afterburner

Tabular Results

I put together a table of results in order to show how the averaging was done, and the # of work units backing up my +500 MHz and -500 MHz data points. Having a bunch of work units is key, because there is significant variability in PPD and power consumption numbers between work units. Note that the performance and efficiency numbers for the baseline memory speed (+0 MHz, aka 4500 MHz) come from my extended testing baseline for the 1080 and have even more sample points.

Geforce 1080 PPD Production - Ram Study

Nvidia GTX 1080 Folding@Home Production History: Data shows increased performance with a higher memory speed

Graphic Results

The following graphs show the PPD, Power Consumption, and Efficiency curves as a function of graphics card memory speed. Since I had three points of data, I was able to do a simple three-point-curve linear trendline fit. The R-squared value of the trendline shows how well the data points represent a linear relationship (higher is better, with 1 being ideal). Note that for the power consumption, the card seems to have used more power with a lower memory clock rate than the baseline memory clock. I am not sure why this is…however, the difference is so small that it is likely due to work unit variability or background tasks running on the computer. One could even argue that all of the power consumption results are suspect, since the changes are so small (on the order of 5-10 watts between data points).
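If you want to reproduce the trendline math, here's a small numpy sketch that fits a line through three (memory clock offset, PPD) points and reports the R-squared value. The PPD values below are placeholders for illustration, not my measured data (see the table above for that).

```python
# Fit a linear trendline through three data points and compute R^2.
import numpy as np

mem_offset_mhz = np.array([-500.0, 0.0, 500.0])       # memory clock offsets tested
ppd = np.array([704_000.0, 760_000.0, 785_000.0])     # placeholder PPD values

slope, intercept = np.polyfit(mem_offset_mhz, ppd, 1)
fit = slope * mem_offset_mhz + intercept

ss_res = np.sum((ppd - fit) ** 2)
ss_tot = np.sum((ppd - ppd.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"PPD ~ {slope:.1f} * offset + {intercept:.0f}, R^2 = {r_squared:.4f}")
```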

Geforce 1080 Performance vs Ram Speed

Geforce 1080 Power vs Ram Speed

Geforce 1080 Efficiency vs Ram Speed

Conclusion

Increasing the memory speed of the Nvidia Geforce GTX 1080 results in a modest increase in PPD and efficiency, and arguably a slight increase in power consumption. The differences between the fastest (+500 MHz) and slowest (-500 MHz) data points I tested are:

PPD: +81K PPD (11.5%)

Power: +9.36 Watts (3.8%)

Efficiency: +212.8 PPD/Watt (7.4%)

Keep in mind that these are for a massive difference in ram speed (5000 MHz vs 4000 MHz).

Another way to look at these results is that underclocking the graphics card ram in hopes of improving efficiency doesn’t work (you’ll actually lose efficiency). I expect this trend will hold true for the rest of the Nvidia Pascal series of cards (GTX 10xx), although so far my testing of this has been limited to this one card, so your mileage may vary. Please post any insights if you have them.

NVIDIA GEFORCE GTX 1080 Folding@Home Review (Part 1)

Intro

It’s hard to believe that the Nvidia GTX 1080 is almost three years old now, and I’m just getting around to writing a Folding@Home review of it. In the realm of graphics cards, this thing is legendary, and only recently displaced from the enthusiast podium by Nvidia’s new RTX series of cards. The 1080 was Nvidia’s top of the line gaming graphics card (next to the Ti edition of course), and has been very popular for both GPU coin mining and cancer-curing (or at least disease research for Stanford University’s charitable distributed computing project: Folding@Home). If you’ve been following along, you know it’s that second thing that I’m interested in. The point of this review is to see just how well the GTX 1080 folds…and by well, I mean not just raw performance, but also energy efficiency.


Quick Stats Comparison

I threw together a quick table to give you an idea of where the GTX 1080 stacks up (I left the newer RTX cards and the older GTX 9-series cards off of here because I’m lazy…).

Nvidia Pascal Cards

Nvidia Pascal Family GPU Comparison

As you can see, the GTX 1080 is pretty fast, eclipsed only by the GTX 1080 Ti (which also has a higher Thermal Design Power, suggesting more electricity usage). From my previous articles, we’ve seen that the more powerful cards tend to do work more efficiently, especially if they are in the same TDP bracket. So, the 1080 should be a better folder (both in PPD and PPD/Watt efficiency) than the 1070 Ti I tested last time.

Test Card: ASUS GeForce GTX 1080 Turbo

As with the 1070 Ti, I picked up a pretty boring flavor of a 1080 in the form of an Asus turbo card. These cards lack back plates (which help with circuit board rigidity and heat dissipation) and use cheap blower coolers, which suck in air from a single centrifugal fan on the underside and blow it out the back of the case (keeping the hot air from building up in the case). These are loud, and tend to run hotter than open-fan coolers, so overclocking and boost clocks are limited compared to aftermarket designs. However, like Nvidia’s own Founder’s Edition reference cards, this reference design provides a good baseline for a 1080’s minimum performance.

ASUS GeForce GTX 1080 Turbo

ASUS GeForce GTX 1080 Turbo

The new 1080 looks strikingly similar to the 1070 Ti…Asus is obviously reusing the exact same cooler since both cards have a 180 Watt TDP.

Asus GTX 1080 and 1070 Ti

Asus GTX 1080 and 1070 Ti (which one is which?)

Test Environment

Like most of my previous graphics card testing, I put this into my AMD FX-Based Test System. If you are interested in how this test machine does with CPU folding, you can read about it here. Testing was done using Stanford’s Folding@Home V7 Client (version 7.5.1) in Windows 10. Points Per Day (PPD) production was collected from Stanford’s servers. Power measurements were done with a P3 Kill A Watt Meter (taken at the wall, for a total-system power profile).

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Asus GeForce 1080 Turbo
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 372.90

Video Card Configuration – Optimize for Performance

In my previous articles, I’ve shown how Nvidia GPUs don’t always automatically boost their clock rates when running Folding@home (as opposed to video games or benchmarks). The same is true of the GTX 1080. It sometimes needs a little encouragement in order to fold at the maximum performance. I overclocked the core by 175 MHz and increased the power limit* by 20% in MSI afterburner using similar settings to the GTX 1070. These values were shown to be stable after 2+ weeks of testing with no dropped work units.

*I also experimented with the power limit at 100% and I saw no change in card power consumption. This makes sense…folding is not using 100% of the GPU. Inspection of the MSI afterburner plots shows that while folding, the card does not hit the power limit at either 100% or 120%. I will have to reduce the power limit to get the card to throttle back (this will happen in part 2 of this article).

As with previous cards, I did not push the memory into its performance zone, but left it at the default P2 (low-power) state clock rate. The general consensus is that memory clock does not significantly affect folding@home, and it is better to leave the power headroom for the core clock, which does improve performance. As an interesting side-note, the memory clock on this thing jumps up to 5000 Mhz (effective) in benchmarks. For example, see the card’s auto-boost settings when running Heaven:

1080 Benchmark Stats

Nvidia GeForce GTX 1080 – Boost Clocks (auto) in Heaven Benchmark

Testing Overview

For most of my tests, I just let the computer run folding@home 24/7 for a couple of days and then average the points per day (PPD) results from Stanford’s stats server. Since the GTX 1080 is such a popular card, I decided to let it run a little longer (a few weeks) to get a really good sampling of results, since PPD can vary a lot from work unit to work unit. Before we get into the duration results, let’s do a quick overview of what the Folding@home environment looks like for a typical work unit.

The following is an example screen shot of the display from the client, showing an instantaneous PPD of about 770K, which is very impressive. Here, it is folding on a core 21 work unit (Project 14124).

F@H Client 1080

Folding@Home V7 Client – GeForce GTX 1080

MSI Afterburner is a handy way to monitor GPU stats. As you can see, the GPU usage is hovering in the low 80% region (this is typical for GPU folding in Windows. Linux can use a bit more of the GPU for a few percentage points more PPD). This Asus card, with its reference blower cooler, is running a bit warm (just shy of 70 degrees C), but that’s well within spec. I had the power limit at 120%, but the card is nowhere near hitting that…the power limit seems to just peak above 80% here and there.

GTX 1080 MSI Afterburner

GTX 1080 stats while folding.

Measuring card power consumption with the driver shows that it’s using about 150 watts, which seems about right when compared to the GPU usage and power % graphs. 100% GPU usage would be ideal (and would result in a power consumption of about 180 watts, which is the 1080’s TDP).

In terms of card-level efficiency, this is 770,000 PPD / 150 Watts = 5133 PPD/Watt.

Power Draw (at the card)

Nvidia Geforce GTX 1080 – Instantaneous Power Draw @ the Card

Duration Testing

I ran Folding@Home for quite a while on the 1080. As you can see from this plot (courtesy of https://folding.extremeoverclocking.com/), the 1080 is mildly beating the 1070 Ti. It should be noted that the stats for the 1070 Ti are a bit low in the left-hand side of the plot, because folding was interrupted a few times for various reasons (gaming). The 1080 results were uninterrupted.

1080 Production History

Geforce GTX 1080 Production History

Another thing I noticed was the amount of variation in the results. Normal work unit variation (at least for less powerful cards) is around 10-20 percent. For the GTX 1080, I saw swings of 200K PPD, which is closer to 30%. Check out that one point at 875K PPD!

Average PPD: 730K PPD

I averaged the PPD over two weeks on the GTX 1080 and got 730K PPD. Previous testing on the GTX 1070 Ti (based on continual testing without interruptions) showed an average PPD of 700K. Here is the plot from that article, reproduced for convenience.

Nvidia GTX 1070 Ti Time History

Nvidia GTX 1070 Ti Folding@Home Production Time History

I had expected my GTX 1080 to do a bit better than that. However, it only has about 5% more CUDA cores than the GTX 1070 Ti (2560 vs 2432). The GTX 1080’s faster memory also isn’t an advantage in Folding@Home. So, a 30K PPD improvement for the 1080, which corresponds to it being about 4.3% faster, makes sense.
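That scaling argument is easy to sanity-check with a couple of lines of Python:

```python
# Sanity check: does the PPD gain roughly track the CUDA core count difference?
cores_1080, cores_1070ti = 2560, 2432
ppd_1080, ppd_1070ti = 730_000, 700_000

print(f"Core count ratio: {cores_1080 / cores_1070ti:.3f}")  # ~1.053 (about 5% more cores)
print(f"PPD ratio:        {ppd_1080 / ppd_1070ti:.3f}")      # ~1.043 (about 4.3% more PPD)
```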

System Average Power Consumption: 240 Watts @ the Wall

I spot checked the power meter (P3 Kill A Watt) many times over the course of folding. Although it varies with work unit, it seemed to most commonly use around 230 watts. Peak observed wattage was 257, and minimum was around 220. This was more variation than I typically see, but I think it corresponds with the variation in PPD I saw in the performance graph. It was very tempting to just say that 230 watts was the number, but I wasn’t confident that this was accurate. There was just too much variation.

In order to get a better number, I reset the Kill-A-Watt meter (I hadn’t reset it in ages) and let it log the computer’s usage over the weekend. The meter keeps track of the total kilowatt-hours (KWH) of energy consumed, as well as the time period (in hours) of the reading. By dividing the energy by time, we get power. Instead of an instantaneous power (the eyeball method), this is an average power over the weekend, and is thus a compatible number with the average PPD.

The end result of this was 17.39 KWH consumed over 72.5 hours. Thus, the average power consumption of the computer is:

17.39/72.5 (KWH/H) * 1000 (Watts/KW) = about 240 Watts (I round a bit for convenience in reporting, but the Excel sheet that backs up all my plots is exact)
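In code, the whole averaging exercise is just a couple of lines:

```python
# Average wall power from the Kill-A-Watt's cumulative energy reading.
kwh_consumed = 17.39     # energy logged over the weekend
hours_elapsed = 72.5     # logging period

avg_watts = kwh_consumed / hours_elapsed * 1000
print(f"Average power: {avg_watts:.0f} W")   # ~240 W
```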

This is a bit more power consumed than the GTX 1070 Ti results, which used an average of 225 watts (admittedly computed by the eyeball method over many days, but there was much less variation so I think it is valid). This increased power consumption of the GTX 1080 vs. the 1070 Ti is also consistent with what people have seen in games. This Legit Reviews article shows an EVGA 1080 using about 30 watts more power than an EVGA 1070 Ti during gaming benchmarks. The power consumption figure is reproduced below:

LegitReviews_power-consumption

Modern Graphics Card Power Consumption. Source: Legit Reviews

This is a very interesting result. Even though the 1080 and the 1070 Ti have the same 180 Watt TDP, the 1080 draws more power, both in folding@home and in gaming.

System Computational Efficiency: 3044 PPD/Watt

For my Asus GeForce GTX 1080, the folding@home efficiency is:

730,000 PPD / 240 Watts = 3044 PPD/Watt.

This is an excellent score. Surprisingly, it is slightly less than my Asus 1070 Ti, which I found to have an efficiency of 3126 PPD/Watt. In practice these are so close that the difference could just be attributed to work unit variation. The GeForce 1080 and 1070 Ti are both extremely efficient cards, and are good choices for folding@home.

Comparison plots here:

GeForce 1080 PPD Comparison

GeForce GTX 1080 Folding@Home PPD Comparison

GeForce 1080 Efficiency Comparison

GeForce GTX 1080 Folding@Home Efficiency Comparison

Final Thoughts

The GTX 1080 is a great card. With that said, I’m a bit annoyed that my GTX 1080 didn’t hit 800K PPD like some folks in the forums say theirs do (I bet a lot of those people getting 800K PPD use Linux, as it is a bit better than Windows for folding). Still, this is a good result.

Similarly, I’m annoyed that the GTX 1080 didn’t thoroughly beat my 1070 Ti in terms of efficiency. The results are so close though that it’s effectively the same. This is part one of a multi-part review, where I tuned the card for performance. In the next article, I plan to go after finding a better efficiency point for running this card by experimenting with reducing the power limit. Right now I’m thinking of running the card at 80% power limit for a week, and then at 60% for another week, and reporting the results. So, stay tuned!

Folding@Home Efficiency vs. GPU Power Limit

Folding@Home: The Need for Efficiency

Distributed computing projects like Stanford University’s Folding@Home sometimes get a bad rap on account of all the power that is consumed in the name of science.  Critics argue that any potential gains that are made in the area of disease research are offset by the environmental damage caused by thousands of computers sucking down electricity.

This blog hopes to find a balance by optimizing the way the computational research is done. In this article, I’m going to show how a simple setting in the graphics card driver can improve Folding@Home’s Energy Efficiency.

This blog uses an Nvidia graphics card, but the general idea should also work with AMD cards. The specific card here is an EVGA GeForce GTX 1060 (6 GB).  Green F@H Review here: Folding on the NVidia GTX 1060

If you are folding on a CPU, similar efficiency improvements can be achieved by optimizing the clock frequencies and voltages in the BIOS.  For an example on how to do this, see these posts:

F@H Efficiency: AMD Phenom X6 1100T

F@H Efficiency: Overclock or Undervolt?

(at this point in time I really just recommend folding on a GPU for optimum production and efficiency)

GPU Power Limit Overview

The GPU power limit slider is a quick way to control how much power the graphics card is allowed to draw. Typically, graphics cards are optimized for speed, with efficiency a secondary goal (if considered at all). When a graphics card is pushed harder, it will draw more power (until it runs into the power limit). Today’s graphics cards will also boost their clock rate when loaded, and reduce it when the load goes away. Sometimes, a few extra MHz can be achieved for minimal extra power, but go too far and the power needed to drive the card grows much faster than the performance does. Sure, the card is doing a bit more work (or playing a game a bit faster), but the heaps of extra power needed to do this make it very inefficient.

What I’m going to quickly show is that going the other way (reducing power) can actually improve efficiency, albeit at a reduction in raw output. For this quick test, I’m just going to look at the default power limit, 100%, vs 50%. Specific tuning is going to depend on your actual graphics card. But, with a few days at different settings, you should be able to find a happy balance between performance and efficiency.

For these plots, I used my watt meter to obtain actual power consumption at the wall. You can read about my watt meters here.

Changing the Power Limit

A tool such as MSI Afterburner can be used to view the graphics card’s settings, including the power limit. In the screenshot below, I reduced the card’s power limit to 50% midway through taking data. You can clearly see the power consumption and GPU temperature drop. This suggests the entire computer should be drawing less power from the wall, which I confirmed with my watt meter.

Adjust Power Limit MSI Afterburner

MSI Afterburner is used to reduce the graphics card’s power limit.
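If you’d rather not use a GUI tool (for example, on a headless folding rig), the power limit can also be adjusted from the command line with Nvidia’s nvidia-smi utility on cards and drivers that support it. This is just an alternative sketch, not what I used for the data below (that was MSI Afterburner); the allowable minimum and maximum limits vary by card, and the set command needs an administrator (or root) prompt:

nvidia-smi -q -d POWER
    (reports the current, default, minimum, and maximum power limits)
nvidia-smi -pl 60
    (sets the limit to 60 watts, i.e. 50% of the GTX 1060’s 120 watt TDP)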

Effect on Results

I ran the card for multiple days at each power setting and used Stanford’s actual stats to generate an averaged number for PPD. Reporting an average like this lends more confidence that the results are real, since the instantaneous PPD reported in the client varies a lot with time, and can bounce around by +/- 10 percent between projects.

Below is the production time history plot, courtesy of https://folding.extremeoverclocking.com/. I marked on the plot the actual power consumption numbers I was seeing from my computer at the wall. As you can see, reducing the power limit on the 1060 from 100% to 50% saved about 40 watts of power at the wall.

GTX 1060 F@H Reduced Power Limit Production

GTX 1060 Folding@Home Performance at 100% and 50% Power

On the efficiency plot, you can see that reducing the power limit on the 1060 actually improved its efficiency slightly. This is an easy way to get more science done per watt.

Nvidia 1060 PPD per Watt Updated

NVidia GTX 1060 Folding@Home Efficiency Results

There is a downside of course, and that is in raw production. The Points Per Day plot below shows a pretty big reduction in PPD for the reduced-power 1060, although it is still beating its little brother, the 1050 Ti. One of the reasons PPD falls off so hard is that Stanford awards bonus points tied to how quickly your computer returns a work unit. These points grow faster than linearly as the turnaround time shrinks. So, by slowing the card down, we not only lose base points, but we lose some of the quick return bonus as well.
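To illustrate why the PPD penalty is steeper than the power savings, here is a rough Python sketch of the published quick return bonus formula (credit = base points * sqrt(k * deadline / elapsed time), floored at the base points). The base points, k-factor, and timeout below are made-up illustrative values, and a 50% power limit does not literally halve the speed; the point is only that PPD scales faster than linearly with how quickly a work unit comes back.

import math

def wu_credit(base_points, k_factor, deadline_days, elapsed_days):
    # Quick Return Bonus: faster returns earn a multiplier on the base points
    bonus = math.sqrt(k_factor * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

def ppd(base_points, k_factor, deadline_days, elapsed_days):
    # Points per day = credit per work unit * work units finished per day
    return wu_credit(base_points, k_factor, deadline_days, elapsed_days) / elapsed_days

fast = ppd(10000, 0.75, 3.0, elapsed_days=0.25)   # hypothetical card at full speed
slow = ppd(10000, 0.75, 3.0, elapsed_days=0.50)   # same work unit, half the speed
print(round(fast), round(slow), round(slow / fast, 2))   # PPD drops to ~35%, not 50%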

Nvidia 1060 PPD Updated

NVidia GTX 1060 Folding@Home Performance Results

Conclusion

Reducing the power limit on a graphics card can increase its computational energy efficiency in Folding@Home, although at the cost of raw PPD. There is probably a sweet spot for efficiency vs. performance at some power setting between 50% and 100%. This will likely be different for each graphics card. The process outlined above can be used for various power limit settings to find the best efficiency point.

 

Folding on the Nvidia GTX 1070

Overview

Folding@home is Stanford University’s charitable distributed computing project. It’s charitable because you can donate electricity, as converted into work through your home computer, to fight cancer, Alzheimer’s, and a host of other diseases.  It’s distributed, because anyone can run it with almost any desktop PC hardware.  But, not all hardware configurations are created equally.  If you’ve been following along, you know the point of this blog is to do the most work for as little power consumption as possible.  After all, electricity isn’t free, and killing the planet to cure cancer isn’t a very good trade-off.

Today we’re testing out Folding@home on an EVGA NVIDIA GTX 1070 graphics card.  This card offers a big step up in gaming and compute horsepower compared to the 1060 I reviewed previously, and is capable of pushing solid frame rates at 4K resolution. So, how well does it fold?

Card Specifications (Nvidia Reference Specs)

1070 specs

Nvidia GTX 1070 Specifications

evga 1070 acx stock photo

EVGA Nvidia GTX 1070 ACX 3.0 (photo credit: EVGA)

FOLDING@HOME TEST SETUP

For this test I used my normal desktop computer as the benchmark machine.  Testing was done using Stanford’s V7 client on Windows 10 64-bit running FAH Core 21 work units.  The video driver version used was initially 388.59, and subsequently 372.90. Power consumption measurements reported in the charts were taken at the wall and are thus full system power consumption numbers.

If you’re interested in reading about the hardware configuration of my test rig, it is summarized in this post:

https://greenfoldingathome.com/2017/04/21/cpu-folding-revisited-amd-fx-8320e-8-core-cpu/

Information on my watt meter readings can be found here:

I Got a New Watt Meter!

Initial Testing and Troubleshooting

Like the GTX 1060, the 1070 uses Nvidia’s Pascal architecture, which is very efficient and has a reputation for solid compute performance. The 1070 has 50% more CUDA cores than the 1060, and with Folding@Home’s speed-weighted points system (the quick return bonus gives you more points for doing work quickly), we should see roughly double the PPD of the 1060, which does 300 – 350 thousand PPD depending on the work unit. Based on various people’s experiences, and especially this forum post, I was expecting the 1070 to produce somewhere in the range of 600-700K PPD.

That wasn’t what happened. The card wasn’t exactly slow, but initial testing showed an estimated 450 to 550K PPD, as reported by the client. I ran it for a few days, since PPD can vary a good deal depending on the work unit, but the result was unfortunately the same. 550K PPD was about as much as my card would do.

initial_1070_results

Initial GTX 1070 Results – 544K PPD

At first I thought it might be due to the card running hot. Unlike my test of a brand new 1060, I obtained my 1070 used off of eBay for a great price of $200 + shipping. It was a little dusty, so I blew it all out and fired up MSI Afterburner to check the temps. As it turned out, the fans on the card weren’t even breaking a sweat, and it was running nice and cool, so heat wasn’t the problem. Points didn’t increase.

evga 1070 acx 3.0

My Used EVGA GTX 1070 ACX 3.0 – eBay Price: $200

initial 1070 afterburner report

MSI Afterburner Report: NVidia GTX 1070, Stock Clocks, Driver 388.59

After doing some more digging, I ran across a few threads online indicating that the 1070 (along with a few other GTX models) doesn’t always boost up to its maximum clock rates for compute loads. Opening up a video, or Folding@home’s protein viewer, can sometimes force the card to clock up. I tried this and didn’t have any luck. My card was running at stock clocks, and in fact the memory even appeared to be running 200 MHz below the 4000 MHz reference clock rate. This suggested the card was in a low-power mode.

Thankfully, Nvidia’s System Management Interface tool (nvidia-smi) can be used to see what is going on. This tool, which in Windows 10 lives in C:\Program Files\Nvidia Corporation, can be accessed from the command line. I followed the tutorial here to learn a few things about what my 1070 was doing. Although that write-up is geared toward people mining for cryptocurrency, the steps are still relevant.
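For reference, these are the sort of commands I mean, run from a command prompt in the NVSMI folder (the exact folder and output vary a bit by driver version):

cd "C:\Program Files\NVIDIA Corporation\NVSMI"
nvidia-smi -q -d PERFORMANCE
    (shows the current performance state, e.g. P0 or P2, along with any clock throttle reasons)
nvidia-smi --query-gpu=name,pstate,clocks.sm,clocks.mem --format=csv
    (a one-line summary of the performance state and the current core and memory clocks)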

As can be seen here, my card was in the “P2” state, which is not the high-performance “P0” state. This is why the card wasn’t boosting, and why the memory clock appeared reduced.

1070 performance state

Nvidia 1070 Performance State

Another feature of the Nvidia System Management Interface is the ability to read the power consumption at the card. This is measured by the driver, using the card’s hardware, and is the total instantaneous power the card is consuming (PCI slot power + supplemental power connections). As you can see, in the P2 state, the card rarely gets anywhere near its 150 watt TDP.

Now, this doesn’t necessarily mean the card would get closer to 150 watts in the P0 state. F@H does not utilize every portion of the graphics card, and it is expected that the power consumption would not be right at the limit. Still, these numbers seemed a bit low to me.

1070 card-level power consumption (before tuning)

1070 card-level power consumption (before tuning)
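If you want to log this yourself, a command along these lines will poll the card-level power draw (the supported query fields depend on the card and driver; this is a sketch, not necessarily the exact command behind the plot above):

nvidia-smi --query-gpu=power.draw,power.limit --format=csv -l 5
    (prints the instantaneous board power and the active power limit every 5 seconds)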

Overclocking Manually to Approximate P0 State

Unlike what was suggested in that crypto mining article, I wasn’t able to use the NVSMI tool to force a P0 state. For some reason, my NVSMI tool wouldn’t show me the available clock rate settings for my 1070. However, manual overclocking with a program such as MSI Afterburner is really easy. By maxing out the power limit and setting the core clock to a higher value, I can basically make the card run at its boost frequency, or higher.

First, I set the power limit to the maximum allowed (112%). Don’t worry, this won’t hurt anything. It is limited in the driver to not cause any damage. Basically, this will allow the card to sip a bit more electricity (albeit at a reduction of efficiency). For a card that was in the P0 state (say, running a video game), this would allow higher boost clocks.

Next, I started upping the core clock in increments of 100 MHz. I didn’t run into any stability problems, and settled on a core clock of 2000 MHz (the factory clock is 1506 MHz, with a 1683 MHz boost). Note that the factory boost number is deceiving, since the latest drivers will crank the GPU core up past 1900 MHz if there is power and voltage headroom. From what I read, many people can run the 1070 stable at 2050 MHz without adding voltage.

I decided not to boost the voltage, and to stay 50 MHz below that supposedly stable number, because it’s not worth risking the stability of Folding@home. We want accurate, repeatable science! Plus, dropping work units is much worse for PPD than running slightly below a card’s maximum capability.

I experimented with clocking the memory up from 3800 MHz to 4000 MHz (note it’s double data rate so this equates to 8000 MHz as reported by some programs). This didn’t seem to affect results. F@H has historically been fairly insensitive to memory clocks, and boosting memory too much can cause slowdowns due to the error-checking routines having to work harder to ensure clean results. Basically, everyone says it’s not worth it. I ran it at 4000 MHz long enough to confirm this (a day), then throttled it back down to 3800 MHz. The benefit here will be more power available for the GPU cores, which is what really counts for folding.

Here are my final overclock numbers. The card has been running with these clocks for a week and a half non-stop, with no stability issues:

final 1070 afterburner report

Overclocked Settings: +160 MHz Core, 112% Power Limit

Note the driver version as shown in the updated Afterburner screen shot is different…as it turns out, this can have a huge effect on F@H PPD. More on that in a moment.

Overclocking Result: An Extra 50,000 PPD

Running the core at 2012 MHz (+160 MHz boost from the P2 power state) and upping the card’s power limit by 12% made the average PPD, as observed over two days, climb from 500-550K PPD to 550K-600K PPD. So, that’s a 50,000 PPD increase for minimal effort. But, something still seemed off. At the time I was still running driver version 388.59, and one of the things I had discovered when searching around for 1070 tuning tips is that not all drivers are created equal.

Nvidia Driver 372.90: The Best Folding Driver for the GTX 1070

Nvidia has been updating drivers with more and more emphasis on gaming optimizations and less on compute. So, it makes sense that older drivers might actually offer better compute performance. There are many threads in the Folding@Home Hardware Forum discussing this, and one driver version that keeps being mentioned is 372.90. It’s a bit tricky to keep it installed on Windows 10, since Windows is always trying to push a newer version, but for my 24/7 folding rig, I installed it and simply never rebooted it in order to get a week’s worth of data.

This driver change alone seemed to offer another 50,000 point boost. After running various Core 21 work units, the GTX 1070’s PPD has stayed between 630,000 and 660,000. This is normal variation between work units, and I feel confident reporting a final PPD of 640K. As I write this, the client is estimating 660K PPD.

final_1070_results

Nvidia GTX 1070: 660K PPD on Project 13815 (Core 21)

This is an excellent result. It’s twice the PPD of the GTX 1060, although eking out that last 100K PPD took a manual overclock plus a driver “update” to an older version.

Now, for the fun part. Efficiency! This 1070 is rated at 150 watts, which is only 30 watts more than the 1060. So we are supposedly doing 100% more science for Stanford University, and for a meager 25% increase in power consumption. Time to bust out the watt meter and find out!

Power Consumption at the Wall

Using my P3 Kill-A-Watt Power Meter, I measured the total system power consumption. This is the same way I measure all of my graphics cards (as opposed to estimating the card’s power by the TDP or using the video card driver to spit out instantaneous card power). The reason is that I like to have a full-system view, factoring in the power usage of my CPU, main board, and RAM, all essential components to keep the card happy.

While folding with the GTX 1070, my system’s total power draw varied between 225 and 230 watts. I’m going to go with 227 watts as the average power number. 

Efficiency

Computing computational efficiency as Points Per Day (PPD) / Power (Watts) gives:

640,000 PPD / 227 Watts = 2820 PPD/Watt.

Conclusion

The Nvidia GTX 1070 is a very efficient card for running Stanford’s Folding@Home distributed computing project. The trend established in my previous articles seems to be continuing, namely that the more expensive high-end video cards are more efficient, despite their higher power draw. In the case of the 1070, some manual overclocking was needed to unlock the full PPD potential. As many others have found, the newer stock drivers weren’t very good for folding, but the 372.90 driver really opened the card up.

Base PPD: 550,000

Tuned PPD (drivers + overclock) = 640,000

PPD/Watt(@wall) = 2820

1070 ppd plot

Nvidia GTX 1070 Performance Comparison

1070 efficiency plot

Nvidia 1070 Efficiency Comparison

As a final note, this post focused more on PPD than efficiency, since for much of the testing my watt meter was not installed (my kids keep playing with it). At some point in the future, I’ll do an article where I tune one of these cards to find the best efficiency point. This will likely be at a lower power limit than 100%, with perhaps a slight reduction in clock rate.

Is Folding@Home a Waste of Electricity?

Folding@home has brought together thousands of people (81 thousand active folders as of this writing, as evidenced by Stanford’s One in a Million contributor drive). This is awesome…tens of thousands of people teaming up to help researchers unravel the mysteries of terrible diseases.

But, there is a cost. If you are reading this blog, then you know the cost of scientific computing projects such as Folding@Home is environmental. In trying to save ourselves from the likes of cancer and Alzheimer’s disease, we are running a piece of software that causes our computers to use more electricity. In the case of dedicated folding@home computers, this can be hundreds of watts of power consumed 24/7. It adds up to a lot of consumed power, which in the end exits your computer as heat (potentially driving up your air conditioning costs as well).

Folding on Graphics Card Thermal

FLIR Thermal Cam – Folding@Home on Graphics Card

If Stanford reaches their goal of 1 million active folders, then we have an order of magnitude more power consumption on our hands. Let’s do some quick math, assuming each folder contributes 200 watts continuous (low compared to the power draw of most dedicated Folding@home machines). In this case, we have 200 watts/computer * 24 hours/day * 365 days/year * 1,000,000 computers * 1 kilowatt-hour/1000 watt-hours = 1,752,000,000 kilowatt-hours of electricity consumed in a year, in the name of Science!

That’s almost two billion kilowatt-hours, people.  It’s 1.75 terawatt-hours (TWh)! Using the EPA’s free converter can put that into perspective. Basically, this is like driving 279 thousand extra cars for a year, or burning 1.5 billion pounds of coal.  Yikes!

https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator

F@H Energy Equivalence

Potential Folding@Home Environmental Impact
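For what it’s worth, the arithmetic above is easy to double-check. Here’s a quick Python sanity check using the same assumed 200 watts per machine and one million machines:

watts_per_machine = 200          # assumed continuous draw per folder (from the paragraph above)
machines = 1000000               # Stanford's one-million-folder goal
hours_per_year = 24 * 365

kwh_per_year = watts_per_machine * machines * hours_per_year / 1000.0
print(kwh_per_year)              # 1,752,000,000 kWh
print(kwh_per_year / 1e9)        # about 1.75 TWh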

Is all this disease research really harming the planet? If it is, is it worth it? I don’t know. It depends on the outcome of the research, the potential benefit to humans, and the detriment to humans, animals, and the environment caused by that research. This opens up all sorts of what-if scenarios.

For example: what if Folding@Home does help find future cures for many diseases, resulting in extended life spans? Then the earth gets even more overpopulated than it already is. Wouldn’t the added environmental stresses negatively impact people’s health? Conversely, what if Folding@Home research results in a cure for a disease that allows a little girl or boy to grow to adulthood and become the inventor of some game-changing green technology?

It’s just not that easy to quantify.

Then, there is the topic of Folding@home vs. other distributed computing projects. Digital currency, for example. Bitcoin miners (and all the spinoffs) suck up a ton of power. Current estimates put Bitcoin alone at over 40 TWh a year.

Source: https://www.theguardian.com/technology/2018/jan/17/bitcoin-electricity-usage-huge-climate-cryptocurrency

That’s more power than some countries use, and over twenty times more than my admittedly crude future Folding@home estimate. When you consider that the cryptocurrency product has only limited uses (many of which are on the dark web for shady purposes), it perhaps helps cast Folding@home in a better light.

There is always room for improvement though. That is the point of this entire blog. If we crazies are committed to turning our hard-earned dollars into “points”, we might as well do it in the most efficient way possible. And, while we’re at it, we should consider the environmental cost of our hobby and think of ways to offset it (that goes for the Bitcoin folks too).

I once ran across a rant on another blog about how Folding@home is killing the planet. This was years ago, before the Rise of the Crypto. I wish I could find it now, but it seems to have been lost in the mists of time, long since indexed, ousted, and forgotten by the Google Search Crawler. In it, the author bemoaned how F@H was murdering mother earth in the name of science. I recall thinking to myself, “hey, they’ve got a point”. And then I realized that I had already done a bunch of things to help combat the rising electric bill, and I bet most distributed computing participants have done some of these things too.

These things are covered elsewhere in this blog, and range from optimizing the computer doing the work to going after other non-folding@home related items to help offset the electrical and environmental cost. I started by switching to LED light-bulbs, then went to using space heaters instead of whole house heating methods in the winter. As I upgraded my Folding@home computer, I made it more energy efficient not just for F@H but for all tasks executed on that machine.

In the last two years, my wife and I bought a house, which gave us a whole other level of control over the situation. We had one of those state-subsidized energy audits done. They put in some insulation and air-sealed our attic, thus reducing our yearly heating costs. Eventually, we even decided to put solar panels on the roof and get an electric car (these last two weren’t because I felt guilty about running F@H, but because my wife and I are just into green technologies). We even use our Folding@home computer as a space heater in the winter, thus offsetting home heating oil use and negating any environmental arguments against F@H in the winter months.

In conclusion, there is no doubt that distributed computing projects have an environmental cost. However, to claim that they are a waste of electricity or that they are killing the planet might be taking it too far. One has to ask if the cause is worth the environmental impact, and then figure out ways to lessen that impact (or, in some cases, get motivated to offset it completely; solar-powered folding farm, anyone?)

Solar Panel in Basement

LG 320 Solar Panel in my basement, awaiting roof install.

Folding on the NVidia GTX 1060

Overview

Folding@home is Stanford University’s charitable distributed computing project. It’s charitable because you can donate electricity, as converted into work through your home computer, to fight cancer, Alzheimer’s, and a host of other diseases.  It’s distributed, because anyone can run it with almost any desktop PC hardware.  But, not all hardware configurations are created equally.  If you’ve been following along, you know the point of this blog is to do the most work for as little power consumption as possible.  After all, electricity isn’t free, and killing the planet to cure cancer isn’t a very good trade-off.

Today we’re testing out Folding@home on EVGA’s single-fan version of the NVIDIA GTX 1060 graphics card.  This is an impressive little card in that it offers a lot of gaming performance in a small package.  This is a very popular graphics card for gamers who don’t want to spend $400+ on GTX 1070s and 1080s.  But, how well does it fold?

Card Specifications

Manufacturer:  EVGA
Model #:  06G-P4-6163
Model Name: EVGA GeForce GTX 1060 SC GAMING (Single Fan)
Max TDP: 120 Watts
Power:  1 x PCI Express 6-pin
GPU: 1280 CUDA Cores @ 1607 MHz (Boost Clock of 1835 MHz)
Memory: 6 GB GDDR5
Bus: PCI-Express X16 3.0
MSRP: $269

06G-P4-6163-KR_XL_4

EVGA Nvidia GeForce GTX 1060 (photo by EVGA)

Folding@Home Test Setup

For this test I used my normal desktop computer as the benchmark machine.  Testing was done using Stanford’s V7 client on Windows 7 64-bit running FAH Core 21 work units.  The video driver version used was 381.65.  All power consumption measurements were taken at the wall and are thus full system power consumption numbers.

If you’re interested in reading about the hardware configuration of my test rig, it is summarized in this post:

https://greenfoldingathome.com/2017/04/21/cpu-folding-revisited-amd-fx-8320e-8-core-cpu/

Information on my watt meter readings can be found here:

I Got a New Watt Meter!

FOLDING@HOME TEST RESULTS – 305K PPD AND 1650 PPD/WATT

The Nvidia GTX 1060 delivers the best Folding@Home performance and efficiency of all the hardware I’ve tested so far.  As seen in the screen shot below, the native F@H client has shown up to 330K PPD.  I ran the card for over a week and averaged the results as reported to Stanford to come up with the nominal 305K Points Per Day number.  I’m going to use 305K PPD in the charts in order to be conservative.  The power draw at the wall was 185 watts, which is very reasonable, especially considering this graphics card is in an 8-core gaming rig with 16 GB of RAM.  This results in an F@H efficiency of about 1650 PPD/Watt, which is very good.

Screen Shot from F@H V7 Client showing Estimated Points per Day:

1060 TI Client

Nvidia GTX 1060 Folding @ Home Results: Windows V7 Client

Here are the averaged results based on actual returned work units

(Graph courtesy of http://folding.extremeoverclocking.com/)

1060 GTX PPD History

NVidia 1060 GTX Folding PPD History

Note that in this plot, the reported results prior to the circled region are also from the 1060, but I didn’t have it running all the time.  The 305K PPD average is generated only from the work units returned within the time frame of the red circle (7/12 through 7/21).

Production and Efficiency Plots

Nvidia 1060 PPD

NVidia GTX 1060 Folding@Home PPD Production Graph

Nvidia 1060 PPD per Watt

Nvidia GTX 1060 Folding@Home Efficiency Graph

Conclusion

For about $250 (or $180 used if you get lucky on eBay), you can do some serious disease research by running Stanford University’s Folding@Home distributed computing project on the Nvidia GTX 1060 graphics card.  This card is a good middle ground in terms of price (it is the entry point into NVidia’s current generation of GTX gaming cards).  Stepping up to a 1070 or 1080 will likely continue the trend of increased energy efficiency and performance, but these cards cost between $400 and $800.  The GTX 1060 reviewed here was still very impressive, and I’ll also point out that it runs my old video games at absolute max settings (Skyrim, Need for Speed Rivals).  Being a relatively small video card, it easily fits in a mid-tower ATX computer case, and only requires one supplemental PCI-Express power connector.  Doing over 300K PPD on only 185 watts, this Folding@home setup is both efficient and fast.  For 2017, the NVidia 1060 is an excellent bang-for-the-buck Folding@home graphics card.

Request: Anyone want to loan me a 1070 or 1080 to test?  I’ll return it fully functional (I promise!)