
AMD Ryzen 9 3950x Part 4: Full Throttle Folding with CPB Overclocking and SMT

This is part four of my Folding@Home review for AMD’s top-tier desktop processor, the Ryzen 9 3950x 16-core CPU. Up until recently, this was AMD’s absolute beast-mode gaming and content creation desktop processor. If you happen to have one, or are looking for a good CPU to fight COVID and Cancer with, you’ve come to the right place.

Folding@Home is a distributed computing project where users can donate computational runtime on their home computers to fight diseases like cancer, Alzheimer's, Mad Cow disease, and many others. For better or for worse, COVID-19 caused an explosion in F@H popularity, because the project was retooled to focus on understanding the coronavirus's proteins to help researchers develop ways to fight it. This increase in users caused Folding@Home to become (once again) the most powerful supercomputer in the world. Of course, this comes with a cost: namely, electricity. Most of my articles to date have focused on GPU folding. However, the point of this series of articles is to investigate how someone running CPU folding can optimize their settings to do the most work for the least amount of power, thus reducing their power bill and the environmental impact of all this computing.

In the last part of this review, I investigated the differences seen between running Folding@Home with SMT (also known as Hyperthreading) on and off. The conclusion from that review was that performance does scale with virtual cores, and that the best science production and energy efficiency are seen with 30 or 32 threads enabled on the CPU folding slot.

The previous testing was all performed with Core Performance Boost off. CPB is the AMD equivalent of Intel’s Turbo Boost, which is basically automatic, dynamic overclocking of the processor (both CPU frequency and voltage) based on the load on the chip. Keeping CPB turned off in previous testing resulted in all tests being run with the CPU frequency at the base 3.5 GHz.

In this final article, I enabled CPB to allow the Ryzen 9 3950x to scale its frequency and voltage based on the load and the available thermal and power headroom. Note that for this test, I used the default AMD settings in the BIOS of my Asus Prime X570-P motherboard, which is to say I did not enable Precision Boost Overdrive or any other setting to increase the automatic overclocking beyond the default power and thermal limits.

Test Setup

As with the other parts of this review, I used my new Folding@Home benchmark machine, which was previously described in this post. The only tweaks to the computer since that post was written were swapping out a few 120mm fans for different models to improve cooling and noise. I also eliminated the 80 mm side intake fan, since all it did was disrupt the front-to-back airflow around the CPU and didn't make any noticeable difference in temperatures. All of these cooling changes made less than a 2 watt difference in the machine's idle power consumption (almost unmeasurable), so I'm not going to worry about correcting the comparison plots.

Because it’s been a while since I wrote about this, I figured I’d recap a few things from the previous posts. The current configuration of the machine is:

  • Case: Raidmax Sagitta
  • Power Supply: Seasonic Prime 750 Watt Titanium
  • Intake Cooling: 2 x 120mm fan (front)
  • Exhaust Cooling: 1 x 120 mm (rear) + PSU exhaust (top)
  • CPU Cooler: Noctua NH-D15 SE AM4
  • CPU: AMD Ryzen 9 3950x
  • Motherboard: Asus Prime X570-P
  • Memory: 32 GB Corsair Vengeance LPX DDR4 3600 MHz
  • GPU: Zotac Nvidia GeForce 1650 installed for CPU testing
  • OS Drive: Samsung 970 Evo Plus 512 GB NVME SSD
  • Storage Drive #1: Samsung 860 EVO 2TB SSD
  • Storage Drive #2: Western Digital Blue 128 GB NVME SSD
  • Optical Drive: Samsung SH-B123L Blu-Ray Drive
  • Operating System: Windows 10 Home

The Folding@Home software client used was version 7.6.13.

Test Methodology

The point of this testing is to identify the best settings for performance and energy efficiency when running Folding@Home on the Ryzen 3950x 16-core processor. To do this, I set the # of threads to a specific value between 1 and 32 and ran five work units. For each work unit, I recorded the instantaneous points per day (PPD) as reported in the client, as well as power consumption of the machine as reported on my P3 Kill A Watt meter. I repeated this 32 times, for a total of 160 tests. By running 5 tests at each nCPU setting, some of the work unit variability can be averaged out.

The Number of CPU threads can be set by editing the slot configuration

Folding@Home Performance: Ryzen 9 3950X

Folding@Home performance is measured in Points Per Day (PPD). This is the number that most people running the project are most interested in, as generating lots of PPD means your machine is doing a lot of good science to aid the researchers in their fight against diseases. The following plot shows the trend of Points Per Day vs. # of CPU threads engaged. The average work unit variation came out to around 12%, which results in a pretty significant spread in performance between different work units at higher thread counts. As in the previous testing, I plotted a pair of boundary lines to capture the 95% confidence interval, meaning that, assuming a Gaussian distribution of data points, 95% of the work units will perform within this boundary region.

AMD Ryzen 9 3950X Folding@Home Performance: Core Performance Boost and Simultaneous Multi-Threading Enabled

As can be seen in the above plot, in general, the Folding@Home client’s Points Per Day production increases with increasing core count. As with the previous results, the initial performance improvement is fairly linear, but once the physical number of CPU cores is exceeded (16 in this case), the performance improvement drops off, only ramping up again when the core settings get into the mid 20’s. This is really strange behavior. I suspect it has something to do with how Windows 10 schedules logical process threads onto physical CPU cores, but more investigation is needed.

One thing that is different about this test is that the Folding@Home consortium started releasing new work units based on the A8 core. These work units support the AVX2_256 instruction set, which allows some mathematical operations to be performed more efficiently on processors that support AVX2 (specifically, an add operation and a multiply operation can be performed at the same time). As you can see, the Core A8 work units, denoted by purple dots, fall far above the average performance and the 95% confidence interval lines. Although it is awesome that the Folding@Home developers are constantly improving the software to take advantage of improved hardware and programming techniques, this influx of fancy work units really slowed my testing down! There were entire days when all I would get were core A8 units, when I really needed core A7 units to compare to my previous testing. Sigh…such is the price of progress. Anyway, these work units were excluded from the 5-work-unit averages composing each data point, since I want to be able to compare the average performance line to previous testing, which did not include these new work units.

As noted in my previous posts, some settings of the # of CPU threads result in the client defaulting to a lower thread count to prevent numerical problems that can arise for certain mathematical operations. For reference, the equivalent thread settings are shown in the table below:

Equivalent Thread Settings:

The Folding@Home Client Adjusts the Thread Count to Avoid Numerical Problems Arising with Prime Numbers and Multiples Thereof…
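Out of curiosity, the adjustment can be sketched in a few lines of Python. To be clear, this is my own guess at the rule, not the actual client code: I'm assuming the client steps the requested count down until its largest prime factor is small enough (a threshold of 5 here) for the solver's domain decomposition.

```python
def largest_prime_factor(n: int) -> int:
    """Return the largest prime factor of n (n >= 2)."""
    factor, largest = 2, 1
    while factor * factor <= n:
        while n % factor == 0:
            largest, n = factor, n // factor
        factor += 1
    return n if n > 1 else largest

def adjusted_thread_count(requested: int, max_prime: int = 5) -> int:
    """Step the requested thread count down until its largest prime
    factor is small enough for a clean domain decomposition.
    (The threshold of 5 is my assumption, not the client's actual rule.)"""
    n = requested
    while n > 1 and largest_prime_factor(n) > max_prime:
        n -= 1
    return n

print(adjusted_thread_count(7))    # 7 backs down to 6
print(adjusted_thread_count(31))   # 31 backs down to 30
```

With this guessed rule, 7 drops to 6 and 31 drops to 30, matching the behavior I saw in the client logs.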

Folding@Home Power Consumption

Here is a much simpler plot. This is simply the power consumption as reported by my P3 Kill A Watt meter at the wall. This is total system power consumption. As expected, it increases with increasing core count. Since the instantaneous power the computer is using wobbles around a bit as the machine is working, I consider this to be an “eyeball averaged” plot, with an accuracy of about 5 watts.

AMD Ryzen 9 3950X Folding@Home Power Consumption: Core Performance Boost and Simultaneous Multi-Threading Enabled

As can be seen in the above plot, something interesting starts happening at higher thread counts: namely, the power consumption plateaus. This wasn’t seen in previous testing with Core Performance Boost set to off. Essentially, with CPB on, the machine is auto-overclocking itself within the factory defined thermal and power consumption limits. Eventually, with enough cores being engaged, a limit is reached.

Investigating what is happening with AMD’s Ryzen Master software is pretty enlightening. For example, consider the following three screen shots, taken during testing with 2, 6, and 16 threads engaged:

2 Thread Solve:

AMD Ryzen Master: Folding@Home CPU Folding, 2 Threads Engaged

6 Thread Solve

AMD Ryzen Master: Folding@Home CPU Folding, 6 Threads Engaged

16 Thread Solve

AMD Ryzen Master: Folding@Home CPU Folding, 16 Threads Engaged

First off, please notice that the temperature limit (first little dial indicator) is never hit during any test condition, thanks to the crazy cooling capacity of the Noctua NH-D15 SE. Thus, we don't have to worry about an insufficient thermal solution marring the test results.

Next, have a look at the second and third dial indicators. For the 2-thread solve, the peak CPU speed is a blistering 4277 MHz! This is a factory overclock of 22% over the Ryzen 9 3950x's base clock of 3500 MHz. This is Core Performance Boost in action! At this setting, with only 2 CPU threads engaged, the total package power (PPT) is showing 58% use, which means there is plenty of electrical headroom to add more CPU cores. For the 6-thread solve, the peak CPU speed has come down a bit to 4210 MHz, and the PPT has risen to 79% of the rated 142 watt maximum. What's happening is that the extra CPU cores are using more power, and the CPU is throttling the clocks back a bit to keep everything stable. Still, there is plenty of headroom.

That story changes when you look at the screen shot for the 16-thread solve. Here, the peak clock rate has decreased to 4103 MHz, and the total package power has hit the limit at 142 watts (a good deal beyond the 105 watt TDP of the 3950X!). This means that Core Performance Boost has pushed the clocks and voltage as high as allowed under its default auto-overclocking limits. This power limit on the CPU is the reason the system's wall power consumption plateaus at 208 watts.

If you’re wondering what makes up the difference between the 208 watts reported by my watt meter and the 142 watts reported by Ryzen Master, the answer is the rest of the system besides the CPU socket. In other words, the motherboard, memory, video card, fans, hard drives, optical drive, and the power supply’s efficiency.

Just for fun, here is the screen shot of Ryzen Master for the full 32-thread solve!

AMD Ryzen Master: Folding@Home CPU Folding, 32 Threads Engaged

Here, we have an all-core peak frequency of 3855 MHz. Interestingly, the CPU temp and PPT have decreased slightly from the 16-thread solve, even though the processor is theoretically working harder. What's happening here is that yet another limit has been reached. Look at the 6th dial indicator, labeled 'TDC'. This is a measure of the instantaneous peak current, in amperes, being delivered to the CPU. Apparently with 32 threads, this peak current limit of 95 amps is getting hit, so clock speed and voltage are reduced, resulting in a lower average socket power (PPT) than the 16-thread solve.

Folding@Home Efficiency

Now for my favorite plot…Efficiency! Here, I am taking the average performance in PPD (excluding the newfangled A8 work units for now) and dividing it by the system’s wall power consumption. This provides a measure of how much work per unit of power (PPD/Watt) the computer is doing.

AMD Ryzen 9 3950X Folding@Home Efficiency: Core Performance Boost and Simultaneous Multi-Threading Enabled

This plot looks fairly similar to the performance plot. In general, throwing more CPU threads at the problem lets the computer do more work in a given unit of time. Although higher thread counts consume more power than lower thread counts, the additional power use is offset by the massive amount of extra computational work being done. In short, efficiency improves as thread count increases.

There is a noticeable dent in the curve, however, from 15 to 23 threads. This is the interesting region where things get weird. As I mentioned before, I think what might be happening is some oddity in how Windows 10 schedules jobs once the physical number of CPU cores has been exceeded. I'm not 100% sure, but what I think Windows is doing is juggling the threads around to keep a few physical CPU cores free (basically, it's putting two threads on one CPU core, i.e. utilizing SMT, even when it doesn't have to, in order to keep some CPU cores available for other tasks, such as using Windows). It isn't until we get over 24 threads that Windows decides we are serious about running all these jobs, and reluctantly schedules them for pure performance.

I do have some evidence to back up this theory. Investigating what is going on with Ryzen Master with Folding@Home set to 20 threads is pretty telling.

AMD Ryzen Master: Folding@Home CPU Folding, 20 Threads Engaged

Since 20 threads exceeds the 16-core capacity of the processor, one would think all 16 cores would be spun up to max in order to get through this work as fast as possible. However, that is not the case. Only 12 cores are clocked up. Now, if you consider SMT, these 12 cores can handle 24 threads of computation. So, virtual cores are being used as well as physical cores to handle this 20-thread job. This obviously isn’t ideal from a performance or an efficiency standpoint, but it makes sense considering what Windows 10 is: a user’s operating system, not a high performance computing operating system. By keeping some physical CPU cores free when it can, Microsoft is hoping to ensure users a smooth computing experience.

Comparison to Previous Results

The above plots are fun and all, but the real juice is the comparison to the previous results. As a reminder, these were covered in detail in these posts:

SMT On, CPB Off

SMT Off, CPB Off

Performance Comparison

In the previous parts of this article, the difference between SMT (aka Hyperthreading) being on or off was shown to be negligible on the Ryzen 9 3950x in the physical core region (thread count = 16 or less). The major advantage of SMT was that it allowed more solver threads to be piled on, which eventually results in increased performance and efficiency for thread counts above 25. In the plot below, the third curve basically shows the effect of overclocking. In this case, Core Performance Boost, AMD's auto-overclocking routine, provides a fairly uniform 10-20 percent improvement. This diminishes at high thread count settings, though, becoming a nominal 5% improvement above 28 threads. It should be noted that the effects of work-unit-to-work-unit variation are still apparent, even with five averages per test case, so don't try to draw any specific conclusions at any one thread count. Rather, just consider the overall trend.

AMD Ryzen 9 3950X Folding@Home Performance Comparison: Various Settings

Power Comparison

The power consumption plot shows a MASSIVE difference between the wall power used during the CPB testing and the other two tests. This shouldn't come as a surprise. Sustaining higher clock frequencies requires more voltage, and the dynamic power of a CMOS chip scales with the square of that voltage. This is compounded by the transistor switching frequency going up as well, which further increases the average power consumption, since more transistor switching events occur in a given unit of time. Boost the voltage and the frequency together, and power climbs quickly.
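To put some rough numbers on the voltage-and-frequency compounding, here's a quick sketch using the classic CMOS dynamic power approximation P ≈ C·V²·f. The capacitance, voltage, and frequency values are made-up illustrative numbers, not measurements from my chip:

```python
def dynamic_power(c_eff: float, volts: float, freq_hz: float) -> float:
    """Classic CMOS dynamic power approximation: P ≈ C_eff * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

# Illustrative (assumed) operating points: base clock vs. a CPB boost state
base = dynamic_power(c_eff=1e-9, volts=1.10, freq_hz=3.5e9)
boost = dynamic_power(c_eff=1e-9, volts=1.35, freq_hz=4.2e9)

print(f"Boosted power is {boost / base:.2f}x the base power")
```

Even with these hypothetical numbers, a 20% frequency bump paired with the voltage increase needed to sustain it nearly doubles the dynamic power.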

In short, we are looking at a very noticeable increase in your electrical bill to run Folding@Home on an overclocked machine.

AMD Ryzen 9 3950X Folding@Home Power Comparison: Various Settings

Efficiency Comparison

Efficiency is the whole point of this article and this blog, so behold! I've shown in previous articles, both on CPUs and GPUs, that overclocking typically hurts efficiency (and conversely, that underclocking and undervolting improve efficiency). The story doesn't change with factory automatic overclocking routines like CPB. The plot below makes a very strong case for disabling Core Performance Boost, since the machine is up to 25% less efficient with it enabled.

AMD Ryzen 9 3950X Folding@Home Efficiency Comparison: Various Settings

Conclusion

The Ryzen 9 3950x is a very good processor for fighting disease with Folding@Home. The high core count produces exceptional efficiency numbers for a CPU, with a setting of 30 threads being ideal. Leaving 2 threads free for the rest of Windows 10 doesn’t seem to hurt performance or efficiency too much. Given the work unit variation, I’d say that 30 and 32 threads produce the same result on this processor.

As far as optimum settings, to get the most bang for electrical buck (i.e. efficiency), running that 30-thread CPU slot requires SMT to be enabled. Disabling CPB, which is on by default, results in a massive efficiency improvement by cutting over 50 watts off the power consumption. For a dedicated folding computer running 24/7, shaving that 50 watts off the electric bill would save 438 kWh/year of energy. In my state, that would save me $83 annually, and it would also save about 112 lbs of CO2 from being released into the atmosphere. Imagine the environmental impact if the 100,000+ computers running Folding@Home could each reduce their power consumption by 50 watts by just changing a setting!
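The energy math in that last paragraph is simple enough to check in a few lines. Note that the per-kWh price and CO2 rate are implied values backed out from my own numbers above, not universal constants:

```python
watts_saved = 50
hours_per_year = 24 * 365                  # folding 24/7

kwh_saved = watts_saved * hours_per_year / 1000
print(kwh_saved)                           # 438.0 kWh per year

# Back out the implied rates from the dollar and CO2 figures above
price_per_kwh = 83 / kwh_saved             # ~$0.19/kWh in my state
lbs_co2_per_kwh = 112 / kwh_saved          # ~0.26 lbs CO2 per kWh
```

Your own savings will scale with your local electricity price and your grid's carbon intensity.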

Future Work

If there is one thing to be said about overclocking a Ryzen 3xxx-series processor, it’s that the possibilities are endless. A downside to disabling CPB is that if you aren’t folding all the time, your processor will be locked at its base clock rate, and thus your single-threaded performance will suffer. This is where things like PBO come in. PBO = Precision Boost Overdrive. This is yet another layer on top of CPB to fine-tune the overclocking while allowing the system to run in automatic mode (thus adapting to the loads that the computer sees). Typically, people use PBO to let the system sustain higher clock rates than standard CPB would allow. However, PBO also allows a user to enter in power, thermal, and voltage targets. Theoretically, it should be possible to set up the system to allow frequency scaling for low CPU core counts but to pull down the power limit for high core-counts, thus giving a boost to lightly threaded jobs while maintaining high core count efficiency. This is something I plan to investigate, although getting comparable results to this set of plots is going to be hard due to the prevalence of the new AVX2 enabled work units.

Maybe I’ll just have to do it all over again with the new work units? Sigh…

AMD Ryzen 9 3950X Folding@Home Review: Part 2: Averaging, Efficiency, and Variation

Welcome back everyone! In my last post, I used my rebuilt benchmark machine to revisit CPU folding on my AMD Ryzen 9 3950x 16-core processor. This article is a follow-on. As promised, this includes the companion power consumption and efficiency plots for thread settings of 1-32 cores. As a quick reminder, I did this test with multi-threading (SMT) on, but with Core Performance Boost disabled, so all cores are running at the base 3.5 GHz setting.

Performance

The Folding@Home distributed computing project has come a long way from its humble disease-fighting beginnings back in 2000. The purpose of this testing is to see just how well the V7 CPU client scales on a modern, high core-count processor. With all the new Folding@Home donors coming onboard to fight COVID, having some insight into how to set up the configuration for the most performance is hopefully helpful.

For this test, I simply set the # of threads the client can use to a value and ran five sequential work units. I averaged the performance (Points Per Day), but I also plot the individual work unit performance values to give you a sense of the variation. Since the Ryzen 9 3950x supports 32 threads, I essentially ran 160 tests. Since I wanted the Folding@Home Consortium to get useful data in their fight against COVID-19, I let each work unit run to completion, even though I only need them to run to about 10-20% complete to get an accurate PPD estimate from the client.

So, without further blabbing on my part, here is the graph of Folding@Home performance vs. thread count in Windows 10 on the Ryzen 9 3950x

Ryzen_3950x_Performance_SMT_On_CPB_Off

Here, the solid blue line is the averaged performance, and the gray circles are the individual tests. The dashed blue lines represent a statistical 95% confidence interval, which is computed based on the variation. The expected Points Per Day (PPD) of a work unit run on the 3950x is expected to fall within this band 95% of the time.
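For anyone curious, the average and the confidence band come from basic statistics. Here's a minimal sketch; the PPD samples below are hypothetical placeholders, not my actual measurements:

```python
import statistics

# Hypothetical PPD results from five work units at one thread setting
ppd_samples = [355000, 392000, 410000, 378000, 401000]

mean_ppd = statistics.mean(ppd_samples)
std_ppd = statistics.stdev(ppd_samples)    # sample standard deviation

# 95% confidence band, assuming the PPD values are Gaussian (z = 1.96)
lower = mean_ppd - 1.96 * std_ppd
upper = mean_ppd + 1.96 * std_ppd

print(f"Average PPD: {mean_ppd:.0f} ({lower:.0f} to {upper:.0f})")
```

The solid line on the plot is `mean_ppd` at each thread setting, and the dashed lines are the `lower` and `upper` bounds.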

My first observation is, holy crap! This is a fast processor. Some work units at high thread counts get really close to 500K PPD, which for me has only been achievable by GPU folding up to this point.

My second observation is that there is a lot of variation between different work units. This makes sense, because some work units have much larger molecules to solve than others. In my testing, I found the average variation across all 160 tests to be 12.78%, with individual work units varying by up to 25%.

My third observation is that there seems to be two different regions on this plot. For the first half, the thread count setting is less than the number of physical cores on the chip, and the results are fairly linear. For the second half, the thread count setting is higher than the number of physical cores on the chip (thus forcing the CPU to virtualize those cores using SMT). Performance seems to fall off when the CPU cores become fully saturated (threads = 16), and it takes a while to climb out of the hole (threads = 24 starts showing some more gains).

As a side note, the client does not actually run at all of these thread count settings, since thread counts with large prime factors (such as 7 and 11, and multiples thereof) cause numerical issues with the solver's domain decomposition. For example, when you try to run a 7-thread solve, the client automatically backs the thread count down to 6. You can see warnings about this in the log file when it happens.

Prime Number Thread Adjust

I noted all the relevant thread counts where this happens on the x-axis of the plot. Theoretically, these should be equivalent settings. The fact that the average performance varies a bit between them is just due to work unit variation (I’d have to run hundreds of averages to cancel all the variation out).

Finally, I noticed that the highest PPD actually occurred with a thread count of 30 (PPD = 407200) vs a thread count of 32 (PPD = 401485). This is a small but interesting difference, and is within the range of statistical variation. Thus I would say that setting the thread count to 30 vs 32 provides the same performance, while leaving two CPU threads free for other tasks (such as GPU folding…more on that later!).

Power Consumption

Power consumption numbers for each thread setting were taken at the wall, using my P3 Kill A Watt meter. Since the power numbers tend to walk around a bit as the computer works, it’s hard to get an instantaneous reading. Thus these are “eyeball averaged”. There was enough change at each CPU thread setting to clearly see a difference (not counting those thread settings that are actually equivalent to an adjacent setting).

Ryzen_3950x_Power_SMT_On_CPB_Off

The total measured power consumption rose fairly linearly from just under 80 watts to just under 160 watts. There’s not too much surprising here. As you throw more threads at the CPU, it clocks up idle cores and does more work (which causes more transistors to switch, which thus takes more power). This seems pretty believable to me. At the high end, the system is drawing just under 160 watts of power. The AMD Ryzen 9 3950x is rated at a 105 watt TDP, and with CPB turned off it should be pretty close to this number. My rough back of the hand calculation for this rig was as follows:

  1. CPU Loaded Power = 105 Watts
  2. GPU Idle Power (Nvidia GTX 1650) = 10 Watts
  3. Motherboard Power = 15 Watts
  4. Ram Power = 2 watts * 4 sticks = 8 watts
  5. NVME Power = 2 watts * 2 drives = 4 watts
  6. SSD Power = 2 watts

Total Estimated Watts @ F@H CPU Load = 144 Watts

Factor in a boat load of case fans, some silly LED lights, and a bit of PSU efficiency hit (about 90% efficient for my Seasonic unit) and it’ll be close to the 160 watts as measured.
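Summing the itemized estimate and dividing by the PSU efficiency reproduces the measured wall power pretty closely (fans and LEDs omitted; all of the component wattages are rough estimates from above, not measurements):

```python
# Component-level DC power estimate (watts), as itemized above
components = {
    "CPU (loaded, at TDP)": 105,
    "GPU idle (GTX 1650)": 10,
    "Motherboard": 15,
    "RAM (4 sticks x 2 W)": 8,
    "NVMe (2 drives x 2 W)": 4,
    "SATA SSD": 2,
}
dc_total = sum(components.values())        # 144 W of estimated DC load

# Wall draw = DC load / PSU efficiency (~90% for this Seasonic Titanium)
wall_estimate = dc_total / 0.90
print(f"{dc_total} W DC -> about {wall_estimate:.0f} W at the wall")
```

That lands right at the just-under-160-watt reading from the Kill A Watt, which gives me some confidence in the component-level breakdown.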

Efficiency

This being a blog about saving the planet while still doing science with computers, I am very interested in energy efficiency. For Folding@Home, this means doing the most work (PPD) for the least amount of power (watts). So, this plot is just PPD/Watts. Easy!

Similar to the PPD plot, this efficiency plot averages five data points for each thread setting. I chose to leave off the individual points and the confidence interval, because that looks about the same on this plot as it does on the PPD plot, and leaving all the clutter off makes this easier to read.

Ryzen_3950x_Efficiency_SMT_On_CPB_Off

As with the PPD plot, there seem to be two regions on the efficiency curve. The first region (threads less than 16) shows a pretty good linear ramp-up in efficiency as more threads are added. The second region (threads 16 or greater) is what I’m calling the “core saturation” region. Here, there are more threads than physical cores, and efficiency stays relatively flat. It actually drops off at 16 cores (similar to the PPD plot), and doesn’t start improving again until 24 or more threads are allocated to the solver.

This plot, at first glance, suggests that the maximum efficiency is realized at # of threads = 30. However, it should be noted that work unit variation still has a lot of influence, even with reporting results as a 5-sample average. You can see this effect by looking at the efficiency drop at threads = 31. Theoretically, the efficiency should be the same at threads = 31 and threads = 30, because the solver runs a 30-thread solution even when set to 31, to avoid domain decomposition problems (31 is prime).

Thus, similar to the PPD plot, I’d say the max efficiency is effectively achieved at thread counts of 30 and 32. My personal opinion is that you might as well run with # of threads = 30 (leaving two threads free for other tasks). This setting results in the maximum PPD as well.

Weird Results at Threads = 16-23

Some of you might be wondering why the performance and efficiency drops off when the thread count is set to the actual number of cores (16) or higher. I was too, so I re-ran some tests and looked at what was happening with AMD’s built-in Ryzen Master tool. As you can see in the screen shot below, even though the # of threads was set to 18 in Folding@Home (a number greater than the 16 physical cores), not all 16 cores were fully engaged on the processor. In fact, only 14 were clocked up, and two were showing relatively lazy clock rates.

Two Cores are Lazy!

Folding@Home 18-Thread CPU Solve on 16-Core Processor

I suspect what is happening is that some of the threads were loaded onto “virtual” CPU cores (i.e. SMT / hyper threading). This might be something Windows 10 does to preserve a few free CPU cores for other tasks. In fact, I didn’t see all of the cores turbo up to full speed until I set Folding@Home’s thread count to 24. This incidentally is when performance starts coming back in on the plots above.

This weird SMT / Hyper-threading behavior is likely what is responsible for the large drop-off / flat part of the performance and efficiency curves that exists from thread count = 16 to 23. As you can see in the picture below, once you fully load all the available threads, the CPU frequencies on each core all hit the maximum value, as expected.

Ryzen_Master_32_Thread_Solve

Folding@Home 32-Thread CPU Solve on 16-Core Processor

Results Comparison

The following plots compare overall performance, power consumption, and efficiency of my new AMD Ryzen 9 3950x Folding@Home rig to other hardware configurations I have tested so far.

Performance

As you can see from the plot below, the Ryzen 9 3950x running a 32-thread Folding@Home solve can compete with relatively modern graphics cards in terms of raw performance. High-end GPUs will still offer more performance, but for a processor, getting over 400K PPD is very impressive. This is significantly more PPD than the previous processors I have tested (AMD Bulldozer-based FX-8320e, AMD Phenom II X6 1100t, Intel Core2Quad Q6600, etc). Admittedly I have not tested very many CPUs, since this is much more involved than just swapping out graphics cards to test.

AMD Ryzen 9 3950x Performance

Power Consumption

From a total system power consumption standpoint, my new benchmark machine with the AMD Ryzen 9 3950x has a surprisingly low total power draw when running Folding. Another interesting point is that since the 3950x lacks onboard graphics, I had to have a graphics card installed to get a display. In my case, I had the Nvidia GTX 1650 installed, since this is a relatively low-power card that should add minimal overhead. As you can see below, folding on the 3950x CPU (with the 1650 GPU idle) uses nearly the same amount of power as folding on the 1650 GPU (with the 3950x idle).

AMD Ryzen 9 3950x Power Consumption

Efficiency

Efficiency is the point of this blog, and in this respect the 3950x comes in towards the upper middle of the pack of hardware configurations I have tested. It’s definitely the most efficient processor I have tested so far, but graphics cards such as the 1660 Super and 1080 Ti are more efficient. Despite drawing more total power from the wall, these high-end GPUs do a lot more science.

Still, a PPD/Watt of over 2500 is not bad, and in this case the 3950x is more efficient than folding on the modest GPU installed in the same box (the Nvidia GTX 1650). Compared to the much older AMD FX-8320e, the Ryzen 9 3950x is 14x more efficient! What a difference 7 years can make!

AMD Ryzen 9 3950x Efficiency

Conclusion

The 16-core, 32-thread AMD Ryzen 9 3950x is one fast processor, and can do a lot of science for the Folding@Home distributed computing project. Although mid to high-end graphics cards such as the 1080 Ti ($450 on the used market) can outperform the $700 3950x in terms of performance and efficiency, it is still important to have a smattering of high-end CPU folding rigs on the Folding@Home network, because some molecules can only be solved on CPUs.

There is a general trend of increasing efficiency and performance as the # of CPU threads allocated to Folding@Home increases. For the Ryzen 9 3950x, using a setting of 30 or 32 threads is recommended for maximum performance and efficiency. If you plan on using your computer for other tasks, or for simultaneously folding on the GPU, 30 threads is the ideal CPU slot setting.

Please Support My Blog!

If you are interested in measuring the power consumption of your own computer (or any device), please consider purchasing a P3 Kill A Watt Power Meter from Amazon. You’ll be surprised what a $35 investment in a watt meter can tell you about your home’s power usage, and if you make a few changes based on what you learn you will save money every year! Using this link won’t cost you anything extra, but will provide me with a small percentage of the sale to support the site hosting fees of GreenFolding@Home.

If you enjoyed this article, perhaps you are in the market for an AMD Ryzen 9 3950x or similar Ryzen processor. If so, please consider using one of the links below to buy one from Amazon. Thanks for reading!

AMD Ryzen 9 3950x Direct Link

AMD Ryzen (Amazon Search)

Future Work

In the next article, I'll disable multithreading (SMT) to see the effect of logical CPU cores on Folding@Home performance.

Later, I plan to enable core performance boost on the 3950x to see what effect the automatic clock frequency and voltage overclocking has on Folding@Home performance and efficiency.

 

 

New Folding@Home Benchmark Machine: It’s RYZEN TIME!

Folding@Home, the distributed computing project that fights diseases such as COVID-19 and cancer, has hit an all-time high in popularity. I'm stunned to find that my blog is now getting more views every day than it did every month last year. With that said, this is a perfect opportunity to reach out and see if all the new donors are interested in tuning their computers for efficiency: saving a little on power, lightening the burden on their wallets, and hopefully producing nearly the same amount of science. If this sounds interesting to you, let me know in the comments below!

In my last post, I noted that the latest generation of graphics cards are starting to push the limits of what my primary GPU Folding@Home benchmark rig can do. That computer is based on an 11-year-old chipset (AMD 880), and only supports PCI-Express 2.0. In order for me to keep testing modern fast graphics cards in Windows 10, I wanted to make sure that PCI-Express slot bandwidth wasn’t going to artificially bottleneck me.

So, without further ado, let me present the new, re-built Folding@Home rig, SAGITTA:

Sagitta Desktop

I’ve (re)created a monster!

This build leverages the Raidmax Sagitta case that I've had since 2006. This machine has hosted multiple builds (Pentium D 805, Core 2 Duo e8600, Core 2 Quad Q6600, Phenom II X6 1100T, and the most recent FX-8320e Bulldozer). There have been too many graphics cards to count, but the latest one (Nvidia GTX 1650 by Zotac) was carried over for some continuity testing. The case fans and power supply were (initially) also carried over from the previous FX build (they aren't the same ones from back in 2006…those got loud and died long ago). I also kept my Blu-Ray drive and 3.5 inch card reader. That's where the similarities end. Here is a specs comparison:

Sagitta Rebuild Benchmark Machine Specs

  • Note I ended up updating the power supply to the one shown in the table. More on that below…

System Power Consumption

Initially, the power consumption at idle of the new Ryzen 9 build, measured with my P3 Kill A Watt Meter, was 86 watts. The power consumption while running GPU Folding was 170 watts (and the all-core CPU folding was over 250 watts, but that’s another article entirely).

Using the same Nvidia GeForce GTX 1650 graphics card, these idle and GPU folding power numbers were unfortunately higher than on the old benchmark machine, which came in at 70 watts idle and 145 watts load. This is likely due to the overkill hardware that I put into the new rig (X570 motherboards alone are known to draw twice the power of a more typical board). The 25-watt difference while folding was especially problematic for my efficiency testing, since efficiency plots for graphics cards tested on the new machine would not be directly comparable to those from the old one.

To solve this, I could either:

A: Use a 25 watt offset to scale the new GPU F@H efficiency plots

B: Do nothing and just have less accurate efficiency comparisons to previous tests

C: Reduce the power consumption of the new build so that it matches the old one

This being a blog about energy efficiency, I decided to go with Option C, since that's the one that actually helps the environment. Let's see if we can trim the fat off of this beast of a computer!

Efficiency Boost #1: Power Supply Upgrade

The first thing I tried was to upgrade the power supply. As noted here, the power supply's efficiency rating is a great place to start when building an energy-efficient machine. My old Seasonic X-650 is a very good power supply, and carries an 80+ Gold rating. Still, things have come a long way, and switching to an 80+ Titanium PSU can gain a few efficiency percentage points, especially at low loads.

80+ Efficiency Table

With that 3-5% efficiency boost in mind, I picked up a new Seasonic 750 Watt Prime 80+ Titanium modular power supply. At $200, this PSU isn't cheap, but it provides a noticeable efficiency improvement at both idle and load. Other nice features were the additional 100 watts of capacity, and the fact that it supports my new motherboard's dual (8-pin + 4-pin) CPU auxiliary power connectors. That extra 4-pin isn't required to make the X570 board work, but it does allow for more overclocking headroom.

Disclaimer: Before we get into it, I should note that these power readings are “eyeball” readings, taken by glancing at the watt meter and trying to judge the average usage. The actual number jumps around a bit (even at idle) as the computer executes various background tasks. I’d say the measurement precision on any eyeball watt meter readings is +/- 5 watts, so take the below with a grain of salt. These are very small efficiency improvements that are difficult to measure, and your mileage may vary. 

After upgrading the power supply, idle power dropped an impressive 10 watts, from 86 watts to 76. This is an awesome 11% efficiency improvement. This might be due to the new 80+ Titanium power supply having an efficiency target at very low loads (90% efficiency at 10% load), whereas the old 80+ Gold spec did not have a low load efficiency requirement. Thus, even though I used a large 750 watt power supply, the machine can still remain relatively efficient at idle.

Under moderate load (GPU folding), the new 80+ Titanium PSU provided a 4% efficiency improvement, dropping the power consumption from 170 watts to 163. This is more in line with expectations.
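For reference, the percent savings quoted above are computed the obvious way. A quick sketch using the measured wattages:

```python
def pct_savings(before_w: float, after_w: float) -> float:
    """Percent reduction in wall power from a hardware change."""
    return 100.0 * (before_w - after_w) / before_w

idle_gain = pct_savings(86, 76)    # ~11.6% (the "11%" quoted above)
load_gain = pct_savings(170, 163)  # ~4.1%
print(round(idle_gain, 1), round(load_gain, 1))
```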

Efficiency Boost #2: Processor Underclock / Undervolt

Thanks to the video gaming mentality, enthusiast-grade desktop processors and motherboards are tuned out of the box for performance. We're talking about blistering-fast, competition-crushing benchmark scores. For most computing tasks (such as running Folding@Home on a graphics card), this aggressive CPU behavior wastes electricity while offering no discernible performance benefit. Despite what my kid's shirt says, we need to rein these power-hungry CPUs in for maximum GPU folding efficiency.

Never Slow Down

Kai Says: Never Slow Down

One way to improve processor efficiency is to reduce the clock rate and associated voltage. I’d previously investigated this here. It takes exponentially more voltage to support high frequencies, so just by dropping the clock rate by 100 MHz or so, you can lower the voltage a bunch and save on power.
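The reason a small frequency drop buys a large power savings is the classic CMOS dynamic power relation, P ∝ V² × f. Here's a rough sketch of the math; the voltage and frequency numbers below are hypothetical, just to show the shape of the curve:

```python
def relative_dynamic_power(volts: float, freq_ghz: float,
                           volts_ref: float, freq_ref_ghz: float) -> float:
    """CMOS dynamic power scales roughly as V^2 * f (the capacitance term
    cancels out when comparing the same chip against itself)."""
    return (volts / volts_ref) ** 2 * (freq_ghz / freq_ref_ghz)

# Hypothetical: back off 100 MHz and 0.10 V from a 4.0 GHz / 1.30 V point.
print(relative_dynamic_power(1.20, 3.9, 1.30, 4.0))  # ~0.83, i.e. ~17% less power
```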

With the advent of processors that up-clock and up-volt themselves (as well as going in the other direction), manual tuning can be a bit more difficult. It’s far easier to first try the automatic settings, to see if some efficiency can be gained.

But wait, this is a GPU folding benchmark rig, right? Why do the CPU's frequency and power settings matter?

For GPU folding with an Nvidia graphics card, one CPU core is fully loaded per GPU slot in order to “feed” the card. This is because Nvidia's OpenCL implementation uses a polling (checking) method. In order to keep the graphics card chugging along, the CPU constantly checks on the GPU to see if it needs any data. This polling loop is not efficient and burns unnecessary power. You can read more about it here: https://foldingforum.org/viewtopic.php?f=80&t=34023. In contrast, AMD's interrupt-based method is a much more graceful implementation that doesn't tie up a CPU core.

The constant polling loop drives modern gaming-oriented processors to clock up their cores unnecessarily. For the most part, the GPU does not need work at every waking moment. To save power, we can turn down the frequency, so that the CPU is not constantly knocking on the GPU’s metaphorical door.
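The two driver strategies can be caricatured in a few lines of Python. This is a toy sketch of the concept, not actual driver code:

```python
import threading

gpu_needs_data = threading.Event()

def feeder_polling():
    """Nvidia-style: spin on the flag, pegging a CPU core at 100%."""
    while not gpu_needs_data.is_set():
        pass  # busy-wait burns power doing no useful work
    return "fed GPU"

def feeder_blocking():
    """AMD-style (interrupt-like): sleep until signaled, near-zero CPU use."""
    gpu_needs_data.wait()
    return "fed GPU"

# Simulate the GPU raising its hand after 10 ms; the blocking feeder
# sleeps until then instead of spinning.
threading.Timer(0.01, gpu_needs_data.set).start()
print(feeder_blocking())
```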

To do this, I disabled AMD’s Core Performance Boost (CPB) in the AMD Overclocking section of the BIOS (same thing as Intel’s Turbo Boost). This caps the processor speed at the base maximum clock rate (3.5 GHz for the Ryzen 9 3950x), and also eliminates any high voltage values required to support the boost clocks.

Success! GPU folding total system power consumption is now much lower. With less superfluous power draw from the CPU, the wattage is much more comparable to the old Bulldozer rig.

Ryzen 9 3950x Power Reduction Table

It is interesting that idle power consumption came down as well. That wasn't expected; when the computer isn't doing anything, the CPU cores should be down-clocked or parked anyway. Perhaps my machine was doing something in the background during the earlier tests, throwing the results off. More investigation is needed.

GPU Benchmark Consistency Check

I fired up GPU folding on the Nvidia GeForce GTX 1650, a card that I have performance data for from my previous benchmark desktop. After monitoring it for a week, the Folding@Home Points Per Day performance was so similar to the previous results that I ended up using the same value (310K PPD) as the official estimate for the 1650’s production. This shows that the old benchmark rig was not a bottleneck for a budget card like the GeForce GTX 1650.

Using the updated system power consumption of nominally 140 watts (vs 145 watts of the previous benchmark machine), the efficiency plots (PPD/Watt) come out very nearly the same. I typically consider power measurements of + / – 5 watts to be within the measurement accuracy of my eyeball on the watt meter anyway, due to normal variations as the system runs. The good news is that even with this variation, it doesn’t change the conclusion of the figure (in terms of graphics card efficiency ranking).

GTX 1650 Efficiency on Ryzen 9

* Benchmark performed on updated Ryzen 9 build

Conclusion

I have a new 16-core beast of a benchmark machine. This computer wasn’t built exclusively for efficiency, but after a few tweaks, I was able to improve energy efficiency at low CPU loads (such as Windows Idle + GPU Folding).

For most of the graphics cards I have tested so far, the massive upgrade in system hardware will likely not affect performance or efficiency results. Very fast cards, such as the 1080 Ti, might benefit from the new benchmark rig's faster hardware, especially that PCI-Express 4.0 x16 graphics card slot. Most importantly, future tests of blistering-fast graphics cards (2080 Ti, 3080 Ti, etc.) will probably not be limited by the benchmark machine's supporting hardware.

Oh, and I can now encode backup copies of my Blu-ray movies at 40 fps in H.265 in Handbrake (the old speed was 6.5 fps on the FX-8320e). That's a nice bonus too.

Efficiency Note (for GPU Folding@Home Users)

Disabling the automatic processor frequency and voltage scaling (Turbo Boost / Core Performance Boost) didn’t have any effect on the PPD being generated by the graphics card. This makes sense; even relatively slow 2.0 GHz CPU cores are still fast enough to feed most GPUs, and my modern Ryzen 9 at 3.5 GHz is no bottleneck for feeding the 1650. By disabling CPB, I shaved 23 watts off of the system’s power consumption for literally no performance impact while running GPU folding. This is a 16 percent boost in PPD/Watt efficiency, for free!

This also dropped CPU temps from 70 degrees C to 55, and resulted in a lower CPU cooler fan speed / quieter machine. This should promote longevity of the hardware, and reduce how much my computer fights my air conditioning in the summer, thus having a compounding positive effect on my monthly electric bill.
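The 16 percent figure falls straight out of the measurements: the PPD didn't change, so the efficiency gain is just the ratio of the two power readings.

```python
ppd = 310_000                            # GTX 1650 output, unchanged by the CPB setting
watts_cpb_on, watts_cpb_off = 163, 140   # measured wall power

gain = (ppd / watts_cpb_off) / (ppd / watts_cpb_on) - 1
print(f"{gain:.1%}")  # 16.4% -- the PPD cancels out; only the wattage ratio matters
```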

Future Articles

  • Re-Test the 1080 Ti to see if a fast graphics card makes better use of the faster PCI-Express bus on the AM4 build
  • Investigate CPU folding efficiency on the Ryzen 9 3950x

 

Shout out to the helpers…Kai and Sam

Folding@Home on Turing (NVidia GTX 1660 Super and GTX 1650 Combined Review)

Hey everyone. Sorry for the long delay (I have been working on another writing project, more on that later…). Recently I got a pair of new graphics cards based on Nvidia's new Turing architecture. This has been advertised as being more efficient than the outgoing Pascal architecture, and it is the basis of the popular RTX series GeForce cards (2060, 2070, 2080, etc.). It's time to see how well they do some charitable computing, running the now world-famous disease research distributed computing project Folding@Home.

Since those RTX cards with their ray-tracing cores (which do nothing for Folding) are so expensive, I opted to start testing with two lower-end models: the GeForce GTX 1660 Super and the GeForce GTX 1650.

 

These are really tiny cards, and should be perfect for some low-power-consumption summertime folding. Also, today is the first time I've tested anything from Zotac (the 1650). The 1660 Super is from EVGA.

GPU Specifications

Here’s a quick table I threw together comparing these latest Turing-based GTX 16xx series cards to the older Pascal lineup.

Turing GPU Specs

It should be immediately apparent that these are very low power cards. The GTX 1650 has a design power of only 75 watts, and doesn’t even need a supplemental PCI-Express power cable. The GTX 1660 Super also has a very low power rating at 125 Watts. Due to their small size and power requirements, these cards are good options for small form factor PCs with non-gaming oriented power supplies.

Test Setup

Testing was done in Windows 10 using Folding@Home Client version 7.5.1. The Nvidia Graphics Card driver version was 445.87. All power measurements were made at the wall (measuring total system power consumption) with my trusty P3 Kill-A-Watt Power Meter. Performance numbers in terms of Points Per Day (PPD) were estimated from the client during individual work units. This is a departure from my normal PPD metric (averaging the time-history results reported by Folding@Home’s servers), but was necessary due to the recent lack of work units caused by the surge in F@H users due to COVID-19.

Note: This will likely be the last test I do with my aging AMD FX-8320e based desktop, since the motherboard only supports PCI Express 2.0. That is not a problem for the cards tested here, but Folding@Home on very fast modern cards (such as the RTX 2080 Ti) shows a modest slowdown (around 10%) if the cards are limited by PCI Express 2.0 x16. Thus, in the next article, expect to see a new benchmark machine!

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Card under test (GTX 1660 Super / GTX 1650)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

Goal of the Testing

For those of you who have been following along, you know that the point of this blog is to determine not only which hardware configurations can fight the most cancer (or coronavirus), but to determine how to do the most science with the least amount of electrical power. This is important. Just because we have all these diseases (and computers to combat them with) doesn’t mean we should kill the planet by sucking down untold gigawatts of electricity.

To that end, I will be reporting the following:

Amount of Science Performed: Points Per Day (PPD)

System Power Consumption (Watts)

Folding Efficiency (PPD/Watt)

As a side note, I used MSI Afterburner to reduce the GPU power limit of the GTX 1660 Super and GTX 1650 to the minimum allowed by the driver / board vendor (in this case, 56% for the 1660 Super and 50% for the 1650). My previous testing, plus results reported by various people on the Folding@Home forums and elsewhere, has shown that reducing the power cap on the card can yield an efficiency boost. Let's see if that holds true for the Turing architecture!
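Assuming the Afterburner power-limit slider is a percentage of each card's board power rating (my interpretation, worth double-checking per vendor), the resulting caps work out as follows:

```python
# Assumption: the Afterburner power-limit percentage applies to board TDP.
cards = {
    "GTX 1660 Super": {"tdp_w": 125, "limit_pct": 56},
    "GTX 1650":       {"tdp_w": 75,  "limit_pct": 50},
}
for name, c in cards.items():
    cap_w = c["tdp_w"] * c["limit_pct"] / 100
    print(f"{name}: ~{cap_w:.1f} W board power cap")
```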

Performance

The following plots show the two new Turing architecture cards relative to everything else I have tested. As can be seen, these little cards punch well above their weight class, with the GTX 1660 Super and GTX 1650 giving the 1070 Ti and 1060 a run for their money. Also, the power throttling applied to the cards did reduce raw PPD, but not by too much.

Nvidia GTX 1650 and 1660 performance

Power Draw

This is the plot where I was most impressed. In the summer, any Folding@Home I do directly competes with the air conditioning. Running big graphics cards, like the 1080 Ti, drives my power bill up twice: once for the computer itself, and again for the extra air conditioning needed to remove the heat it dumps into the room.

Thus, for people in hot climates, extra consideration should be given to the overall power consumption of your Folding@Home computer. With the GTX 1660 Super running in reduced power mode, I was able to get a total system power consumption of just over 150 watts while still making over 500K PPD! That's not half bad. On the super-low-power end, I was able to beat the GTX 1050's power consumption level…getting my beastly FX-8320e 8-core rig to draw 125 watts total while folding was quite a feat. The best thing was that it still made almost 300K PPD, which is well above last generation's small cards.

Nvidia GTX 1650 and 1660 Power Consumption

Efficiency

This is my favorite part. How do these low-power Turing cards do on the efficiency scale? This is simply looking at how many PPD you can get per watt of power draw at the wall.

Nvidia GTX 1650 and 1660 Efficiency

And…wow! Just wow. For about $220 new, you can pick up a GTX 1660 Super and be just as efficient as the previous generation's top card (the GTX 1080 Ti), which still goes for $400-500 used on eBay. Sure, the 1660 Super won't be as good of a gaming card, and it makes only about two-thirds the PPD of the 1080 Ti, but on an energy efficiency metric it holds its own.

The GTX 1650 did pretty well too, coming in somewhere towards the middle of the pack. It is still much more efficient than the comparable market-segment card of the previous generation (the GTX 1050), but it is held back overall by not being able to return work units as quickly, since the scientists reward fast turnaround with bonus points (the Quick Return Bonus).

Conclusion

NVIDIA’s entry-level Turing architecture graphics cards perform very well in Folding@Home, both from a performance and an efficiency standpoint. They offer significant gains relative to legacy cards, and can be a good option for a budget Folding@Home build.

Join My Team!

Interested in fighting COVID-19, Cancer, Alzheimer’s, Parkinson’s, and many other diseases with your computer? Please consider downloading Folding@Home and joining Team Nuclear Wessels (54345). See my tutorial here.

Interested in Buying a GTX 1660 or GTX 1650?

Please consider supporting my blog by using one of the below Amazon affiliate search links to find your next card! It won’t cost you anything extra, but will provide me with a small part of Amazon’s profit so I can keep paying for this site.

GTX 1660 Amazon Search Affiliate Link!

GTX 1650 Amazon Search Affiliate Link!

Folding@Home Review: NVIDIA GeForce GTX 1080 Ti

Released in March 2017, Nvidia’s GeForce GTX 1080 Ti was the top-tier card of the Pascal line-up. This is the graphics card that super-nerds and gamers drooled over. With an MSRP of $699 for the base model, board partners such as EVGA, Asus, Gigabyte, MSI, and Zotac (among others) all quickly jumped on board (pun intended) with custom designs costing well over the MSRP, as well as their own takes on the reference design.

GTX 1080 Ti Reference EVGA

EVGA GeForce GTX 1080 Ti – Reference

Three years later, with the release of the RTX 2080 Ti, the 1080 Ti still holds its own, and still commands well over $400 on the used market. These are beastly cards, capable of running most games with max settings in 4K resolutions.

But, how does it fold?

Folding@Home

Folding@Home is a distributed computing project originally developed by Stanford University, where everyday users can lend their PC's computational horsepower to help disease researchers understand and fight things like cancer, Alzheimer's, and most recently the COVID-19 coronavirus. Users' computers solve molecular dynamics problems in the background, which helps the Folding@Home Consortium understand how proteins “misfold” to cause disease. For computer nerds, this is an awesome way to give (money –> electricity –> computer work –> fighting disease).

Folding@Home (or F@H) can be run on both CPUs and GPUs. CPUs provide a good baseline of performance, and certain molecular simulations can only be done there. However, GPUs, with their massively parallel shader cores, can do certain types of single-precision math much faster than CPUs, and they provide the majority of F@H's computational performance.

Geforce GTX 1080 Ti Specs

The 1080 Ti is at the top of Nvidia’s lineup of their 10-series cards.

1080 Ti Specs

With 3584 CUDA Cores, the 1080 Ti is an absolute beast. In benchmarks, it holds its own against the much newer RTX cards, besting even the RTX 2080 and matching the RTX 2080 Super. Only the RTX 2080 Ti is decidedly faster.

Folding@Home Testing

Testing was performed on my old but trusty benchmark machine, running Windows 10 Pro and using Stanford's V7 client. The Nvidia graphics driver version was 441.87. Power consumption measurements were taken at the system level, using a P3 Kill A Watt meter at the wall.

System Specs:

  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: EVGA 1080 Ti (Reference Design)
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit

I did extensive testing of the 1080 Ti over many weeks. Folding@Home rewards donors with “Points” for their contributions, based on how much science is done and how quickly it is returned. A typical performance metric is “Points per Day” (PPD). Here, I have averaged my Points Per Day results out over many work units to provide a consistent number. Note that any given work unit can produce more or less PPD than the average, with variation of 10% being very common. For example, here are five screen shots of the client, showing five different instantaneous PPD values for the 1080 Ti.

 

GTX 1080 Ti Folding@Home Performance

The following plot shows just how fast the 1080 Ti is compared to other graphics cards I have tested. As you can see, with nearly 1.1 Million PPD, this card does a lot of science.

1080 Ti Folding Performance

GTX 1080 Ti Power Consumption

With a board power rating of 250 Watts, this is a power hungry graphics card. Thus, it isn’t surprising to see that power consumption is at the top of the pack.

1080 Ti Folding Power

GTX 1080 Ti Efficiency

Power consumption alone isn’t the whole story. Being a blog about doing the most work possible for the least amount of power, I am all about finding Folding@Home hardware that is highly efficient. Here, efficiency is defined as Performance Out / Power In. So, for F@H, it is PPD/Watt. The best F@H hardware is gear that maximizes disease research (performance) done per watt of power consumed.

Here’s the efficiency plot.

1080 Ti Folding Efficiency

Conclusion

The GeForce GTX 1080 Ti is the fastest and most efficient graphics card that I've tested so far for Stanford's Folding@Home distributed computing project. With a raw performance of nearly 1.1 million PPD in Windows and an efficiency of almost 3500 PPD/Watt, this card is a good choice for doing science effectively.

Stay tuned to see how Nvidia’s latest Turing architecture stacks up.

GTX 460 Graphics Card Review: Is Folding on Ancient Hardware Worth It?

Recently, I picked up an old Core 2 Duo build on eBay for $25 + shipping. It was missing some pieces (graphics card, drives, etc.), but it was a good deal, especially for the all-metal Antec P182 case and the included Corsair PSU + Antec 3-speed case fans. So, I figured what the heck, let's see if this vintage rig can fold!

Antec 775 Purchase

To complement this old Socket 775 build, I picked up a well-loved EVGA GeForce GTX 460 on eBay for a grand total of $26.85. It should be noted that this generation of Nvidia graphics cards (based on the Fermi architecture from back in 2010) is the oldest GPU hardware still supported by Stanford. It will be interesting to see how much science one of these old cards can do.

GTX 460 Purchase

I supplied a dusty Western Digital 640 Black Hard Drive that I had kicking around, along with a TP Link USB wireless adapter (about $7 on Amazon). The Operating System was free (go Linux!). So, for under $100 I had this setup:

  • Case: Antec P182 Steel ATX
  • PSU: Corsair HX 520
  • Processor: Intel Core 2 Duo E8300
  • Motherboard: EVGA nForce 680i SLI
  • Ram: 2 x 2 GB DDR2 6400 (800 MHz)
  • HDD: Western Digital Black 640GB
  • GPU: EVGA GeForce GTX 460
  • Operating System: Ubuntu Linux 18.04
  • Folding@Home Client: V7

I fired up folding, and after some fiddling I got it running nice and stable. The first thing I noticed was that the power draw was higher than I had expected. Measured at the wall, this vintage folding rig was consuming a whopping 220 Watts! That’s a good deal more than the 185 watts that my main computer draws when folding on a modern GTX 1060. Some of this is due to differences in hardware configuration between the two boxes, but one thing to note is that the older GTX 460 has a TDP of 160 watts, whereas the GTX 1060 has a TDP of only 120 Watts.

Here’s a quick comparison of the GTX 460 vs the GTX 1060. At the time of their release, both of these cards were Nvidia’s baseline GTX model, offering serious gaming performance for a better price than the more aggressive x70- and x80-series variants. I threw a GTX 1080 into the table for good measure.

GTX 460 Spec Comparison

GTX 460 Specification Comparison

The key takeaways here are that six years later, the equivalent graphics card to the GTX 460 was over three and a half times faster while using forty watts less power.

Power Consumption

I typically don’t report power consumption directly, because I’m more interested in optimizing efficiency (doing more work for less power). However, in this case, there is an interesting point to be made by looking at the wattage numbers directly. Namely, the GTX 460 (a mid-range card) uses almost as much power as a modern high-end GTX 1080, and uses significantly more power than the modern mid-range GTX 1060. Note: these power consumption numbers must be taken with a grain of salt, because the GTX 460 was installed in a different host system (the Core 2 Duo rig) than the other cards, but the results are still telling. This is also consistent with the advertised TDP of the GTX 460, which is 40 watts higher than the GTX 1060.

GTX 460 Power Consumption (Wall)

Total System Power Consumption

Folding@Home Results

Folding on the old GTX 460 produced a rough average of 20,000 points per day, with the normal +/- 10% variation in production seen between work units. Back in 2006 when I was making a few hundred PPD on an old Athlon 64 X2 CPU, this would have been a huge amount of points! Nowadays, this is not so impressive. As I mentioned before, the power consumption at the wall for this system was 220 Watts. This yields an efficiency of 20,000 PPD / 220 Watts = 90 PPD/Watt.

Based on the relative performance, one would think the six-year-newer GTX 1060 would produce somewhere between 3 and 4 times as many PPD as the older 460, or roughly 60-80K PPD. However, my GTX 1060 frequently produces over 300K PPD. This is due to Stanford’s Quick Return Bonus, which essentially rewards donors for doing science quickly. You can read more about this incentive-based points system at Stanford’s website. The gist is: the faster you return a work unit to the scientists, the sooner they can get to developing cures for diseases, so they award you more points for fast work. As the performance plot below shows, this quick return bonus really adds up, so that a card that is only 3-4 times faster in raw terms (GTX 1060 vs. GTX 460) ends up with 15 times the F@H performance.
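For the curious, the Quick Return Bonus formula is roughly credit = base × max(1, √(k × deadline / elapsed)), where k is a per-project constant. I'm quoting the formula from memory, so treat the details as approximate, but a sketch with made-up work unit numbers shows how the square-root bonus compounds with raw speed:

```python
import math

def credited_points(base_points, k, deadline_days, elapsed_days):
    """Quick Return Bonus: credit grows with the square root of how far
    ahead of the deadline the work unit comes back."""
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Hypothetical WU: 10,000 base points, 4-day deadline, k = 0.75.
slow = credited_points(10_000, 0.75, 4, 2.0)   # finished in 2 days
fast = credited_points(10_000, 0.75, 4, 0.5)   # a 4x faster card
ppd_ratio = (fast / 0.5) / (slow / 2.0)
print(ppd_ratio)  # ~8: a 4x speed gap becomes roughly 8x the PPD
```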

GTX 460 Performance and Efficiency

Old vs. New Graphics Card Comparison: Folding@Home Efficiency and PPD

This being a blog about energy-conscious computing, I’d be remiss if I didn’t point out just how inefficient the ancient GTX 460 is compared to the newer cards. Due to its relatively high power consumption for a midrange card, the GTX 460 is eighteen times less efficient than the GTX 1060, and a whopping thirty-three times less efficient than the GTX 1080.

Conclusion

Stanford eventually drops support for old hardware (anyone remember PS3 folding?), and it might not be long before they do the same for Fermi-based GPUs. Compared with relatively modern GPUs, the GTX 460 just doesn’t stack up in 2020. Now that the 10-series cards are almost four years old, you can often get GTX 1060s for less than $200 on eBay, so if you can afford to build a folding rig around one of these cards, it will be 18 times more efficient and make 15 times more points.

Still, I only paid about $100 total to build this vintage folding@home rig for this experiment. One could argue that putting old hardware to use like this keeps it out of landfills and still does some good work. Additionally, if you ignore bonus points and look at pure science done, the GTX 460 is “only” about 4 times slower than its modern equivalent.

Ultimately, for the sake of the environment, I can’t recommend folding on graphics cards that are many years out of date, unless you plan on using the machine as a space heater to offset heating costs in the winter. More on that later…

Addendum

Since doing the initial testing and outline for this article, I picked up a GTX 480 and a few GTX 980 Ti cards. Here are some updated plots showing these cards added to the mix. The GTX 480 was tested in the Core2 build, and the GTX 980 Ti in my standard benchmark rig (AMD FX-based Socket AM3 system).

Various GPU Power Consumption

GTX 980 and 480 Performance

GTX 980 and 480 Efficiency

I think the conclusion holds: even though the GTX 480 is slightly faster and more efficient than its little brother, it is still leaps and bounds worse than the more modern cards. The 980 Ti, being a top-tier card from a few generations back, holds its own nicely, and is almost as efficient as a GTX 1060. I’d say the 980 Ti is still a relatively efficient card to use in 2020 if you can get one cheap enough.

AMD Radeon RX 580 8GB Folding@Home Review

Hello again.

Today, I’ll be reviewing the AMD Radeon RX 580 graphics card in terms of its computational performance and power efficiency for Stanford University’s Folding@Home project. For those that don’t know, Folding@Home lets users donate their computer’s computations to support disease research. This consumes electrical power, and the point of my blog is to look at how much scientific work (Points Per Day or PPD) can be computed for the least amount of electrical power consumption. Why? Because in trying to save ourselves from things like cancer, we shouldn’t needlessly pollute the Earth. Also, electricity is expensive!

The Card

AMD released the RX 580 in April 2017 with an MSRP of $229. This is an updated card based on the Polaris architecture. I previously reviewed the RX 480 (also Polaris) here, for those interested. I picked up my MSI-flavored RX 580 in 2019 on eBay for about $120, which is a pretty nice depreciated value. Those who have been following along know that I prefer to buy used video cards that are 2-3 years old, because of the significant initial cost savings, and the fact that I can often sell them for the same as I paid after running Folding for a while.

RX_580

MSI Radeon RX 580

I ran into an interesting problem installing this card: at 11 inches long, it was about a half inch too long for my old Raidmax Sagitta gaming case. The solution was to take the fan shroud off, since that was the part sticking out ever so slightly. This involved an annoying amount of disassembly, since the fans actually needed to be removed from the heat sink for the plastic shroud to come off. Reattaching the fans was a pain (you need a tiny screwdriver that can fit between the fan blade gaps to get at the screws by the hub).

RX_580_noShroud

RX 580 with Fan Shroud Removed. Look at those heat pipes! This card has a 185 Watt TDP (Board Power Rating). 

RX_580_Installed

RX 580 Installed (note the masking tape used to keep the little side LED light plate off of the fan)

RX_580_tightFit

Now That’s a Tight Fit (the PCI Express Power Plug on the video card is right up against the case’s hard drive bays)

The Test Setup

Testing was done on my rather aged, yet still able, AMD FX-based system using Stanford’s Folding@Home V7 client. Since this is an AMD graphics card, I made sure to switch the video card mode to “compute” within the driver panel. This optimizes things for Folding@home’s workload (as opposed to games).

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: MSI Radeon RX 580 8GB
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 19.10.1

 

Performance and Power

I ran the RX 580 through its paces for about a week in order to get a good feel for a variety of work units. The card produced up to 425,000 points per day (PPD), as reported by Stanford’s servers. The average was closer to 375K PPD, so I used that number as my final value for uninterrupted folding. Note that during my testing, I occasionally used the machine for other tasks, so you can see the drops in production on those days.

RX 580 Client

Example of Client View – RX 580

RX580 History

RX 580 Performance – About 375K PPD

I measured total system power consumption at the wall using my P3 Watt Meter. The system averaged about 250 watts. That’s on the higher end of power consumption, but then again this is a big card.
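Putting those two averages together gives the efficiency metric this blog is built around. A minimal sketch of the calculation, using the RX 580 numbers measured above:

```python
# System efficiency = performance (PPD) / wall power (watts).
# Values are the measured averages for the RX 580 system above.
def system_efficiency(ppd, watts):
    """Points per day produced for each watt drawn at the wall."""
    return ppd / watts

print(system_efficiency(375_000, 250))  # → 1500.0 PPD/Watt
```

That 1500 PPD/Watt figure is what gets plotted against the other cards in the efficiency comparison below.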

Comparison Plots

RX 580 Performance

AMD Radeon RX 580 Folding@Home Performance Comparison

RX 580 Efficiency

AMD Radeon RX 580 Folding@Home Efficiency Comparison

Conclusion

For $120 used on eBay, I was pretty happy with the RX 580’s performance. When it was released, it was directly competing with Nvidia’s GTX 1060. All the gaming reviews I read showed that Team Red was indeed able to beat Team Green, with the RX 580 scoring 5-10% faster than the 1060 in most games. The same is true for Folding@Home performance.

However, that is not the end of the story. Where the Nvidia GTX 1060 has a 120 Watt TDP (Thermal Design Power), AMD’s RX 580 needs 185 Watts. It is a power-hungry card, and that shows up in the efficiency plots, which take the raw PPD (performance) and divide out the power consumption in watts as measured at the wall. Here, the RX 580 falls a bit short, although it is still a healthy improvement over the previous-generation RX 480.

Thus, if you care about CO2 emissions and the cost of your folding habits on your wallet, I am forced to recommend the GTX 1060 over the RX 580, especially because you can get one used on eBay for about the same price. However, if you can get a good deal on an RX 580 (say, for $80 or less), it would be a good investment until more efficient cards show up on the used market.

Folding@Home: Nvidia GTX 1080 Review Part 3: Memory Speed

In the last article, I investigated how the power limit setting on an Nvidia Geforce GTX 1080 graphics card could affect the card’s performance and efficiency for doing charitable disease research in the Folding@Home distributed computing project. The conclusion was that a power limit of 60% offers only a slight reduction in raw performance (Points Per Day), but a large boost in energy efficiency (PPD/Watt). Two articles ago, I looked at the effect of GPU core clock. In this article, I’m experimenting with a different variable. Namely, the memory clock rate.

The effect of memory clock rate on video games is well defined. Gamers looking for the highest frame rates typically overclock both their graphics GPU and Memory speeds, and see benefits from both. For computation projects like Stanford University’s Folding@Home, the results aren’t as clear. I’ve seen arguments made both ways in the hardware forums. The intent of this article is to simply add another data point, albeit with a bit more scientific rigor.

The Test

To conduct this experiment, I ran the Folding@Home V7 GPU client for a minimum of 3 days continuously on my Windows 10 test computer. Folding@Home points per day (PPD) numbers were taken from Stanford’s servers via the helpful team at https://folding.extremeoverclocking.com. I measured total system power consumption at the wall with my P3 Kill A Watt meter. I used the meter’s KWH function to capture the total energy consumed, and divided by the time the computer was on in order to get an average wattage value (thus eliminating a lot of variability). The test computer specs are as follows:
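The wattage-averaging step is simple enough to sketch in a couple of lines. The sample reading below is hypothetical, not one of the article’s actual measurements:

```python
def average_watts(kwh, hours):
    """Average wall power over a logging period, computed from the
    Kill A Watt meter's cumulative energy reading: KWH / hours gives
    kilowatts, then x1000 for watts. Averaging this way smooths out
    the work-unit-to-work-unit variability in instantaneous draw."""
    return kwh / hours * 1000

# Hypothetical example: 41.0 KWH logged over a 168-hour week.
print(average_watts(41.0, 168))  # ≈ 244 W
```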

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Asus GeForce 1080 Turbo
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 372.90

I ran this test with the memory clock rate at the stock clock for the P2 power state (4500 MHz), along with the gaming clock rate of 5000 MHz and a reduced clock rate of 4000 MHz. This gives me three data points of comparison. I left the GPU core clock at +175 MHz (the optimum setting from my first article on the 1080 GTX) and the power limit at 100%, to ensure I had headroom to move the memory clock without affecting the core clock. I verified I wasn’t hitting the power limit in MSI Afterburner.

*Update. Some people may ask why I didn’t go beyond the standard P0 gaming memory clock rate of 5000 MHz (same thing as 10,000 MHz double data rate, which is the card’s advertised memory clock). Basically, I didn’t want to get into the territory where the GDDR5’s error checking comes into play. If you push the memory too hard, there can be errors in the computation but work units can still complete (unlike a GPU core overclock, where work units will fail due to errors). The reason is the built-in error checking on the card memory, which corrects errors as they come up but results in reduced performance. By staying away from 5000+ MHz territory on the memory, I can ensure the relationship between performance and memory clock rate is not affected by memory error correction.

1080 Memory Boost Example

Memory Overclocking Performed in MSI Afterburner

Tabular Results

I put together a table of results in order to show how the averaging was done, and the # of work units backing up my +500 MHz and -500 MHz data points. Having a bunch of work units is key, because there is significant variability in PPD and power consumption numbers between work units. Note that the performance and efficiency numbers for the baseline memory speed (+0 MHz, aka 4500 MHz) come from my extended testing baseline for the 1080 and have even more sample points.

Geforce 1080 PPD Production - Ram Study

Nvidia GTX 1080 Folding@Home Production History: Data shows increased performance with a higher memory speed

Graphic Results

The following graphs show the PPD, Power Consumption, and Efficiency curves as a function of graphics card memory speed. Since I had three points of data, I was able to do a simple three-point-curve linear trendline fit. The R-squared value of the trendline shows how well the data points represent a linear relationship (higher is better, with 1 being ideal). Note that for the power consumption, the card seems to have used more power with a lower memory clock rate than the baseline memory clock. I am not sure why this is…however, the difference is so small that it is likely due to work unit variability or background tasks running on the computer. One could even argue that all of the power consumption results are suspect, since the changes are so small (on the order of 5-10 watts between data points).
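The three-point trendline fit and its R-squared value can be reproduced with a short script. The PPD numbers here are placeholders chosen for illustration, not the measured data:

```python
import numpy as np

# Memory clock offsets (MHz) vs. average PPD (placeholder values).
x = np.array([-500.0, 0.0, 500.0])
y = np.array([660e3, 700e3, 741e3])

slope, intercept = np.polyfit(x, y, 1)  # least-squares linear trendline

# R-squared: how well the points fit a straight line (1.0 = perfect).
y_fit = slope * x + intercept
ss_res = np.sum((y - y_fit) ** 2)        # residual sum of squares
ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot
```

With only three points, a high R-squared is easy to achieve, so it should be read as a consistency check rather than proof of a linear relationship.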

Geforce 1080 Performance vs Ram Speed

Geforce 1080 Power vs Ram Speed

Geforce 1080 Efficiency vs Ram Speed

Conclusion

Increasing the memory speed of the Nvidia Geforce GTX 1080 results in a modest increase in PPD and efficiency, and arguably a slight increase in power consumption. The differences between the fastest (+500 MHz) and slowest (-500 MHz) data points I tested are:

PPD: +81K PPD (11.5%)

Power: +9.36 Watts (3.8%)

Efficiency: +212.8 PPD/Watt (7.4%)

Keep in mind that these are for a massive difference in RAM speed (5000 MHz vs 4000 MHz).
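For reference, the percentage figures above are just relative changes between the two endpoint averages. A minimal sketch, with hypothetical endpoint PPD values chosen only to reproduce the ~11.5% figure:

```python
def percent_change(low, high):
    """Relative gain going from the slowest setting to the fastest."""
    return (high - low) / low * 100

# Hypothetical endpoints: -500 MHz averaging 704K PPD, +500 MHz 785K PPD.
print(round(percent_change(704_000, 785_000), 1))  # → 11.5
```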

Another way to look at these results is that underclocking the graphics card’s RAM in hopes of improving efficiency doesn’t work (you’ll actually lose efficiency). I expect this trend will hold true for the rest of the Nvidia Pascal series of cards (GTX 10xx), although so far my testing has been limited to this one card, so your mileage may vary. Please post any insights if you have them.

NVIDIA GEFORCE GTX 1080 Folding@Home Review (Part 2)

Welcome back. In the last article, I found that the GeForce GTX 1080 is an excellent graphics card for contributing to Stanford University’s charitable distributed computing project Folding@Home. For Part 2 of the review, I did some extended testing to determine the relationship between the card’s power target and Folding@Home performance & efficiency.

Setting the graphics card’s power target to something less than 100% essentially throttles the card back (lowers the core clock) to reduce power consumption and heat. Performance generally drops off, but computational efficiency (performance per watt of power used) can be a different story, especially for Folding@Home. If the amount of power consumed by the card drops off faster than the card’s performance (measured in Points Per Day for Folding@Home), then the efficiency can actually go up!

Test Methodology

The test computer and environment was the same as in Part 1. Power measurements were made at the wall with a P3 Kill A Watt meter, using the KWH function to track the total energy used by the computer and then dividing by the recorded uptime to get an average power over the test period. Folding@Home PPD Returns were taken from Stanford’s collection servers.

To gain useful statistics, I set the power limit on the graphics card driver via MSI Afterburner and let the card run for a week at each setting. Averaging the results over many days is needed to reduce the variability seen across work units. For example, I used an average of 47 work units to come up with the performance of 715K PPD for the 80% Power Limit case:

Work Unit Averaging

80% Power Limit: Average PPD Calculation over Six Days

The only outliers I tossed were one day when my production was messed up by thunderstorms (unplug your computers if there is lightning!), plus one of the days at the 60% power setting, where for some reason the card did almost 900K PPD (probably a string of high-value work units). Other than that, the data was not massaged.
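The outlier screening was done by hand, but the same idea can be sketched with a simple z-score filter. The daily values below are made up, and the cutoff is an assumption for illustration, not the criterion I actually used:

```python
import statistics

def average_ppd(daily_ppd, z_cutoff=1.0):
    """Average daily PPD after dropping days that land far from the
    mean (storm outages, freak strings of high-value work units)."""
    mean = statistics.mean(daily_ppd)
    stdev = statistics.stdev(daily_ppd)
    kept = [p for p in daily_ppd if abs(p - mean) <= z_cutoff * stdev]
    return statistics.mean(kept)

# A storm-shortened day (200K) gets tossed; the rest average out to 715K.
days = [710_000, 715_000, 720_000, 200_000]
print(average_ppd(days))
```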

I tested the card at 100% power target, then at 80%, 70%, 60%, and 50% (90% did not result in any differences vs 100% because folding doesn’t max out the graphics card, so essentially it was folding at around 85% of the card’s power limit even when set to 90% or 100%).

FAH 1080 Power Target Example

Setting the Power Limit in MSI Afterburner

I left the core clock boost setting the same as my final test value from the first part of this review (+175 MHz). Note that this won’t force the card to run at a set faster speed…the power limit constantly being hit causes the core clock to drop. I had to reduce the power limit to 80% to start seeing an effect on the core clock. Further reductions in power limit show further reductions in clock rate, as expected. The approximate relationship between power limit and core clock was this:

Core Clock vs Power Limit

GTX 1080 Core Clock vs. Power Limit

Results

As expected, the card’s raw performance (measured in Points Per Day) drops off as the power target is lowered.

GTX 1080 Performance Part 2

Folding@Home Performance

 

The system power consumption plot is also very interesting. As you can see, I’ve shaved a good amount of power draw off of this build by downclocking the card via the power limit.

GTX 1080 Power Consumption

 

By far, the most interesting result is what happens to the efficiency. Basically, I found that efficiency increases (to a point) with decreasing power limit. I got the best system efficiency I’ve ever seen with this card set to 60% power limit (50% power limit essentially produced the same result).

GTX 1080 Efficiency Part 2

Folding@Home Efficiency

Conclusion

For NVIDIA’s Geforce GTX 1080, decreasing a graphics card’s power limit can actually improve the efficiency of the card for doing charitable computing in Folding@Home. This is similar to what I found when reviewing the 1060. My recommended setting for the 1080 is a power limit of 60%, because that provides a system efficiency of nearly 3500 PPD/Watt and maintains a raw performance of almost 700K PPD.

 

NVIDIA GEFORCE GTX 1080 Folding@Home Review (Part 1)

Intro

It’s hard to believe that the Nvidia GTX 1080 is almost three years old now, and I’m just getting around to writing a Folding@Home review of it. In the realm of graphics cards, this thing is legendary, and only recently displaced from the enthusiast podium by Nvidia’s new RTX series of cards. The 1080 was Nvidia’s top of the line gaming graphics card (next to the Ti edition of course), and has been very popular for both GPU coin mining and cancer-curing (or at least disease research for Stanford University’s charitable distributed computing project: Folding@Home). If you’ve been following along, you know it’s that second thing that I’m interested in. The point of this review is to see just how well the GTX 1080 folds…and by well, I mean not just raw performance, but also energy efficiency.


Quick Stats Comparison

I threw together a quick table to give you an idea of where the GTX 1080 stacks up (I left the newer RTX cards and the older GTX 9-series cards off of here because I’m lazy…).

Nvidia Pascal Cards

Nvidia Pascal Family GPU Comparison

As you can see, the GTX 1080 is pretty fast, eclipsed only by the GTX 1080 Ti (which also has a higher Thermal Design Power, suggesting more electricity usage). From my previous articles, we’ve seen that the more powerful cards tend to do work more efficiently, especially if they are in the same TDP bracket. So, the 1080 should be a better folder (both in PPD and PPD/Watt efficiency) than the 1070 Ti I tested last time.

Test Card: ASUS GeForce GTX 1080 Turbo

As with the 1070 Ti, I picked up a pretty boring flavor of a 1080 in the form of an Asus turbo card. These cards lack back plates (which help with circuit board rigidity and heat dissipation) and use cheap blower coolers, which suck in air from a single centrifugal fan on the underside and blow it out the back of the case (keeping the hot air from building up in the case). These are loud, and tend to run hotter than open-fan coolers, so overclocking and boost clocks are limited compared to aftermarket designs. However, like Nvidia’s own Founder’s Edition reference cards, this reference design provides a good baseline for a 1080’s minimum performance.

ASUS GeForce GTX 1080 Turbo

ASUS GeForce GTX 1080 Turbo

The new 1080 looks strikingly similar to the 1070 Ti…Asus is obviously reusing the exact same cooler since both cards have a 180 Watt TDP.

Asus GTX 1080 and 1070 Ti

Asus GTX 1080 and 1070 Ti (which one is which?)

Test Environment

Like most of my previous graphics card testing, I put this into my AMD FX-Based Test System. If you are interested in how this test machine does with CPU folding, you can read about it here. Testing was done using Stanford’s Folding@Home V7 Client (version 7.5.1) in Windows 10. Points Per Day (PPD) production was collected from Stanford’s servers. Power measurements were done with a P3 Kill A Watt Meter (taken at the wall, for a total-system power profile).

Test Setup Specs

  • Case: Raidmax Sagitta
  • CPU: AMD FX-8320e
  • Mainboard : Gigabyte GA-880GMA-USB3
  • GPU: Asus GeForce 1080 Turbo
  • Ram: 16 GB DDR3L (low voltage)
  • Power Supply: Seasonic X-650 80+ Gold
  • Drives: 1x SSD, 2 x 7200 RPM HDDs, Blu-Ray Burner
  • Fans: 1x CPU, 2 x 120 mm intake, 1 x 120 mm exhaust, 1 x 80 mm exhaust
  • OS: Win10 64 bit
  • Video Card Driver Version: 372.90

Video Card Configuration – Optimize for Performance

In my previous articles, I’ve shown how Nvidia GPUs don’t always automatically boost their clock rates when running Folding@home (as opposed to video games or benchmarks). The same is true of the GTX 1080. It sometimes needs a little encouragement in order to fold at the maximum performance. I overclocked the core by 175 MHz and increased the power limit* by 20% in MSI afterburner using similar settings to the GTX 1070. These values were shown to be stable after 2+ weeks of testing with no dropped work units.

*I also experimented with the power limit at 100% and I saw no change in card power consumption. This makes sense…folding is not using 100% of the GPU. Inspection of the MSI afterburner plots shows that while folding, the card does not hit the power limit at either 100% or 120%. I will have to reduce the power limit to get the card to throttle back (this will happen in part 2 of this article).

As with previous cards, I did not push the memory into its performance zone, but left it at the default P2 (low-power) state clock rate. The general consensus is that memory clock does not significantly affect Folding@Home, and it is better to leave the power headroom for the core clock, which does improve performance. As an interesting side note, the memory clock on this thing jumps up to 5000 MHz (effective) in benchmarks. For example, see the card’s auto-boost settings when running Heaven:

1080 Benchmark Stats

Nvidia GeForce GTX 1080 – Boost Clocks (auto) in Heaven Benchmark

Testing Overview

For most of my tests, I just let the computer run folding@home 24/7 for a couple of days and then average the points per day (PPD) results from Stanford’s stats server. Since the GTX 1080 is such a popular card, I decided to let it run a little longer (a few weeks) to get a really good sampling of results, since PPD can vary a lot from work unit to work unit. Before we get into the duration results, let’s do a quick overview of what the Folding@home environment looks like for a typical work unit.

The following is an example screen shot of the display from the client, showing an instantaneous PPD of about 770K, which is very impressive. Here, it is folding on a core 21 work unit (Project 14124).

F@H Client 1080

Folding@Home V7 Client – GeForce GTX 1080

MSI Afterburner is a handy way to monitor GPU stats. As you can see, the GPU usage is hovering in the low 80% region (this is typical for GPU folding in Windows. Linux can use a bit more of the GPU for a few percentage points more PPD). This Asus card, with its reference blower cooler, is running a bit warm (just shy of 70 degrees C), but that’s well within spec. I had the power limit at 120%, but the card is nowhere near hitting that…the power limit seems to just peak above 80% here and there.

GTX 1080 MSI Afterburner

GTX 1080 stats while folding.

Measuring card power consumption with the driver shows that it’s using about 150 watts, which seems about right when compared to the GPU usage and power % graphs. 100% GPU usage would be ideal (and would result in a power consumption of about 180 watts, which is the 1080’s TDP).

In terms of card-level efficiency, this is 770,000 PPD / 150 Watts = 5133 PPD/Watt.

Power Draw (at the card)

Nvidia Geforce GTX 1080 – Instantaneous Power Draw @ the Card

Duration Testing

I ran Folding@Home for quite a while on the 1080. As you can see from this plot (courtesy of https://folding.extremeoverclocking.com/), the 1080 is mildly beating the 1070 Ti. It should be noted that the stats for the 1070 Ti are a bit low in the left-hand side of the plot, because folding was interrupted a few times for various reasons (gaming). The 1080 results were uninterrupted.

1080 Production History

Geforce GTX 1080 Production History

Another thing I noticed was the amount of variation in the results. Normal work unit variation (at least for less powerful cards) is around 10-20 percent. For the GTX 1080, I saw swings of 200K PPD, which is closer to 30%. Check out that one point at 875K PPD!

Average PPD: 730K PPD

I averaged the PPD over two weeks on the GTX 1080 and got 730K PPD. Previous testing on the GTX 1070 Ti (based on continual testing without interruptions) showed an average PPD of 700K. Here is the plot from that article, reproduced for convenience.

Nvidia GTX 1070 Ti Time History

Nvidia GTX 1070 Ti Folding@Home Production Time History

I had expected my GTX 1080 to do a bit better than that. However, it only has about 5% more CUDA cores than the GTX 1070 Ti (2560 vs 2432). The GTX 1080’s faster memory also isn’t an advantage in Folding@Home. So, a 30K PPD improvement for the 1080, which corresponds to it being about 4.3% faster, makes sense.
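As a quick sanity check on that reasoning (core counts per Nvidia’s spec sheets, PPD averages from my testing of the two cards):

```python
# Does the 1080's PPD edge over the 1070 Ti track its extra CUDA cores?
cores_1080, cores_1070ti = 2560, 2432
ppd_1080, ppd_1070ti = 730_000, 700_000

core_advantage = (cores_1080 / cores_1070ti - 1) * 100  # ~5.3% more cores
observed_gain = (ppd_1080 / ppd_1070ti - 1) * 100       # ~4.3% more PPD

# The observed gain lands just under the core-count advantage, which is
# consistent with folding performance scaling mostly with shader count.
```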

System Average Power Consumption: 240 Watts @ the Wall

I spot checked the power meter (P3 Kill A Watt) many times over the course of folding. Although it varies with work unit, the system seemed to most commonly draw around 230 watts. Peak observed wattage was 257, and the minimum was around 220. This was more variation than I typically see, but I think it corresponds with the variation in PPD I saw in the performance graph. It was very tempting to just call it 230 watts, but I wasn’t confident that this was accurate. There was just too much variation.

In order to get a better number, I reset the Kill-A-Watt meter (I hadn’t reset it in ages) and let it log the computer’s usage over the weekend. The meter keeps track of the total kilowatt-hours (KWH) of energy consumed, as well as the time period (in hours) of the reading. By dividing the energy by time, we get power. Instead of an instantaneous power (the eyeball method), this is an average power over the weekend, and is thus a compatible number with the average PPD.

The end result of this was 17.39 KWH consumed over 72.5 hours. Thus, the average power consumption of the computer is:

17.39/72.5 (KWH/H) * 1000 (Watts/KW) = about 240 Watts (I round a bit for convenience in reporting, but the Excel sheet that backs up all my plots is exact)

This is a bit more power consumed than the GTX 1070 Ti results, which used an average of 225 watts (admittedly computed by the eyeball method over many days, but there was much less variation so I think it is valid). This increased power consumption of the GTX 1080 vs. the 1070 Ti is also consistent with what people have seen in games. This Legit Reviews article shows an EVGA 1080 using about 30 watts more power than an EVGA 1070 Ti during gaming benchmarks. The power consumption figure is reproduced below:

LegitReviews_power-consumption

Modern Graphics Card Power Consumption. Source: Legit Reviews

This is a very interesting result. Even though the 1080 and the 1070 Ti have the same 180 Watt TDP, the 1080 draws more power, both in folding@home and in gaming.

System Computational Efficiency: 3044 PPD/Watt

For my Asus GeForce GTX 1080, the folding@home efficiency is:

730,000 PPD / 240 Watts = 3044 PPD/Watt.

This is an excellent score. Surprisingly, it is slightly less than my Asus 1070 Ti, which I found to have an efficiency of 3126 PPD/Watt. In practice, these are so close that the difference could just be attributed to work unit variation. The GeForce 1080 and 1070 Ti are both extremely efficient cards, and both are good choices for Folding@Home.

Comparison plots here:

GeForce 1080 PPD Comparison

GeForce GTX 1080 Folding@Home PPD Comparison

GeForce 1080 Efficiency Comparison

GeForce GTX 1080 Folding@Home Efficiency Comparison

Final Thoughts

The GTX 1080 is a great card. With that said, I’m a bit annoyed that my GTX 1080 didn’t hit 800K PPD like some folks in the forums say theirs do (I bet a lot of those people getting 800K PPD use Linux, as it is a bit better than Windows for folding). Still, this is a good result.

Similarly, I’m annoyed that the GTX 1080 didn’t thoroughly beat my 1070 Ti in terms of efficiency. The results are so close, though, that it’s effectively a tie. This is part one of a multi-part review, where I tuned the card for performance. In the next article, I plan to find a better efficiency point for this card by experimenting with reducing the power limit. Right now I’m thinking of running the card at 80% power limit for a week, and then at 60% for another week, and reporting the results. So, stay tuned!