Category Archives: Computer Efficiency

Nvidia GeForce GTX 1070 Ti Folding@Home Review

In an effort to make as much use of the colder months in New England as I can, I’m running Stanford University’s Folding@Home around the clock on my computer to do charitable science for disease research while heating my house. In the last article, I reviewed a slightly older AMD card, the RX 480, to determine its performance and efficiency running Folding@Home. Today, I’ll be taking a look at one of the favorite cards from Nvidia for both folding and gaming: the 1070 Ti.

The GeForce GTX 1070 Ti was released in November 2017, and sits between the 1070 and 1080 in terms of raw performance. As of February 2019, the 1070 Ti can be had at a deep discount on the used market, now that the RTX 20xx series cards have been released. I got my Asus version on eBay for $250.

Based on Nvidia’s 16nm Pascal architecture, the 1070 Ti has 2432 CUDA cores and 8 GB of GDDR5 memory, with a memory bandwidth of 256 GB/s. The base clock rate of the GPU is 1607 MHz, although the cards automatically boost well past the advertised boost clock of 1683 MHz. Thermal Design Power (TDP) is 180 Watts.

The 3rd party Asus card I got is nothing special. It appears to be a dual-slot reference design, and uses a blower cooler to exhaust hot air out the back of the case. It requires one supplemental 8-pin PCI-E Power connection.


ASUS GeForce GTX 1070 Ti

One thing I will note about this card is its length. At 10.5 inches (similar to many high-end Nvidia cards), it can be a bit problematic to fit in some cases. I have a Raidmax Sagitta mid-tower case from way back in 2006, and it fits, but barely. I had the same problem with the EVGA GeForce 1070 I reviewed earlier.


ASUS GTX 1070 Ti – Installed.

Test Environment

Testing was done in Windows 10 on my AMD FX-based system, which is old but holds up pretty well, all things considered. You can read more on that here. The system was built for both performance and efficiency, using AMD’s 8320e processor (a bit less power hungry than the other 8-core FX processors), a Seasonic 650 80+ Gold Power Supply, and 8 GB of low voltage DDR3 memory. The real key here, since I take all my power measurements at the wall with a P3 Kill-A-Watt meter, is that the system is the same for all of my tests.

The Folding@Home Client version is 7.5.1, running a single GPU slot with the following settings:

GPU Slot Options

GPU Slot Options for Maximum PPD

These settings tend to result in slightly higher points per day (PPD), because they request large, advanced work units from Stanford.
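If you prefer editing the client’s config.xml to clicking through the Advanced Control GUI, the slot options look roughly like this. This is a sketch from memory; I’m assuming the two options shown in the screenshot above are client-type and max-packet-size, which are the options that request large, advanced work units, so double check the names against your own install:

```xml
<config>
  <!-- user, team, and passkey options omitted -->
  <slot id='0' type='GPU'>
    <client-type v='advanced'/>   <!-- ask Stanford for advanced work units -->
    <max-packet-size v='big'/>    <!-- allow large work units -->
  </slot>
</config>
```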

Initial Test Results

Initial testing was done on one of the oldest drivers I could find that supports the 1070 Ti (driver version 388.13). The thought here was that older drivers would have fewer gaming optimizations, which tend to hurt performance for compute jobs (unlike AMD, Nvidia doesn’t include a compute mode in their graphics driver settings).

Unfortunately, the best Nvidia driver for the non-Ti GTX 10xx cards (372.90) doesn’t work with the 1070 Ti, because the Ti version came out a few months later than the original cards. So, I was stuck with version 388.13.

Nvidia 1070 TI Baseline Clocks

Nvidia GTX 1070 Ti Monitoring – Baseline Clocks

I ran F@H for three days using the stock core clock of 1823 MHz, with the memory at 3802 MHz. Similar to what I found when testing the 1070, Folding@Home does not trigger the card to go into the high power (max performance) P0 state. Instead, it is stuck in the power-saving P2 state, so the core and memory clocks do not boost.
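You don’t need MSI Afterburner to catch this, by the way. The nvidia-smi tool that ships with Nvidia’s driver reports the performance state directly. Here’s a minimal polling sketch in Python (assuming nvidia-smi is on your PATH); note that the power number it reports is the card’s own draw, not wall power, which is why I still use the Kill-A-Watt for the efficiency math:

```python
import subprocess
import time

# Query fields supported by nvidia-smi: performance state, core clock,
# memory clock, board power draw, and GPU temperature.
QUERY = "pstate,clocks.sm,clocks.mem,power.draw,temperature.gpu"

def poll_gpu(interval_s: float = 10.0) -> None:
    """Print the GPU's P-state, clocks, power, and temperature periodically."""
    while True:
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        # Expect something like: "P2, 1823 MHz, 3802 MHz, 140.52 W, 65"
        print(result.stdout.strip())
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_gpu()
```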

The three-day average while folding at this rate was 632,380 PPD. Checking the Kill-A-Watt meter over the course of those days showed an approximate average system power consumption of 220 watts. Interestingly, this is less power draw than the GTX 1070 (which used 227 watts, although that was with overclocking + the more efficient 372.90 driver). The PPD average was also less than the GTX 1070, which had done about 640,000 PPD. Initial efficiency, in PPD/Watt, was thus 2875 (compared to the GTX 1070’s 2820 PPD/Watt).
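The efficiency metric is nothing fancy: the PPD average divided by the average wall power. Spelled out in Python, with the numbers from this test and from the earlier GTX 1070 review:

```python
def ppd_per_watt(ppd: float, wall_watts: float) -> float:
    """System efficiency: Folding@Home points per day per watt at the wall."""
    return ppd / wall_watts

print(ppd_per_watt(632_380, 220))  # GTX 1070 Ti, stock: ~2875 PPD/Watt
print(ppd_per_watt(640_000, 227))  # GTX 1070, tuned:    ~2820 PPD/Watt
```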

The lower power consumption number and lower PPD performance score were a bit surprising, since the GTX 1070 Ti has 512 more CUDA cores than the GTX 1070. However, in my previous review of the 1070, I had done a lot of optimization work, both with overclocking and with driver tuning. So, now it was time to do the same to the 1070 Ti.

Tuning the Card

By running UNIGINE’s Heaven video game benchmark in windowed mode, I was able to watch what the card did in MSI Afterburner. The core clock boosted up to 1860 MHz (a modest increase from the 1823 MHz baseline), and the memory went up to 4000 MHz (the default). I tried these overclocking settings and saw only a modest increase in PPD numbers. So, I decided to push it further, despite the Asus card having only a reference-style blower cooler. From my 1070 review, I found I was able to fold nice and stable with a core clock of 2012 MHz and a memory clock of 3802 MHz. So, I set up the GTX 1070 Ti with those same settings. After running it for five days, I pushed the core a little higher to 2050 MHz. A few days later, I upgraded the driver to the latest (417.71).
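As an aside for anyone planning to fold in Linux (see Future Work below), the rough equivalent of Afterburner’s sliders is nvidia-settings. This is only a sketch: it assumes Coolbits is enabled in xorg.conf and that performance level 3 is your card’s boost state, both of which vary by setup, so treat the attribute names and the offsets as things to verify rather than gospel:

```python
import subprocess

def set_offsets(core_offset_mhz: int, mem_offset_mhz: int, gpu: int = 0) -> None:
    """Apply core/memory clock offsets via nvidia-settings (Linux only).

    Requires Coolbits enabled in xorg.conf. Note that the memory attribute
    is a transfer-rate offset, i.e. double the memory clock offset.
    """
    for attr, value in [
        (f"[gpu:{gpu}]/GPUGraphicsClockOffset[3]", core_offset_mhz),
        (f"[gpu:{gpu}]/GPUMemoryTransferRateOffset[3]", mem_offset_mhz),
    ]:
        subprocess.run(["nvidia-settings", "-a", f"{attr}={value}"], check=True)

# Illustrative values only -- tune in small steps and watch for stability.
set_offsets(core_offset_mhz=100, mem_offset_mhz=0)
```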

Nvidia 1070 TI OC

Nvidia GTX 1070 Ti Monitoring – Overclocked

With these settings, I did have to increase the fan speed to keep the card below 70 degrees Celsius. Since the Asus card uses a blower cooler, it was a bit loud, but nothing too crazy. Open-air coolers with lots of heat pipes and multiple fans would probably let me push the card higher, but from what I’d read, people start running into stability problems at core clocks over 2100 MHz. Since the goal of Folding@Home is to produce reliable science to help Stanford University fight disease, I didn’t want to risk dropping a work unit due to an unstable overclock.

Here’s the production vs. time history from Stanford’s servers, courtesy of https://folding.extremeoverclocking.com/

Nvidia GTX 1070 Ti Time History

Nvidia GTX1070 Ti Folding@Home Production Time History

As you can see, the overclock helped improve the performance of the GTX 1070 Ti. Using the last five days’ worth of data points (with the graphics driver set to 417.71 and the 2050 MHz core overclock), I got an average of 703,371 PPD with a power consumption at the wall of 225 Watts. This gives an overall system efficiency of 3126 PPD/Watt.

Finally, these results are starting to make more sense. Now, this card is outpacing the GTX 1070 in terms of both PPD and energy efficiency. However, the gain in performance isn’t enough to confidently say the card is doing better, since there is typically a +/- 10% PPD difference depending on what work unit the computer receives. This is clear from the amount of variability, or “hash”, in the time history plot.

Interestingly, the GTX 1070 Ti is still using about the same amount of power as the base model GTX 1070, even though the base 1070 has a Thermal Design Power of 150 Watts, compared to the GTX 1070 Ti’s TDP of 180 Watts. So, why isn’t my system consuming 30 watts more at the wall than it did when equipped with the base 1070?

I suspect the issue here is that the drivers available for the 1070 Ti are not as good for folding as the 372.90 driver for the non-Ti 10-series Nvidia cards. As you can see from the MSI Afterburner screen shots above, GPU Usage on the GTX 1070 Ti during folding hovers in the 80-90% range, which is lower than the 85-93% range seen when using the non-Ti GTX 1070. In short, folding on the 1070 Ti seems to be a bit handicapped by the drivers available in Windows.

Comparison to Similar Cards

Here are the Production and Efficiency Plots for comparison to other cards I’ve tested.

GTX 1070 Ti Performance Comparison

GTX 1070 Ti Performance Comparison

GTX 1070 Ti Efficiency Comparison

GTX 1070 Ti Efficiency Comparison

Conclusion

The Nvidia GTX 1070 Ti is a very good graphics card for running Folding@Home. With an average of 703K PPD and a system efficiency of 3126 PPD/Watt, it is the fastest and most efficient graphics card I’ve tested so far. As far as maximizing the amount of science done per unit of electricity consumed, this card continues the trend…higher-end video cards are more efficient, despite the increased power draw.

One side note about the GTX 1070 Ti is that the drivers don’t seem as optimized as they could be. This is a known problem for running Folding@Home in Windows, but since the proven Nvidia driver 372.90 is not available for the Ti-flavor of the 1070, the hit here is larger than normal. On the used market in 2019, you can get a GTX 1070 for $200 on eBay, whereas GTX 1070 Tis go for $250. My opinion is that if you’re going to fold in Windows, a tuned GTX 1070 running the 372.90 driver is the way to go.

Future Work

To fully unlock the capability of the GTX 1070 Ti, I realized I’m going to have to switch operating systems. Stay tuned for a follow-up article in Linux.


Folding@Home Efficiency vs. GPU Power Limit

Folding@Home: The Need for Efficiency

Distributed computing projects like Stanford University’s Folding@Home sometimes get a bad rap on account of all the power that is consumed in the name of science.  Critics argue that any potential gains that are made in the area of disease research are offset by the environmental damage caused by thousands of computers sucking down electricity.

This blog hopes to find a balance by optimizing the way the computational research is done. In this article, I’m going to show how a simple setting in the graphics card driver can improve Folding@Home’s Energy Efficiency.

This article uses an Nvidia graphics card, but the general idea should also work with AMD cards. The specific card here is an EVGA GeForce GTX 1060 (6 GB).  Green F@H Review here: Folding on the NVidia GTX 1060

If you are folding on a CPU, similar efficiency improvements can be achieved by optimizing the clock frequencies and voltages in the BIOS.  For an example on how to do this, see these posts:

F@H Efficiency: AMD Phenom X6 1100T

F@H Efficiency: Overclock or Undervolt?

(at this point in time I really just recommend folding on a GPU for optimum production and efficiency)

GPU Power Limit Overview

The GPU power limit slider is a quick way to control how much power the graphics card is allowed to draw. Typically, graphics cards are optimized for speed, with efficiency as a secondary goal (if it is considered at all). When a graphics card is pushed harder, it will draw more power (until it runs into the power limit). Today’s graphics cards will also boost their clock rate when loaded, and reduce it when the load goes away. Sometimes, a few extra MHz can be had for minimal extra power, but push too far and the power needed to drive the card grows much faster than the performance gained. Sure, the card is doing a bit more work (or playing a game a bit faster), but the heaps of extra power needed to do this make it very inefficient.

What I’m going to quickly show is that going the other way (reducing power) can actually improve efficiency, albeit at a reduction in raw output. For this quick test, I’m just going to look at the default power limit, 100%, vs. 50%. Specific tuning is going to depend on your actual graphics card. But, with a few days at different settings, you should be able to find a happy balance between performance and efficiency.

For these plots, I used my watt meter to obtain actual power consumption at the wall. You can read about my watt meters here.

Changing the Power Limit

A tool such as MSI Afterburner can be used to view the graphics card’s settings, including the power limit. In the screenshot below, I reduced the card’s power limit to 50% midway through taking data. You can clearly see the power consumption and GPU temperature drop. This suggests the entire computer should be drawing less power from the wall, and I confirmed this with my watt meter.
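If you’d rather script this than click sliders (or you’re running a headless folding box), nvidia-smi can set the same limit, in watts rather than percent. A sketch in Python, assuming the GTX 1060’s reference 120 W TDP (so 50% is roughly 60 W); nvidia-smi enforces a per-card minimum and maximum, so query the allowed range first:

```python
import subprocess

def nvsmi(*args: str) -> str:
    """Run nvidia-smi with the given arguments and return its output."""
    result = subprocess.run(["nvidia-smi", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Show the card's power readings and allowed power-limit range.
print(nvsmi("-q", "-d", "POWER"))

# Set the limit in watts (needs admin/root). 60 W is ~50% of the reference
# GTX 1060's 120 W TDP -- adjust for your card, and stay inside the range
# reported by the query above.
print(nvsmi("-pl", "60"))
```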

Adjust Power Limit MSI Afterburner

MSI Afterburner is used to reduce the graphics card’s power limit.

Effect on Results

I ran the card for multiple days at each power setting and used Stanford’s actual stats to generate an averaged number for PPD. Reporting an average number like this lends more confidence that the results are real, since PPD as reported in the client varies a lot with time, and PPD can bounce around by +/- 10 percent with different projects.

Below is the production time history plot, courtesy of https://folding.extremeoverclocking.com/. I marked on the plot the actual power consumption numbers I was seeing from my computer at the wall. As you can see, reducing the power limit on the 1060 from 100% to 50% saved about 40 watts of power at the wall.

GTX 1060 F@H Reduced Power Limit Production

GTX 1060 Folding@Home Performance at 100% and 50% Power

On the efficiency plot, you can see that reducing the power limit on the 1060 actually improved its efficiency slightly. This is an easy way to squeeze more science out of every watt.

Nvidia 1060 PPD per Watt Updated

NVidia GTX 1060 Folding@Home Efficiency Results

There is a downside of course, and that is in raw production. The Points Per Day plot below shows a pretty big reduction in PPD for the reduced-power 1060, although it is still beating its little brother, the 1050 Ti. One of the reasons PPD falls off so hard is that Stanford provides bonus points that are tied to how fast your computer can return a work unit. These points grow much faster than linearly as return times shrink. So, by slowing the card down, we not only lose on base points, but we lose on the Quick Return Bonus as well.
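To put some numbers behind that: as I understand it, the Quick Return Bonus multiplies a work unit’s base credit by max(1, sqrt(k × deadline / elapsed)), where k is a per-project constant. Take the exact formula as my assumption, but the shape is right: since PPD is credit divided by return time, PPD ends up scaling roughly with speed to the 1.5 power. A quick sketch with made-up project numbers:

```python
import math

def wu_credit(base: float, k: float, deadline_days: float, elapsed_days: float) -> float:
    """Work-unit credit with the Quick Return Bonus (formula as assumed above)."""
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base * max(1.0, bonus)

def ppd(base: float, k: float, deadline_days: float, elapsed_days: float) -> float:
    """Points per day: credit earned divided by days taken."""
    return wu_credit(base, k, deadline_days, elapsed_days) / elapsed_days

# Made-up project: 5000 base points, k = 0.75, 10-day deadline.
fast = ppd(5000, 0.75, 10, elapsed_days=0.5)   # card at full power
slow = ppd(5000, 0.75, 10, elapsed_days=1.0)   # card at half speed
print(f"{fast:.0f} vs {slow:.0f} PPD -> ratio {slow / fast:.2f}")  # ~0.35
```

In other words, in this toy example, halving the card’s speed costs about 65% of the PPD, not 50%, which matches the steep drop in the plot below.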

Nvidia 1060 PPD Updated

NVidia GTX 1060 Folding@Home Performance Results

Conclusion

Reducing the power limit on a graphics card can increase its computational energy efficiency in Folding@Home, although at the cost of raw PPD. There is probably a sweet spot for efficiency vs. performance at some power setting between 50% and 100%. This will likely be different for each graphics card. The process outlined above can be used for various power limit settings to find the best efficiency point.


Squeezing a few more PPD out of the FX-8320E

In the last post, the 8-core AMD FX-8320E was compared against the AMD Radeon 7970 in terms of both raw Folding@home computational performance and efficiency.  It lost, although it is the best processor I’ve tested so far.  It also turns out to be a very stable chip for overclocking.

Typical CPU overclocking focuses on raw performance only, and involves upping the clock frequency of the chip as well as the supplied voltage.  When tuning for efficiency, the goal is to do more work for the same (or less) power.  With that in mind, I increased the clock rate of my FX-8320E without adjusting the voltage, to try to find an improved efficiency point.
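Put another way, a tuning change is an efficiency win whenever the relative gain in PPD beats the relative gain in wall power. Here’s that check as a trivial Python helper, with deliberately made-up numbers (the real FX-8320E results are in the table below):

```python
def efficiency_gain(ppd_before: float, watts_before: float,
                    ppd_after: float, watts_after: float) -> float:
    """Ratio of PPD/Watt after a tuning change to PPD/Watt before.

    Anything above 1.0 means efficiency improved, even if total power rose.
    """
    return (ppd_after / watts_after) / (ppd_before / watts_before)

# Made-up example: +10% PPD for +5% power is still an efficiency win.
print(efficiency_gain(ppd_before=10_000, watts_before=100,
                      ppd_after=11_000, watts_after=105))  # ~1.05
```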

Overclocking Results

My FX-8320E proved to be very stable at stock voltage at frequencies up to 3.6 GHz.  By very stable, I mean running Folding@home at max load on all CPUs for over 24 hours with no crashes, while also using the computer for daily tasks.   This is a 400 MHz increase over the stock clock rate of 3.2 GHz.  As expected, F@H production went up a noticeable amount (over 3000 PPD).  Power consumption also increased slightly.  It turns out the efficiency was also slightly higher (190 PPD/watt vs. 185 PPD/watt).  So, overclocking was a success on all fronts.

FX 8320e overclock PPD

FX 8320e overclock efficiency

Folding Stats Table FX-8320e OC

Conclusion

As demonstrated with the AMD FX-8320e, mild overclocking can be a good way to earn more Points Per Day at a similar or greater efficiency than the stock clock rate.  Small tweaks like this to Folding@home systems, if applied everywhere, could result in more disease research being done more efficiently.

F@H Efficiency on Dell Inspiron 1545 Laptop

Laptops!  

When browsing internet forums looking for questions that people ask about F@H, I often see people asking if it is worth folding on laptops (note that I am talking about normal, battery-life optimized laptops, not Alienware gaming laptops / desktop replacements).  In general, the consensus from the community is that folding on laptops is a waste of time.  Well, that is true from a raw performance perspective.  Laptops, tablets, and other mobile devices are not the way to rise to the top of the Folding at Home leader boards.  They’re just too slow, due to the reduced clock speeds and voltages employed to maximize battery life.

But wait, didn’t you say that low voltage is good for efficiency?

I did, in the last article.  By undervolting and slightly underclocking the Phenom II X6 in a desktop computer, I was able to get close to 90 PPD/Watt while still doing an impressive twelve thousand PPD.

However, this raised the interesting question of what would happen if someone tried to fold on a computer that was optimized for low voltage, such as a laptop.  Let’s find out!

Dell Inspiron 1545

Specs:

  • Intel T9600 Core 2 Duo
  • 8 GB DDR2 Ram
  • 250 GB spinning disk style HDD (5400 RPM, slow as molasses)
  • Intel Integrated HD Graphics (horrible for gaming, great for not using much extra electricity)
  • LCD off during test to reduce power

I did this test on my Dell Inspiron 1545, because it is what I had lying around.  It’s an older laptop that originally shipped with a slow socket P Intel Pentium dual core.  This 2.1 GHz chip was going to be so slow at folding that I decided to splurge and pick up a 2.8 GHz T9600 Core 2 Duo from eBay for 25 bucks (can you believe this processor used to cost $400?).  This high end laptop processor has the same 35 watt TDP as the Pentium it is replacing, but has 6 times the total cache.  This is a dual core part that is roughly similar in architecture to the Q6600 I tested earlier, so one would expect the PPD and the efficiency to be close to the Q6600 when running on only 2 cores (albeit a bit higher due to the T9600’s higher clock speed).  I didn’t bother doing a test with the old laptop processor, because it would have been pretty bad (same power consumption but much slower).

After upgrading the processor (rather easy on this model of laptop, since there is a rear access panel that lets you get at everything), I ran this test in Windows 7 using the V7 client.  My computer picked up a nice A4 work unit and started munching away.  I made sure to use my passkey to ensure I get the quick return bonus.

Results:

The Intel T9600 laptop processor produced slightly more PPD than the similar Q6600 desktop processor when running on 2 cores (2235 PPD vs. 1960 PPD). This is a decent production rate for a dual core, but it pales in comparison to the 6000 PPD of the Q6600 running with all 4 cores, or newer processors such as the AMD 1100T (over 12K PPD).

However, from an efficiency standpoint, the T9600 Core 2 Duo blows away the desktop Core 2 Quad, as seen in the chart and graph below.

Intel T9600 Folding@Home Efficiency

Intel T9600 Folding@Home Efficiency

Intel T9600 Folding@Home Efficiency vs. Intel Desktop Processors

Intel T9600 Folding@Home Efficiency vs. Desktop Processors

Conclusion

So, the people who say that laptops are slow are correct.  Compared to all the crazy desktop processors out there, a little dual core in a laptop isn’t going to do very many points per day.  Even modern quad cores laptops are fairly tame compared to their desktop brethren.  However, the efficiency numbers tell a different story.

Because everything in the laptop (motherboard, video card, audio circuit, hard drive, and processor) is optimized for low voltage, the total system power consumption was only 39 watts (with the lid closed).  This meant that the 2235 PPD was enough to earn an efficiency score of 57.29 PPD/Watt.  This number beats all of the efficiency numbers from the most similar desktop processor tested so far (the Q6600), even when the Q6600 is using all four cores.

So, laptops can be efficient F@H computers, even though they are not good at raw PPD production.  It should also be noted that during this experiment the little T9600 processor heated up to a whopping 67 degrees Celsius. That’s really warm compared to the 40 degrees Celsius the Q6600 runs at in the desktop.  Over time, that heat load would probably break my poor laptop and give me an excuse to get that Alienware I’ve been wanting.

F@H Efficiency: AMD Phenom X6 1100T

Welcome back to the fold!  In the last post, I showed how increasing the # of CPU cores has a massive positive impact on the amount of cancer-fighting research your computer does, as well as how efficiently it does it.  In stock form, the quad core Intel Q6600 delivered just shy of 6000 points per day of F@H with all 4 cores engaged.  My computer’s total power draw at the wall was 169 watts.  So, that works out to be 6000 PPD / 169 Watts = 35.5 PPD/Watt.  Not too bad, considering the horrible efficiency numbers of the uniprocessor client.

In this article, I’m jumping forward in time to a more modern processor: the AMD Phenom II X6 1100T.  This six-core beast is the last of the true core-for-core chips from AMD (Bulldozer and newer designs pair two integer cores with a single shared floating point unit per module).  With 6 physical floating point cores, the AMD 1100T should be good at folding.

Note that I am obviously using a completely different computer setup here than in the last post (I have an AMD machine and an Intel machine).  So, the efficiency numbers aren’t a perfect apples-to-apples comparison, due to the different supporting parts in both computers.  However, the difference between processors is so large that the differences in the host computers really don’t matter.  The newer AMD chip is much better, and that is what is driving the results!

Test Rig Specs:

  • AMD Phenom II X6 1100T
  • Gigabyte GA-880GMA-USB3 Micro ATX Motherboard
  • 8 GB Kingston ValueRam DDR3 1333 MHz (4 x 2GB)
  • Seasonic S12 II 380W 80+ PSU
  • Hitachi 80 GB SATA Hard Drive
  • Linkworld MicroATX Case
  • Fans: 2 x 80mm side intake, 1 x 80mm front intake, 1 x 92mm exhaust
  • Noctua NH-C12P SE14 140mm SSO CPU Cooler

A note about the operating system…

The previous tests on my Intel Q6600 were performed using Windows 7 with the V7 folding client.  Due to Windows costing money, I used Ubuntu Linux on my AMD system with the V7 folding client.  Linux is a bit more capable of maxing out a PC’s hardware than Windows, so the resulting PPD numbers are likely slightly higher than they would be had the machine been running Windows.  However, the difference is typically small (5 percent or so).  Note that over time, this performance bonus can really add up.  This is why Linux is the preferred operating system for many dedicated Folding at Home users.

AMD Folding Rig - Phenom II X6 Configuration

AMD Folding Rig – Phenom II X6 Configuration

Test Results

AMD Phenom II X6 1100T Folding at Home Performance and Efficiency

AMD Phenom II X6 1100T Folding at Home Performance and Efficiency

AMD 1100T 6-core CPU pushes the efficiency curve further

AMD 1100T 6-core CPU pushes the efficiency curve further

As expected, the 6-core 1100T is a performer when it comes to F@H.  Producing just shy of 13,000 Points Per Day with a total system power draw of 185 watts, this setup has an efficiency of 67 PPD/Watt.  This is almost twice that of the older Intel quad-cores.  Note that I am not Intel-bashing here…if you do some Google searching, you will likely see that the new Intel Core i5s and i7s do even better in both raw PPD and PPD/Watt than the AMD 1100T.  The moral of the story is that you should try to set up your folding rig with the most powerful, latest-generation processor you can.  I recommend upgrading at least once a year to keep improving the performance and efficiency of your F@H contributions.  Don’t be that guy running an old-school Athlon X2 generating 300 points per day (while using 150 watts to do it).

Folding at Home CPU Efficiency: Multi-Core Intel Q6600

In the last post, I showed how environmentally unfriendly it is to run just the uniprocessor client.  In this post, I’ll finish off the study of # of CPU cores vs. folding efficiency.  As it turns out, you can virtually double your Folding@Home efficiency when you double the number of CPU cores you are running with. Using the same Intel Q6600 as before, I told the Folding at Home client to ramp up and use three cores.  Then, once I had some data, I switched it to four-core folding.  With the CPU fully engaged, my computer became a bit slow to use, but that’s not a problem, since what we are all about here is dedicated F@H rigs (the only way to fold efficiently is to fold 100%).   If I want to use my computer, I’ll stop the folding to do so, then start it up later.

Here are the results of the 1 through 4 core F@H PPD experiment!

Q6600_Efficiency

As you can see, both performance (PPD) and energy efficiency (technically efficacy, in PPD/Watt) scale with the # of CPU cores being used.  Yes, the system does use more total electricity when more cores are engaged (169 watts vs. 142), but the amount of work being done per day far outpaces the slight increase in power consumption.  In graph form:

Intel Q6600 Folding@Home Points Per Day / Watt Graph

Intel Q6600 Folding at Home Efficiency Graph

Intel Q6600 Folding at Home Efficiency Graph
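The reason efficiency scales like this is that the rest of the computer (motherboard, drives, fans, PSU overhead) draws its baseline power whether or not it folds, so each extra core buys a lot of PPD for comparatively few watts. A toy model in Python makes the point; the numbers are made up to roughly resemble this Q6600 system, and the Quick Return Bonus makes the real scaling even steeper than this linear sketch:

```python
# Toy model: fixed system baseline plus a per-core increment, with PPD
# assumed linear in core count (the QRB makes it better than linear).
BASELINE_W = 120.0      # idle-ish draw: motherboard, drives, fans, PSU loss
WATTS_PER_CORE = 12.0   # extra wall power per fully loaded core (assumed)
PPD_PER_CORE = 1500.0   # folding output per core (assumed)

for cores in range(1, 5):
    watts = BASELINE_W + cores * WATTS_PER_CORE
    points = cores * PPD_PER_CORE
    print(f"{cores} core(s): {points:5.0f} PPD / {watts:5.1f} W "
          f"= {points / watts:4.1f} PPD/Watt")
```

Even with PPD assumed perfectly linear, efficiency roughly triples from one core to four, simply because the baseline draw gets amortized over more work.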

In conclusion, it makes the most sense from a performance and efficiency standpoint to use as much of your CPU as you can.  In the next post, I’ll look at a few more powerful CPU-based folding@home systems.

PPD/Watt Shootout: Uniprocessor Client is a Bad Idea

My Gaming / Folding computer with Q6600 / GTX 460 Installed

My Gaming / Folding computer with Q6600 / GTX 460 Installed

Since the dawn of Folding@Home, Stanford’s single-threaded CPU client known as “uniprocessor” has been the standard choice for stable folding@home installations.  For people who don’t want to tinker with many settings, and for people who don’t plan on running 24/7, this has been a good choice of clients because it allows a small science contribution to be done without very much hassle.  It’s a fairly invisible program that runs in the background and doesn’t spin up all your computer’s fans and heat up your room.  But, is it really efficient?  

The question, more specifically targeted for folding freaks reading this blog, is this:  Does the uniprocessor client make sense for an efficient 24/7 folding@home rig?  My answer:  a resounding NO!  Kill that process immediately!

A basic Google search on this will show that you can get vastly more points per day running the multicore client (SMP), a dedicated graphics card client (GPU), or both.  Just type “PPD Uniprocessor SMP Folding” into Google and read for about 20 minutes and you’ll get the idea.  I’m too lazy to point to any specific threads (no pun intended), but the various forum discussions reveal that the uniprocessor client is slower than slow.  This should not be surprising.  One CPU core is slower than two, which is slower than three!  Yay, math!

Also, Stanford’s point reward system isn’t linear.  If you return a work unit twice as fast, you get more than twice as many points as a reward, because prompt results are very valuable in the scientific world.  This bonus is known as the Quick Return Bonus, and it is available to users running with a passkey (a long auto-generated password that proves you are who you say you are to Stanford’s servers).  I won’t regurgitate all that info on passkeys and points here, because if you are reading this site then you most likely know it already.  If not, start by downloading Stanford’s latest all-in-one client, known as Client V7.  Make sure you set yourself up with a username as well as a passkey, if you don’t already have one.  Once you return 10 successful work units using your passkey, you can get the extra QRB points.  For the record, this is the setup I am using for this blog at the moment: V7 Client Version 7.3.6, running with a passkey.

Unlike the older 6.x client interfaces, the new V7 client lets you pick the specific work package type you want within one program.  “Uniprocessor” is no longer a separate installation; it is selectable by adding a CPU slot within the V7 client and telling it how many threads to run.  V7 then downloads the correct work unit to munch on.
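For reference, the same thing can be done by editing the client’s config.xml by hand. A sketch from memory (the option names are as I recall them, so verify against your own install); a single CPU slot folding on two threads might look like:

```xml
<config>
  <user v='YourUserName'/>
  <team v='0'/>
  <passkey v='your_passkey_here'/>
  <!-- One CPU slot; cpus controls how many threads it folds with. -->
  <slot id='0' type='CPU'>
    <cpus v='2'/>
  </slot>
</config>
```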

But wait, wasn’t I supposed to be talking about efficiency?  Well, to that end, what we want to do is maximize the F@H output relative to the input.  We want to make as many Points per Day as possible while drawing as few watts from the wall as we can.  It should be clear by now where this is going (I hope).  Because Stanford’s points system heavily favors the fast return of work units, it is often the case that the PPD/Watt increases as more and more CPU cores or GPU shaders are engaged, even though the resulting power draw of the computer increases.

Limiting ourselves to CPU-only folding for the moment, let’s have a look at what one of my Folding@Home rigs can do.  It’s Specs Time (Yay SPECS!). Here are the specs of my beloved gaming computer, known as Sagitta (outdated picture was up at the top).

  • Intel Q6600 Quad Core CPU @ 2.4 GHz
  • Gigabyte AMD Radeon HD 7870 Gigahertz Edition
  • 8 GB Kingston DDR2-800 Ram
  • Gigabyte 965-P S3 motherboard
  • Seasonic X-650 80+ Gold PSU
  • 2 x 500 GB Western Digital HDDs RAID-1
  • 2 x 120 MM Intake Fans
  • 1 x 120 MM Exhaust Fan
  • 1 x 80 MM Exhaust Fan
  • Arctic Cooling Freezer 7 CPU Cooler
  • Generic PCI Slot centrifugal exhaust fan
Ancient Pic of Sagitta (2006 Vintage).  I really need to take a new pic of the current configuration.

Ancient Pic of Sagitta (2006 Vintage). I really need to take a new pic of the current configuration.

You’ll probably say right away that this system, except for the graphics card, is pretty out of date for 2014, but for relative A-to-B comparisons within the V7 client this doesn’t matter.  For new i7 CPUs, the relative performance and efficiency differences seen by increasing the number of CPU cores for folding reveal the same trend as will be shown here.  I’ll start by just looking at the 1-core option (uniprocessor) vs. a dual-core F@H solve.

Uniprocessor Is Slow

As you can see, switching to a 2-CPU solve within the V7 client yields almost twice the efficiency (12.11 vs. 6.82 PPD/Watt).  And this isn’t even a fair comparison, because the dual-core work unit I received was one of the older A3 cores, which tend to produce less PPD than the A4 work units.

In conclusion, if everyone who is out there running the uniprocessor client switched to a dual-core client, FOLDING AT HOME WOULD BECOME TWICE AS EFFICIENT!  I can’t scream this loud enough.  Part of the reason is that it doesn’t take many more watts to feed another core in a computer that is already fired up and folding.  In the above example, we got twice the amount of work done for only 13 more watts of power consumed.  THIS IS AWESOME, and it is just the beginning.  In the next article, I’ll look at the efficiency of 3- and 4-CPU folding on the Q6600, as well as 6-CPU folding on my other computer, which is powered by a newer processor (the AMD Phenom II X6 1100T). I’ll then move on to dual-CPU systems (non-BIGADV at this point, for those of you who know what that means, but we will get there too), and to graphics cards.  If you think 12 PPD/Watt is good, just wait until you read the next article!

Until next time…

-C