
AMD Ryzen 9 3950X Folding@Home Review: Part 1: PPD vs # of Threads

Welcome back everyone. Over the last month, I’ve been experimenting with my new Folding@Home benchmark machine to see how effectively AMD’s flagship Ryzen processor (the Ryzen 9 3950X) can fight diseases such as COVID-19, cancer, and Alzheimer’s. I’ve been running Folding@Home, a charitable distributed computing project that provides scientists with valuable computing resources to study diseases and learn how to combat them.

This blog is typically focused on energy efficiency, where I try to show how to do the most science for the least amount of power consumption possible. In this post, I’m stepping away from that (at least for now) in order to understand something much simpler: how does the Folding@Home CPU client scale with # of processor threads?

I’d previously investigated Folding@Home performance and efficiency vs. # of CPU cores on an old Intel Q6600. I’ve also done a few CPU articles on AMD’s venerable Phenom II X6 1100T and my previous processor, the AMD FX-8320e. These CPU articles were few and far between, however, as I typically focus on graphics cards (GPUs). The reason is twofold. Historically, graphics cards have produced many more points per day (PPD) for a given amount of power, thanks to their massively parallel architecture, which is well suited to the single-precision molecular dynamics problems that Folding@Home runs. Graphics cards are also much easier to swap out, so it was relatively easy to build a large database of GPU performance and efficiency.

Still, CPU folding is just as important, because there are certain classes of problems that can only be efficiently computed on the CPU. Folding@Home, while originally a project that ran exclusively on CPUs, obtains the bulk of its computational power from GPU donors these days. However, the CPU folders still play a key part, running work units that cannot be solved on GPUs, thus providing a complete picture of the molecular dynamics.

In my last article, I highlighted the need to build a new benchmark machine for testing out GPUs, since my old rig would soon become a bottleneck and slow the GPUs down (thus potentially skewing any comparison plots I make). Now that this Ryzen-based 16-core monster of a desktop is complete, I figured I’d revisit CPU folding once more to see just what a modern enthusiast-class processor like the $749 Ryzen 9 3950X is capable of. For this first part of a multi-part review, I am simply looking at the preliminary results from running Folding@Home on the CPU. Instead of running with the default thread settings, I manually set up the client, examining how performance scales from 1 thread all the way up to the 32 available threads on the Ryzen 9 3950X.

Test Setup

Testing was performed in Windows 10 Home, using the latest Folding@Home client (7.6.13). Points Per Day were estimated from the client window for each setting of # of CPU threads. These instantaneous estimates have a lot of variability, so future testing will investigate the effect of averaging (running multiple tests at each setting) on the results.
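
For reference, here is roughly what a manually configured CPU slot looks like in the v7 client’s config.xml. This is a minimal sketch only; the exact file location and the other options surrounding the slot entry depend on your install, so treat it as illustrative rather than a drop-in config.

```xml
<config>
  <!-- A single CPU folding slot pinned to a specific thread count.       -->
  <!-- Change the 'cpus' value (e.g. 1 through 32 on the 3950X) and       -->
  <!-- restart the client between test points to sweep the thread count.  -->
  <slot id='0' type='CPU'>
    <cpus v='16'/>
  </slot>
</config>
```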

Benchmark Machine Hardware:

Case: Raidmax Sagitta (2006)
Power Supply: Seasonic Prime 750 W Titanium
Fresh Air: 2 x 120 mm Enermax (front intake)
Rear Exhaust: 1 x 120 mm Scythe Gentle Typhoon
Side Exhaust: 1 x 80 mm Noctua
Top Exhaust: 1 x 120 mm (Seasonic PSU)
CPU Cooler: Noctua NH-D15 SE-AM4
Thermal Paste: Arctic MX-4
CPU: AMD Ryzen 9 3950X, 16 cores / 32 threads (105 W TDP)
Motherboard: ASUS Prime X570-P, Socket AM4
Memory: 32 GB (4 x 8 GB) Corsair Vengeance LPX DDR4-3600
GPU: Zotac Nvidia GeForce GTX 1650
OS Drive: Samsung 970 Evo Plus 512 GB NVMe SSD
Storage #1: Samsung 860 Evo 2 TB SSD
Storage #2: Western Digital Blue 256 GB NVMe SSD (for Linux)
Optical: Samsung SH-B123L Blu-ray Drive
OS: Windows 10 Home, Ubuntu Linux (on the second NVMe drive)

Processor Settings:

The AMD Ryzen 9 3950X is a beast. With 16 cores and 32 threads, it has a nominal power consumption of 105 watts, but can easily double that when overclocked. With the factory Core Performance Boost (CPB) enabled, the processor routinely draws 150+ watts when loaded, due to individual cores boosting as high as 4.7 GHz, up from the 3.5 GHz base clock. Under heavy multi-threaded workloads, the processor supports an all-core overclock of up to 4.3 GHz, assuming sufficient cooling and motherboard power delivery.

This automatic boost behavior is problematic for creating a plot of Folding@Home performance (PPD) vs. # of threads, since for lightly threaded loads, the processor will scale individual cores up to much higher speeds. In order to make an apples-to-apples comparison, I disabled CPB, so that all CPU cores run at the base speed of 3.5 GHz when loaded. In future testing, I will repeat this study with CPB on in order to see the effect of the factory automatic overclocking.

A note about Cores vs. Threads

Like many Intel processors with Hyper-Threading, AMD supports running multiple instruction streams (known as threads) on one CPU core. Simultaneous Multi-Threading (SMT) is simply AMD’s term for the same thing on the Ryzen 9 3950X: duplicating certain parts of each processor core (and virtualizing the rest) so that two threads can execute per core. The historical problem with both Hyper-Threading and SMT is that they do not actually double a CPU core’s capacity to perform complex floating-point math, since there is only one FPU per CPU core. SMT and Hyper-Threading work best when one large job is hogging part of a core and a smaller job can execute on the core’s remaining resources as a second thread. Two equally intensive threads can end up competing for resources within a core, making the SMT-enabled processor actually slower. For example: https://www.techspot.com/review/1882-ryzen-9-smt-on-vs-off/
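
If you want to see how your own operating system reports this, here is a minimal Python sketch (using the third-party psutil package) that distinguishes physical cores from hardware threads:

```python
# Quick check of physical cores vs. hardware threads (SMT) on the host.
# Minimal sketch; psutil is a third-party package (pip install psutil).
import psutil

physical = psutil.cpu_count(logical=False)  # e.g. 16 on a Ryzen 9 3950X
logical = psutil.cpu_count(logical=True)    # e.g. 32 with SMT enabled

print(f"Physical cores: {physical}, hardware threads: {logical}")
print(f"SMT appears to be {'on' if logical and physical and logical > physical else 'off'}")
```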

For the purposes of this article, I left SMT on in order to make the coolest plot possible (1-32 threads!). However, I suspect that SMT might actually hurt Folding@Home performance, for the reasons mentioned above. Thus in future testing, I will also try disabling this to see the effect.

Preliminary Results: PPD vs # of Threads on Ryzen 9 3950X

So, to summarize the caveats: this test was performed once at each thread-count setting, so there is a single data point for each of the 32 settings. SMT was on (so Folding@Home can run two threads on one CPU core). CPB was off (all cores capped at 3.5 GHz).

The figure below shows the results. As you can see, there is a general trend of increasing performance with # of threads, up to around the halfway point. Then, the trend appears to get messy, although by the end of the plot, it is clear that the higher thread counts realize a higher PPD.

Ryzen 9 3950X PPD vs Thread Count 1

Observations

It is clear that, at least initially, adding threads makes a fairly linear improvement in points per day. Eventually, however, the physical CPU cores likely become saturated, and more of the work is being executed via SMT. Due to the significant work unit variability in Folding@Home (as much as 10-20% between molecules), these results should be taken with a grain of salt. I am currently re-running all of these tests so that I can show a plot of average PPD vs. # of threads. I am also logging power using my watt meter, so that we can make wall power consumption and efficiency plots.
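
For the curious, the Part II post-processing will look something like the sketch below: average several instantaneous PPD readings per thread-count setting and divide by the logged wall power to get PPD/Watt. The CSV file name and columns here are hypothetical, standing in for my hand-logged notes.

```python
# Minimal sketch of the planned Part II post-processing: average several PPD
# estimates per thread-count setting and compute PPD/Watt efficiency.
# Assumes a hand-logged CSV with (hypothetical) columns: threads, ppd, watts
import csv
from collections import defaultdict

ppd_samples = defaultdict(list)
watt_samples = defaultdict(list)

with open("ppd_vs_threads.csv", newline="") as f:
    for row in csv.DictReader(f):
        threads = int(row["threads"])
        ppd_samples[threads].append(float(row["ppd"]))
        watt_samples[threads].append(float(row["watts"]))

for threads in sorted(ppd_samples):
    avg_ppd = sum(ppd_samples[threads]) / len(ppd_samples[threads])
    avg_watts = sum(watt_samples[threads]) / len(watt_samples[threads])
    print(f"{threads:2d} threads: {avg_ppd:9.0f} PPD, "
          f"{avg_watts:5.1f} W, {avg_ppd / avg_watts:7.0f} PPD/W")
```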

Conclusions

Seeing a processor produce nearly half a million points per day in Folding@Home was insane! My previous testing with old 4, 6, and 8-core processors was lucky to show numbers over 20K PPD. In general, allowing Folding@Home to use more processor threads increases performance, but there is significant additional work needed to verify a statistical trend. Stay tuned for Part II (averaging).

P.S.

Man, that’s a lot of cores! You’d better be scared, COVID-19…I’m coming for you!

Cores!

So Many Cores!

New Folding@Home Benchmark Machine: It’s RYZEN TIME!

Folding@Home, the distributed computing project that fights diseases such as COVID-19 and cancer, has hit an all-time high in popularity. I’m stunned to find that my blog is now getting more views every day than it did every month last year. With that said, this is a perfect opportunity to reach out and see if all the new donors are interested in tuning their computers for efficiency, to save a little on power, lighten the burden on your wallet, and hopefully produce nearly the same amount of science. If this sounds interesting to you, let me know in the comments below!

In my last post, I noted that the latest generation of graphics cards are starting to push the limits of what my primary GPU Folding@Home benchmark rig can do. That computer is based on an 11-year-old chipset (AMD 880), and only supports PCI-Express 2.0. In order for me to keep testing modern fast graphics cards in Windows 10, I wanted to make sure that PCI-Express slot bandwidth wasn’t going to artificially bottleneck me.

So, without further ado, let me present the new, re-built Folding@Home rig, SAGITTA:

Sagitta Desktop

I’ve (re)created a monster!

This build leverages the Raidmax Sagitta case that I’ve had since 2006. This machine has hosted multiple builds (Pentium D 805, Core 2 Duo E8600, Core 2 Quad Q6600, Phenom II X6 1100T, and most recently the FX-8320e Bulldozer). There have been too many graphics cards to count, but the latest one (an Nvidia GTX 1650 by Zotac) was carried over for some continuity testing. The case fans and (initially) the power supply were also carried over from the previous FX build (they aren’t the same ones from back in 2006…those got loud and died long ago). I also kept my Blu-ray drive and 3.5-inch card reader. That’s where the similarities end. Here is a specs comparison:

Sagitta Rebuild Benchmark Machine Specs

  • Note: I ended up upgrading the power supply to the one shown in the table. More on that below…

System Power Consumption

Initially, the power consumption at idle of the new Ryzen 9 build, measured with my P3 Kill A Watt Meter, was 86 watts. The power consumption while running GPU Folding was 170 watts (and the all-core CPU folding was over 250 watts, but that’s another article entirely).

Using the same Nvidia GeForce GTX 1650 graphics card, these idle and GPU folding power numbers were unfortunately higher than on the old benchmark machine, which came in at 70 watts idle and 145 watts load. This is likely due to the overkill hardware in the new rig (X570 motherboards alone are known to draw twice the power of a more typical board). The 25-watt difference while folding was especially problematic for my efficiency testing, since efficiency plots made on the new machine would not be directly comparable to those of graphics cards tested on the old one.

To solve this, I could either:

A: Use a 25 watt offset to scale the new GPU F@H efficiency plots

B: Do nothing and just have less accurate efficiency comparisons to previous tests

C: Reduce the power consumption of the new build so that it matches the old one

This being a blog about energy efficiency, I decided to go with Option C, since that’s the one that actually helps the environment. Let’s see if we can trim the fat off of this beast of a computer!

Efficiency Boost #1: Power Supply Upgrade

The first thing I tried was to upgrade the power supply. As noted here, the power supply’s efficiency rating is a great place to start when building an energy-efficient machine. My old Seasonic X-650 is a very good power supply, and carries an 80+ Gold rating. Still, things have come a long way, and switching to an 80+ Titanium PSU can gain a few efficiency percentage points, especially at low loads.

80+ Efficiency Table

With that 3-5% efficiency boost in mind, I picked up a new Seasonic 750 Watt Prime 80+ Titanium modular power supply. At $200, this PSU isn’t cheap, but it provides a noticeable efficiency improvement at both idle and load. Other nice features were the additional 100 watts of capacity, and the fact that it supports my new motherboard’s dual (8-pin + 4-pin) CPU auxiliary power connectors. That extra 4-pin isn’t required to make the X570 board work, but it does allow for more overclocking headroom.

Disclaimer: Before we get into it, I should note that these power readings are “eyeball” readings, taken by glancing at the watt meter and trying to judge the average usage. The actual number jumps around a bit (even at idle) as the computer executes various background tasks. I’d say the measurement precision on any eyeball watt meter readings is +/- 5 watts, so take the below with a grain of salt. These are very small efficiency improvements that are difficult to measure, and your mileage may vary. 

After upgrading the power supply, idle power dropped an impressive 10 watts, from 86 watts to 76. This is an awesome 11% efficiency improvement. This might be due to the new 80+ Titanium power supply having an efficiency target at very low loads (90% efficiency at 10% load), whereas the old 80+ Gold spec did not have a low load efficiency requirement. Thus, even though I used a large 750 watt power supply, the machine can still remain relatively efficient at idle.

Under moderate load (GPU folding), the new 80+ Titanium PSU provided a 4% efficiency improvement, dropping the power consumption from 170 watts to 163. This is more in line with expectations.
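
As a rough sanity check, these numbers line up with the published 80+ targets. Assuming the old Gold unit was running around 88% efficient at this load and the new Titanium unit around 92% (assumed values for illustration, not measured), the expected wall draw works out to roughly what the watt meter showed:

```python
# Back-of-envelope check of the PSU swap (assumed efficiencies, not measured):
# wall power = DC power drawn by the components / PSU efficiency.
dc_load_watts = 170 * 0.88            # ~150 W DC implied by 170 W at the wall, ~88% (Gold)
wall_gold = dc_load_watts / 0.88      # 170 W
wall_titanium = dc_load_watts / 0.92  # ~163 W at ~92% (Titanium at roughly 20% load)

print(f"Estimated wall draw: Gold {wall_gold:.0f} W, Titanium {wall_titanium:.0f} W")
```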

Efficiency Boost #2: Processor Underclock / Undervolt

Thanks to the video gaming mentality, enthusiast-grade desktop processors and motherboards are tuned out of the box for performance. We’re talking about blistering fast, competition-crushing benchmark scores. For most computing tasks (such as running Folding@Home on a graphics card), this aggressive CPU behavior wastes electricity while offering no discernible performance benefit. Despite what my kid’s shirt says, we need to rein these power-hungry CPUs in for maximum GPU folding efficiency.

Kai Says: Never Slow Down

One way to improve processor efficiency is to reduce the clock rate and the associated voltage. I’d previously investigated this here. It takes disproportionately more voltage to support higher frequencies, so just by dropping the clock rate by 100 MHz or so, you can lower the voltage a good bit and save on power.
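
The underlying relationship is the first-order CMOS dynamic power equation, where power scales with frequency and with the square of voltage. A quick sketch (the voltage and frequency numbers below are purely illustrative, not values measured on this chip):

```python
# First-order CMOS dynamic power scaling: P is proportional to C * V^2 * f.
# Illustrative numbers only (not measured on this processor).
def relative_power(v_new, f_new, v_old, f_old):
    """Ratio of dynamic power after an underclock/undervolt."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# e.g. dropping from 4.3 GHz @ 1.35 V to 3.5 GHz @ 1.10 V (hypothetical values)
print(f"{relative_power(1.10, 3.5, 1.35, 4.3):.0%} of the original dynamic power")
```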

With the advent of processors that automatically boost their own clocks and voltages (and drop them again at idle), manual tuning can be a bit more difficult. It’s far easier to first try the automatic settings, to see if some efficiency can be gained there.

But wait, isn’t this a GPU folding benchmark rig? Why do the CPU’s frequency and power settings matter?

For GPU folding with an Nvidia graphics card, one CPU core is fully loaded per GPU slot in order to “feed” the card. This is because Nvidia’s OpenCL implementation uses a polling (checking) method. In order to keep the graphics card chugging along, the CPU constantly checks on the GPU to see if it needs any data. This polling loop is not efficient and burns unnecessary power. You can read more about it here: https://foldingforum.org/viewtopic.php?f=80&t=34023. In contrast, AMD’s method (interrupts) is a much more graceful implementation that doesn’t lock up a CPU core.

The constant polling loop drives modern gaming-oriented processors to clock up their cores unnecessarily. For the most part, the GPU does not need work at every waking moment. To save power, we can turn down the frequency, so that the CPU is not constantly knocking on the GPU’s metaphorical door.
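
To illustrate the difference, here is a toy model (not actual driver or Folding@Home code) comparing a busy polling loop with a blocking, interrupt-style wait:

```python
# Toy illustration (not actual driver or Folding@Home code) of why polling
# pins a CPU core while a blocking/interrupt-style wait does not.
import threading

gpu_needs_data = threading.Event()

def feeder_polling():
    # Busy loop: the core spins at ~100% checking a flag over and over.
    while not gpu_needs_data.is_set():
        pass                      # burns CPU cycles doing nothing useful

def feeder_blocking():
    # Blocking wait: the thread sleeps until it is woken, using ~0% CPU.
    gpu_needs_data.wait()

# In this toy model, something else eventually signals that work is needed:
threading.Timer(1.0, gpu_needs_data.set).start()
feeder_blocking()                 # returns about a second later, having idled
print("GPU asked for data; feeding it now")
```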

To do this, I disabled AMD’s Core Performance Boost (CPB) in the AMD Overclocking section of the BIOS (same thing as Intel’s Turbo Boost). This caps the processor speed at the base maximum clock rate (3.5 GHz for the Ryzen 9 3950x), and also eliminates any high voltage values required to support the boost clocks.

Success! GPU folding total system power consumption is now much lower. With less superfluous power draw from the CPU, the wattage is much more comparable to the old Bulldozer rig.

Ryzen 9 3950x Power Reduction Table

It is interesting that idle power consumption came down as well. That wasn’t expected; when the computer isn’t doing anything, the CPU cores should already be down-clocked or parked. Perhaps my machine was doing something in the background during the earlier tests, thus throwing the results off. More investigation is needed.

GPU Benchmark Consistency Check

I fired up GPU folding on the Nvidia GeForce GTX 1650, a card that I have performance data for from my previous benchmark desktop. After monitoring it for a week, the Folding@Home Points Per Day performance was so similar to the previous results that I ended up using the same value (310K PPD) as the official estimate for the 1650’s production. This shows that the old benchmark rig was not a bottleneck for a budget card like the GeForce GTX 1650.

Using the updated system power consumption of nominally 140 watts (vs 145 watts of the previous benchmark machine), the efficiency plots (PPD/Watt) come out very nearly the same. I typically consider power measurements of + / – 5 watts to be within the measurement accuracy of my eyeball on the watt meter anyway, due to normal variations as the system runs. The good news is that even with this variation, it doesn’t change the conclusion of the figure (in terms of graphics card efficiency ranking).
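
For reference, here is the simple efficiency math with that eyeball uncertainty folded in (a small sketch using the nominal 310K PPD figure from above):

```python
# PPD/Watt for the GTX 1650 on the new rig, including the +/- 5 W eyeball
# uncertainty on the watt meter reading.
ppd = 310_000
watts = 140

for w in (watts - 5, watts, watts + 5):
    print(f"{w} W -> {ppd / w:,.0f} PPD/W")
```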

GTX 1650 Efficiency on Ryzen 9

* Benchmark performed on updated Ryzen 9 build

Conclusion

I have a new 16-core beast of a benchmark machine. This computer wasn’t built exclusively for efficiency, but after a few tweaks, I was able to improve energy efficiency at low CPU loads (such as Windows Idle + GPU Folding).

For most of the graphics cards I have tested so far, the massive upgrade in system hardware will not likely affect performance or efficiency results. Very fast cards, such as the 1080 Ti, might benefit from the new benchmark rig’s faster hardware, especially that PCI-Express 4.0 x16 graphics card slot. Most importantly, future tests of blistering fast graphics cards (2080 Ti, 3080 Ti, etc) will probably not be limited by the benchmark machine’s background hardware.

Oh, I can also now encode my backup copies of my blu-ray movies at 40 fps in H.265 in Handbrake (old speed was 6.5 fps on the FX-8320e). That’s a nice bonus too.

Efficiency Note (for GPU Folding@Home Users)

Disabling the automatic processor frequency and voltage scaling (Turbo Boost / Core Performance Boost) didn’t have any effect on the PPD being generated by the graphics card. This makes sense; even relatively slow 2.0 GHz CPU cores are still fast enough to feed most GPUs, and my modern Ryzen 9 at 3.5 GHz is no bottleneck for feeding the 1650. By disabling CPB, I shaved 23 watts off of the system’s power consumption for literally no performance impact while running GPU folding. This is a 16 percent boost in PPD/Watt efficiency, for free!

This also dropped CPU temps from 70 degrees C to 55, and resulted in a lower CPU cooler fan speed / quieter machine. This should promote longevity of the hardware, and reduce how much my computer fights my air conditioning in the summer, thus having a compounding positive effect on my monthly electric bill.

Future Articles

  • Re-Test the 1080 Ti to see if a fast graphics card makes better use of the faster PCI-Express bus on the AM4 build
  • Investigate CPU folding efficiency on the Ryzen 9 3950x


Shout out to the helpers…Kai and Sam