AMD EPYC Milan Review Part 2: Testing 8 to 64 Cores in a Production Platform
by Andrei Frumusanu on June 25, 2021 9:30 AM EST

Test Bed and Setup - Compiler Options
For the rest of our performance testing, we’re disclosing the details of the various test setups:
AMD - Dual EPYC 7763 / 75F3 / 7443 / 7343 / 72F3
For today’s review and its new performance figures, we’re using GIGABYTE’s new MZ72-HB0 rev. 3.0 board as the primary test platform for the EPYC 7763, 75F3, 7443, 7343 and 72F3. The system is running under fully default settings, meaning performance or power determinism as configured by AMD in their default SKU fuse settings.
CPU | 2x AMD EPYC 7763 (2.45-3.50 GHz, 64c, 256 MB L3, 280W) / 2x AMD EPYC 75F3 (2.95-4.00 GHz, 32c, 256 MB L3, 280W) / 2x AMD EPYC 7443 (2.85-4.00 GHz, 24c, 128 MB L3, 200W) / 2x AMD EPYC 7343 (3.20-3.90 GHz, 16c, 128 MB L3, 190W) / 2x AMD EPYC 72F3 (3.70-4.10 GHz, 8c, 256 MB L3, 180W) |
RAM | 512 GB (16x32 GB) Micron DDR4-3200 |
Internal Disks | Crucial MX300 1TB |
Motherboard | GIGABYTE MZ72-HB0 (rev. 3.0) |
PSU | EVGA 1600 T2 (1600W) |
Software-wise, we ran Ubuntu 20.10 images with the latest release 5.11 Linux kernel. Performance settings both in the OS and in the BIOS were left at their defaults, including the regular schedutil-based frequency governor, with the CPUs running in performance determinism mode at their respective default TDPs unless otherwise indicated.
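For anyone replicating the setup, the kernel release and active governor are easy to sanity-check from sysfs; a minimal sketch, assuming only the stock Linux cpufreq interface (nothing here is specific to our harness):

```python
import os
import platform

# Kernel release, e.g. a 5.11 build on the Ubuntu 20.10 images used here
print("kernel:", platform.release())

# The cpufreq governor is exposed per core under sysfs; we expect "schedutil"
gov_path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
if os.path.exists(gov_path):
    with open(gov_path) as f:
        print("governor:", f.read().strip())
```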
AMD - Dual EPYC 7713 / 7662
As we didn’t have access to the 7713 for this review, we’re reusing our older test numbers for that chip on AMD’s Daytona platform. We also tested the Rome-based EPYC 7662 on the same platform – neither of these chips exhibited any issues in terms of their power behaviour.
CPU | 2x AMD EPYC 7713 (2.00-3.675 GHz, 64c, 256 MB L3, 225W) / 2x AMD EPYC 7662 (2.00-3.30 GHz, 64c, 256 MB L3, 225W) |
RAM | 512 GB (16x32 GB) Micron DDR4-3200 |
Internal Disks | Varying |
Motherboard | Daytona reference board: S5BQ |
PSU | PWS-1200 |
AMD - Dual EPYC 7742
Our local AMD EPYC 7742 system, due to the aforementioned issues with the Daytona hardware, is running on a SuperMicro H11DSI Rev 2.0.
CPU | 2x AMD EPYC 7742 (2.25-3.4 GHz, 64c, 256 MB L3, 225W) |
RAM | 512 GB (16x32 GB) Micron DDR4-3200 |
Internal Disks | Crucial MX300 1TB |
Motherboard | SuperMicro H11DSI (Rev 2.0) |
PSU | EVGA 1600 T2 (1600W) |
As the operating system we’re using Ubuntu 20.10 with no further optimisations. In terms of BIOS settings we’re using complete defaults, including retaining the default 225W TDP of the EPYC 7742s, and leaving further CPU configurables on auto, except for the NPS setting, where we explicitly state the configuration in the results.
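Since the NPS setting determines how many NUMA nodes the firmware exposes to the OS, the active configuration can be confirmed by counting the nodes the kernel enumerates; a minimal sketch, assuming the standard Linux sysfs layout:

```python
import glob

# Each exposed NUMA node appears as /sys/devices/system/node/nodeN: a
# dual-socket EPYC shows 2 nodes in NPS1, 4 in NPS2 and 8 in NPS4.
nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
print(f"{len(nodes)} NUMA node(s) exposed")
```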
The system has all relevant security mitigations activated against speculative store bypass and Spectre variants.
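The kernel itself reports the status of each mitigation, so this is straightforward to verify; a minimal sketch, again assuming the standard sysfs layout:

```python
import os

# The kernel lists every known vulnerability and its mitigation status here,
# e.g. "spec_store_bypass: Mitigation: Speculative Store Bypass disabled ..."
vuln_dir = "/sys/devices/system/cpu/vulnerabilities"
for name in sorted(os.listdir(vuln_dir)):
    with open(os.path.join(vuln_dir, name)) as f:
        print(f"{name}: {f.read().strip()}")
```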
Intel - Dual Xeon Platinum 8380
For our new Ice Lake test system based on the Whitley platform, we’re using Intel’s SDP (Software Development Platform) S2W3SIL4Q, featuring a 2-socket Intel server board (Coyote Pass).
The system is an airflow optimised 2U rack unit with otherwise little fanfare.
Our review setup solely includes the new Intel Xeon 8380 with 40 cores, a 2.3GHz base clock, 3.0GHz all-core boost, and 3.4GHz peak single-core boost. What’s unusual about this part, as noted in the intro, is that it runs at a default 270W TDP, which is above what we’ve seen from previous-generation non-specialised Intel SKUs.
CPU | 2x Intel Xeon Platinum 8380 (2.3-3.4 GHz, 40c, 60MB L3, 270W) |
RAM | 512 GB (16x32 GB) SK Hynix DDR4-3200 |
Internal Disks | Intel SSD P5510 7.68TB |
Motherboard | Intel Coyote Pass (Server System S2W3SIL4Q) |
PSU | 2x Platinum 2100W |
The system came with several SSDs, including Optane SSD P5800X drives; however, we ran our test suite on the P5510 – not that our current benchmarks are I/O-bound anyhow.
As per Intel guidance, we’re using the latest BIOS available with the 270 release microcode update.
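To confirm that the update took, the loaded microcode revision can be read back from /proc/cpuinfo; a minimal sketch (the "microcode" field is standard on x86 Linux, though the revision value itself is simply whatever the BIOS ships):

```python
# All logical CPUs should report the same "microcode" revision after a
# successful BIOS/microcode update; a mixed set would indicate a problem.
revisions = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("microcode"):
            revisions.add(line.split(":", 1)[1].strip())
print("microcode revision(s):", revisions)
```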
Intel - Dual Xeon Platinum 8280
For the older Cascade Lake Intel system we’re also using a test-bench setup with the same SSD and OS image as on the EPYC 7742 system.
Because the Xeons only have 6-channel memory, their maximum capacity is limited to 384GB of the same Micron memory, running at a default 2933MHz to remain in-spec with the processor’s capabilities.
CPU | 2x Intel Xeon Platinum 8280 (2.7-4.0 GHz, 28c, 38.5MB L3, 205W) |
RAM | 384 GB (12x32 GB) Micron DDR4-3200 (running at 2933 MHz) |
Internal Disks | Crucial MX300 1TB |
Motherboard | ASRock EP2C621D12 WS |
PSU | EVGA 1600 T2 (1600W) |
The Xeon system was similarly run on BIOS defaults on an ASRock EP2C621D12 WS with the latest firmware available.
Ampere "Mount Jade" - Dual Altra Q80-33
For the Ampere Altra system, we’re using the Mount Jade server as provided and configured by Ampere. The system features two Altra Q80-33 processors on Ampere’s Mount Jade DVT motherboard.
In terms of memory, we’re using the bundled sixteen 32GB DIMMs of Samsung DDR4-3200, for a total of 512GB – 256GB per socket.
CPU | 2x Ampere Altra Q80-33 (3.3 GHz, 80c, 32 MB L3, 250W) |
RAM | 512 GB (16x32 GB) Samsung DDR4-3200 |
Internal Disks | Samsung MZ-QLB960NE 960GB / Samsung MZ-1LB960NE 960GB |
Motherboard | Mount Jade DVT Reference Motherboard |
PSU | 2000W (94%) |
The system came preinstalled with CentOS 8, and we continued to use that OS. It’s worth noting that the server is Arm SBSA compatible, and can thus run any standard Linux distribution.
The only other note to make about the system is that the OS is running with 64KB pages rather than the usual 4KB pages. This can be seen either as a testing discrepancy or as an inherent advantage of the Arm system, given that the next page-size step on x86 systems is 2MB – which isn’t feasible for general use-case testing, and something deployments would have to explicitly decide to enable.
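Checking which page size a system actually presents to software is a one-liner; a minimal sketch using Python’s standard os.sysconf interface:

```python
import os

# Reports the kernel's base page size in bytes: 4096 on typical x86 installs,
# 65536 on the 64KB-page CentOS image this Altra system runs.
print("page size:", os.sysconf("SC_PAGE_SIZE"), "bytes")
```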
The system has all relevant security mitigations activated, including SSBS (Speculative Store Bypass Safe) against Spectre variants.
Compiler Setup
For compiled tests, we’re using the release version of GCC 10.2. The toolchain was compiled from scratch on both the x86 systems as well as the Altra system. We’re using shared binaries with the system’s libc libraries.
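For illustration, a build step in such a setup looks roughly like the sketch below; note that the architecture flags shown are representative assumptions on our part for each platform, not a published flag list from the test suite:

```python
import platform
import subprocess

# Hypothetical build of a single benchmark source with GCC 10.2, picking an
# architecture flag per host; the exact flags are illustrative assumptions.
arch_flags = {
    "x86_64": "-march=native",       # Xeon / EPYC hosts
    "aarch64": "-mcpu=neoverse-n1",  # Ampere Altra (Neoverse N1 cores)
}
flags = ["-Ofast", arch_flags.get(platform.machine(), "-O2")]
subprocess.run(["gcc", *flags, "-o", "bench", "bench.c"], check=True)
```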
58 Comments
eastcoast_pete - Friday, June 25, 2021 - link
Interesting CPUs. Regarding the 72F3, the part for highest per-core performance: Dating myself here, but I recall working with an Oracle DB that was licensed/billed on a per-core or per-thread basis (forgot which one it was). Is that still the case, and what other programs (still) use that licensing approach?

And, just to add perspective: the annual payments for that Oracle license dwarfed the price of the CPU it ran on, so yes, such processors can have their place.
flgt - Friday, June 25, 2021 - link
More workstation than server, but companies like Ansys still require additional license packs to go beyond 4 cores with some of their tools, and they often come with hefty 5-figure price tags depending on the program and your organization's bargaining ability.

RollingCamel - Friday, June 25, 2021 - link
It was refreshing to see Midas NFX running without core limitations. The core limitations are archaic and don't represent current development – unless the license policies have evolved in the past years.
realbabilu - Monday, June 28, 2021 - link
Interesting to see the FEA implementation with the latest math kernel available, like the Midas NFX bench, on AnandTech. Hopefully the AnandTech team got demos from Midas Korea for testing – Abaqus, MSC Nastran, Inventor FEA, anything will do.

However, I don't think Midas improved their math kernel; I had Midas Civil and GTS licensed, but couldn't use all the threads and memory I had on my PC, like other FEA software did.
mrvco - Friday, June 25, 2021 - link
Power unit pricing! LOL, the dreaded Oracle audit when they needed to find a way to make their quarterly numbers!?!?!

eek2121 - Friday, June 25, 2021 - link
It blows my mind that people still use Oracle products.Urbanfox - Sunday, June 27, 2021 - link
For a lot of things there isn't a viable alternative. The hotel industry is a great example of this with Opera and Micros.phr3dly - Friday, June 25, 2021 - link
In the EDA world we pay per-process licensing. As with your scenario, the license cost dwarfs the CPU cost, particularly over the 3-year lifespan of a server. You might easily spend 50x the cost of the server on the licenses, depending on the number of cores. Trying to optimize core speed/core count/eventual server load against license cost is a fun optimization problem.

So yeah, the CPU cost is irrelevant. I'm happy to pay an extra several thousand dollars for a moderate performance improvement.
eldakka - Saturday, June 26, 2021 - link
> Oracle DB that was licensed/billed on a per-core or per-thread basis (forgot which one it was). Is that still the case, and what other programs (still) use that licensing approach?

Lots of Enterprise applications still use that approach: Oracle (not just DB); IBM products – WebSphere Application Server (all flavours: standalone, ND, Liberty, etc.), messaging products like WebSphere MQ; I believe SAP uses it; and many RedHat 'middleware' products (e.g. JBOSS web server, EAS, etc.) use it as well.
In the enterprise space, it is basically the expected licensing model.
And the licensing cost per core is usually 'generation'-dependent. So you usually can't just upgrade from, say, a 20-core 6th-gen Xeon to a 20-core 8th-gen and expect to pay the same.
The typical model is a 'PVU', Processor Value Unit (different companies may give it a different label - and value different processors differently, but it usually boils down to the same thing). Each platform-generation (as decided by the software vendor, i.e. Oracle, or IBM, etc.) has a certain PVU per core. E.g. (making up numbers here), a POWER8 (2014ish release) might have a PVU of 4000, and an Intel Haswell/Ivy Bridge - E5/7 v2/v3, I think (2014/15ish) - might be given 3500. So if using an 8-core P8 LPAR that'd be 32000 PVU, while an 8-core VI on E7 v3 would be 28000. And P9 might be 7000 PVU, a Milan might be 6000 PVU, so for 8 cores it'd be 56000 or 48000 respectively. Then there will be a dollar-per-PVU multiplier based on the software, so software bob could be x1 per year, so in the P9 case a $56k/year license, whereas software fred might be a x4 multiplier, so $224k/year. And yes, there can be instances where software being run on a single server (not even necessarily a beefy server) can be $millions/year licensing. If some piece of software is critical but has low CPU/memory requirements, such as an API-layer running on x86 hardware that allows midrange-based apps to interface directly to mainframe apps, they can charge millions for it even if it's only using a combined 8 cores across 4 2-core server instances (for HA) - though in this case, where they know it requires tiny resources, they'll switch to a per-instance rather than a per-core pricing model, and charge $500k per instance for example.
The per-core PVU can even change within a generation depending on specific CPU feature sets, e.g. the 2TB per-socket limited Xeon might be 5000 PVU, but the 4TB per-socket SKU of the same generation might be 6000 PVU, just because the vendor thinks "they need lots of memory, therefore they are working on big data sets, well, obviously, that's more money!" – they want their tithe.
Oh, and nearly forgot to mention, the PVU can even change depending on whether the software vendor is trying to push their own hardware. IBM might give a 'PVU' discount if you run their IBM products on IBM Power hardware versus another vendor's hardware, to try and push Power sales. So even though, in theory, a P9 might be more PVU than a Xeon, since you are running IBM DB2 on the P9 they'll charge you less money for what is conceptually a higher PVU than on, say, a Xeon, to nudge you towards IBM hardware. Oracle has been known to do this with their acquired SPARC (Sun)-based hardware too.
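To make that arithmetic concrete, here's a small sketch using the made-up figures from above (again, none of these PVU ratings or multipliers are real price-list values):

```python
# Annual license cost = cores x PVU-per-core x dollar-per-PVU multiplier.
def annual_license_cost(cores, pvu_per_core, dollars_per_pvu):
    return cores * pvu_per_core * dollars_per_pvu

# 8 cores on a P9 rated at a made-up 7000 PVU/core, software at x1 ($1/PVU):
print(annual_license_cost(8, 7000, 1))  # 56000 -> the $56k/year case
# The same 8 cores running a x4-multiplier piece of software:
print(annual_license_cost(8, 7000, 4))  # 224000 -> the $224k/year case
```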
eastcoast_pete - Saturday, June 26, 2021 - link
Thanks! I must say that I am glad I don't have to deal with that aspect of computing anymore. I also wonder if people have analyzed just how severely these pricing schemes have blunted or prevented advancements in hardware capabilities?