The GIGABYTE MZ72-HB0 (Rev 3.0) Motherboard Review: Dual Socket 3rd Gen EPYC
by Gavin Bonshor on August 2, 2021 9:30 AM EST - Posted in
- Motherboards
- AMD
- Gigabyte
- GIGABYTE Server
- Milan
- EPYC 7003
- MZ72-HB0
Back in March, we reviewed AMD's latest Zen 3 based EPYC 7003 processors, including the 64-core EPYC 7763 and 7713. We updated that data in June with a retail motherboard, and the scores were noticeably higher, showing how EPYC Milan has been refined since launch. Putting two 64-core processors into a system requires a more than capable motherboard, and today on the test bench is the GIGABYTE MZ72-HB0 (Revision 3.0), which has plenty of features to boast about. Some of the most important ones include five full-length PCIe 4.0 slots, dual 10 GbE, plenty of PCIe 4.0 NVMe and SATA storage options, dual SP3 sockets, and sixteen memory slots with support for up to 4 TB of capacity.
GIGABYTE MZ72-HB0 Overview
Although the GIGABYTE MZ72-HB0 motherboard for AMD's EPYC processors fundamentally isn't new, we reported during Computex 2021 that GIGABYTE had released a new revision (Rev 3.0) of this model to support both Milan (7003) and Rome (7002) out of the box, as the initial Revision 1.0 model only supported Naples (7001) and Rome (7002). This is due to a small shift in AMD's product stack: the latest 64-core processors now push a TDP of 280 W per processor, rather than 240 W, and while the socket is the same across all three generations, motherboards support either 7001+7002 or 7002+7003 depending on when they were designed. So if you want the MZ72-HB0 to support Milan 7003 processors, you need Revision 3.0, which is what we have today.
As with many server-focused motherboards, even in more 'standard' form factors, the GIGABYTE MZ72-HB0 focuses on functionality and substance over style. GIGABYTE has opted for its typical blue-colored PCB, with the same theme stretching to the sixteen memory slots on the board. On the memory front, the MZ72-HB0 supports up to 2 TB per socket in eight-channel mode, with DDR4-3200 RDIMM, LRDIMM, and 3DS varieties all supported. As this is a dual-socket EPYC motherboard, there are two SP3 sockets, each with four horizontally mounted memory slots on either side, and each socket can house processors up to 280 W TDP.
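As a quick sanity check of the capacity math, here is a sketch of how sixteen slots reach the rated 4 TB; the 256 GB 3DS DIMM size is our assumption based on the stated maximum, not a vendor qualification list:

```python
# Memory capacity math for the MZ72-HB0's sixteen DIMM slots.
SOCKETS = 2
CHANNELS_PER_SOCKET = 8      # EPYC 7002/7003 are eight-channel parts
DIMMS_PER_CHANNEL = 1        # 16 slots / 2 sockets = one DIMM per channel
DIMM_CAPACITY_GB = 256       # assumed 3DS LRDIMM size to reach the rated maximum

per_socket_tb = CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL * DIMM_CAPACITY_GB / 1024
total_tb = SOCKETS * per_socket_tb
print(f"{per_socket_tb:.0f} TB per socket, {total_tb:.0f} TB total")  # 2 TB, 4 TB
```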
Looking at connectivity, the MZ72-HB0 has five full-length PCIe 4.0 slots, three of which support the full PCIe 4.0 x16 bandwidth, while the other two are x8 but still full length. To balance the load on each CPU, three of the slots are controlled by the left CPU (looking at the layout above), with the other two controlled by the right CPU. More detail on this is on the following page, where we analyze the topology of the motherboard.
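To put rough numbers on what those slots can move, here is a sketch of per-slot PCIe 4.0 throughput. The lane widths and the three/two CPU split come from the description above, but exactly which widths hang off which CPU is a placeholder pending the topology analysis:

```python
# Approximate one-direction PCIe 4.0 throughput per slot.
GTS = 16                            # PCIe 4.0 transfer rate per lane (GT/s)
ENCODING = 128 / 130                # 128b/130b line encoding overhead
GBPS_PER_LANE = GTS * ENCODING / 8  # ~1.97 GB/s per lane, per direction

# Hypothetical slot-to-CPU assignment for illustration only.
slots = {"Slot 1 (CPU0)": 16, "Slot 2 (CPU0)": 16, "Slot 3 (CPU0)": 8,
         "Slot 4 (CPU1)": 16, "Slot 5 (CPU1)": 8}

for name, lanes in slots.items():
    print(f"{name}: x{lanes} = {lanes * GBPS_PER_LANE:.1f} GB/s")
```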
On the rear panel is a basic selection of ports, including two USB 3.0 Type-A ports, plus a D-Sub output and a Gigabit management LAN port that provide access to the BMC, which is driven by the commonly used ASPEED AST2600 controller. Networking connectivity consists of two 10 GbE ports, while storage options are plentiful. These include one physical PCIe 4.0 x4 M.2 slot, two NVMe SlimSAS 4i ports, and three SlimSAS ports capable of supporting up to twelve SATA drives or three PCIe 4.0 x4 NVMe drives. For conventional SATA storage, the board also has four SATA ports.
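The AST2600 pairs that management port with remote APIs; below is a minimal sketch of querying the BMC over Redfish, the DMTF standard most modern BMC firmware exposes. The address and credentials are placeholders, and the exact resources available on GIGABYTE's firmware may differ:

```python
import requests

BMC = "https://192.0.2.10"    # placeholder: BMC address on the management LAN port
AUTH = ("admin", "password")  # placeholder credentials

# Walk the standard Redfish systems collection and report power state.
resp = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False)
resp.raise_for_status()
for member in resp.json().get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"), system.get("PowerState"))
```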
Touching on performance, it's no surprise that the MZ72-HB0 takes a long time to boot into Windows - it took us just over two and a half minutes from powering the system on to loading into the OS. A cold boot takes this long because the system needs time to initialize the networking controllers, the BMC, and other critical elements before it is ready to POST. In terms of power, we measured a peak draw at full load with dual 280 W processors of 782 W. In our DPC latency testing, the GIGABYTE didn't score that well, but that is usually par for the course for server motherboards with BMC interfaces.
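For reference, wall-meter figures like our 782 W peak can be cross-checked against the BMC itself; here is a sketch using ipmitool's standard DCMI power query. Whether this particular firmware implements DCMI power metering is an assumption, and the host and credentials are placeholders:

```python
import subprocess

# Query the BMC's DCMI power reading over IPMI-over-LAN.
result = subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10",
     "-U", "admin", "-P", "password", "dcmi", "power", "reading"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # reports instantaneous/min/max/avg power in watts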
We tested numerous dual-socket EPYC 7003 configurations on this board; for our up-to-date CPU performance numbers, please check out the link below:
Two AMD EPYC 7763 processors running Cinebench R23 - 256 threads anyone?
In this particular market space, there are plenty of dedicated 1U server options capable of supporting one or two EPYC 7003 processors, as well as the custom market. ASUS, ASRock Rack, GIGABYTE Server, and others have options to suit all manner of configurations, but there are few dual-socket options in more standard form factors like the E-ATX GIGABYTE MZ72-HB0. That makes the MZ72-HB0 interesting, as it's clear GIGABYTE Server has risen to the challenge of fitting two large SP3 sockets and five full-length PCIe 4.0 slots, along with all the other controllers and connectivity needed to benefit from EPYC's large PCIe lane count. There are limitations imposed by the smaller E-ATX form factor, including 16 memory slots rather than 32, and fewer PCIe slots than the platform's full 128 lanes could feed (only 88 lanes are used on this board; a plausible tally is sketched below), but let's get into the review and see how the GIGABYTE MZ72-HB0 Rev 3.0 handles our benchmark suite.
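One plausible accounting of those 88 lanes, tallied from the connectivity listed earlier; how the remaining 40 lanes are allocated or left unused is not something we break down here:

```python
# Tallying the PCIe 4.0 lanes the MZ72-HB0 exposes against EPYC's 128.
lanes = {
    "3 full-length x16 slots":   3 * 16,
    "2 full-length x8 slots":    2 * 8,
    "M.2 slot (x4)":             4,
    "2 NVMe SlimSAS 4i ports":   2 * 4,
    "3 SlimSAS ports (x4 each)": 3 * 4,
}
used = sum(lanes.values())
print(f"{used} of 128 lanes exposed")  # 88 of 128
```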
Read on for our extended analysis.
Comments
tygrus - Monday, August 2, 2021
There are not many apps/tasks that make good use of more than 64c/128t. Some of those tasks are better suited to GPUs, accelerators, or a cluster of networked systems. Some tasks just love having TBs of RAM, while others will be limited by data I/O (storage drives, network). YMMV. Have fun testing it, but it will be interesting to find people with real use cases that can afford this.

questionlp - Monday, August 2, 2021
Being capable of handling more than 64c/128t across two sockets doesn't mean that everyone will drop that many cores onto this board. You can install two higher-clocked 32c/64t processors, one in each socket, and have a shedload of RAM and I/O for in-memory databases, software-defined (insert service here), or virtualization (or a combination of those). Install lower core count, even higher clock speed CPUs and you have yourself an immensely capable platform for per-core licensed enterprise database solutions.
niva - Wednesday, August 4, 2021
You can, but why would you when you can get a system where you can slot a single CPU with 64C? This is a board for the cases where 64C is clearly not enough, really catering to server use. For cases where fewer cores but more power per core are needed, there are simply better options.
questionlp - Wednesday, August 4, 2021
The fastest 64c/128t EPYC CPU right now has a base clock of 2.45 GHz (7763), while you can get 2.8 GHz with a 32c/64t 7543. Slap two of those on this board and you'll get a lot more CPU power than a single 64c/128t, plus double the number of memory channels.

Another consideration is licensing. IIRC, VMware per-CPU licensing maxes out at 32c per socket. To cover a single 64c EPYC, you would end up with the same license count as a two 32c EPYC configuration. Some customers were grandfathered in back in 2020, but that's no longer the case for new licenses. Again, you can scale better with a 2 CPU configuration than with 1 CPU.
It all depends on the targeted workload. What may work for enterprise virtualization won't work for VPC providers, etc.
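[Ed: a quick sketch of the licensing arithmetic questionlp describes, assuming VMware's 32-core-per-license cap on per-CPU licensing:]

```python
import math

def vmware_licenses(sockets: int, cores_per_socket: int,
                    cores_per_license: int = 32) -> int:
    """Per-CPU licenses needed, each covering up to 32 cores per socket."""
    return sockets * math.ceil(cores_per_socket / cores_per_license)

print(vmware_licenses(1, 64))  # one 64c EPYC 7763 -> 2 licenses
print(vmware_licenses(2, 32))  # two 32c EPYC 7543 -> 2 licenses, 2x memory channels
```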
linuxgeex - Monday, August 2, 2021
The primary use case is in-memory databases and/or high-volume, low-latency transaction services. The secondary use case is rack unit aggregation, which is usually accomplished with virtualisation, i.e. you can fit 3x as many 80-thread high performance VPS into this as you can into any comparably priced Intel 2U rack slot, so this has huge value in a datacenter for anyone selling such a VPS in volume.

logoffon - Monday, August 2, 2021
Was there a revision 2.0 of this board?

Googer - Tuesday, August 3, 2021
There is a revision 3.0 of this board.

MirrorMax - Friday, August 27, 2021
No, and more importantly, this is exactly the same board as Rev 1.0 but with a Rome/Milan BIOS, so you can basically BIOS-update Rev 1.0 boards to Rev 3.0. It's odd that the review doesn't touch on this.

BikeDude - Monday, August 2, 2021
Task Manager screenshot reminded me of Norton Speed Disk; we now have more CPUs than we had disk clusters back in the day. :P

WaltC - Monday, August 2, 2021
In one place you say it took 2.5 minutes to POST, in another place you say it took 2.5 minutes to cold boot into Win10 Pro. I noticed you apparently used a SATA III connector for your boot drive, and I was reminded of booting Win7 from a SATA III 7200 RPM platter drive taking me 90-120 seconds to cold boot - in Win7, the more crowded your system was with 3rd-party apps and games, the longer it took to boot...;) (That's not the case with Win10/11, I'm glad to say, as with TBs of installed programs I still cold boot in ~12 secs from an NVMe OS partition.) Basically, servers are not expected to do much in the way of cold booting, as uptime is what most customers are interested in. I doubt the SATA drive had much to do with the 2.5 minute cold-boot time, though. An NVMe drive might have shaved a few seconds off the cold boot, but that's about it, imo.

Interesting read! Enjoyed it. Yes, the server market is far and away different from the consumer markets.