G493-ZB1-AAP1
HPC/AI Server - AMD EPYC™ 9005/9004 - 4U DP 10 x PCIe Gen5 GPUs (with PCIe switches)
· Supports up to 10 x Dual slot Gen5 GPUs
· Dual AMD EPYC™ 9005/9004 Series Processors
· 12-Channel DDR5 RDIMM, 48 x DIMMs
· Dual ROM Architecture
· 2 x 1Gb/s LAN ports via Intel® I350-AM2
· 8 x 2.5" Gen5 NVMe/SATA/SAS-4 hot-swap bays
· 4 x 2.5" SATA/SAS-4 hot-swap bays
· 2 x M.2 slots with PCIe Gen3 x4 interface
· 10 x FHFL PCIe Gen5 x16 slots for GPUs (see the bandwidth sketch after this list)
· 2 x LP PCIe Gen5 x16 slots on the front side
· 3+1 3000W 80 PLUS Titanium redundant power supplies
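For context on the GPU slot count above, the short sketch below multiplies out the theoretical PCIe 5.0 numbers for ten x16 slots. The 32 GT/s per-lane rate and 128b/130b encoding are PCIe 5.0 specification values, not figures from this datasheet, and the result is a per-direction upper bound rather than a measured throughput.

```python
# Theoretical PCIe 5.0 throughput implied by the slot count; spec values, not measurements.
GT_PER_S = 32.0          # PCIe 5.0 raw signalling rate per lane (GT/s)
ENCODING = 128 / 130     # 128b/130b line-code efficiency
LANES = 16               # x16 slot
SLOTS = 10               # GPU slots in this chassis

per_lane_gbps = GT_PER_S * ENCODING / 8     # ~3.94 GB/s per lane, one direction
per_slot_gbps = per_lane_gbps * LANES       # ~63 GB/s per x16 slot, one direction
total_gbps = per_slot_gbps * SLOTS          # ~630 GB/s aggregate, one direction

print(f"per x16 slot: {per_slot_gbps:.1f} GB/s, 10-slot aggregate: {total_gbps:.0f} GB/s")
```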
G493-SB1-AAP1
HPC/AI Server - 5th/4th Gen Intel® Xeon® Scalable - 4U DP 10 x PCIe Gen5 GPUs (with PCIe switches)
· Supports up to 10 x Dual slot Gen5 GPUs
· Dual 5th/4th Gen Intel® Xeon® Scalable Processors
· Dual Intel® Xeon® CPU Max Series
· 8-Channel DDR5 RDIMM, 32 x DIMMs (see the memory-bandwidth sketch after this list)
· Dual ROM Architecture
· 2 x 10Gb/s LAN ports via Intel® X710-AT2
· 12 x 3.5"/2.5" Gen5 NVMe/SATA/SAS-4 hot-swap bays
· 1 x M.2 slot with PCIe Gen3 x1 interface
· 10 x FHFL PCIe Gen5 x16 slots for GPUs
· 2 x LP PCIe Gen5 x16 slots on the front side
· 3+1 3000W 80 PLUS Titanium redundant power supplies
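As a rough illustration of the memory subsystem listed above, the sketch below estimates theoretical peak DRAM bandwidth for a dual-socket, 8-channel DDR5 layout. The 5600 MT/s data rate is an assumption (the 5th Gen Xeon maximum at one DIMM per channel) and is not stated in this spec; effective bandwidth drops with two DIMMs per channel or slower RDIMMs.

```python
# Rough peak-memory-bandwidth estimate for a dual-socket, 8-channel DDR5 platform.
# DATA_RATE_MT_S is an assumption (DDR5-5600 at 1 DIMM per channel), not a spec figure.
SOCKETS = 2
CHANNELS_PER_SOCKET = 8
BUS_WIDTH_BYTES = 8          # 64-bit DDR5 data bus per channel (excluding ECC)
DATA_RATE_MT_S = 5600        # assumed DDR5-5600

peak_gbps = SOCKETS * CHANNELS_PER_SOCKET * BUS_WIDTH_BYTES * DATA_RATE_MT_S / 1000
print(f"Theoretical peak memory bandwidth: {peak_gbps:.1f} GB/s")  # ~716.8 GB/s
```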
ESC8000A-E12
AMD EPYC™ 9004 dual-processor 4U GPU server that supports eight dual-slot GPUs, up to 24 DIMMs, 11 PCIe 5.0 slots, dual NVMe, four 3000W Titanium power supplies, OCP 3.0, and ASMB11-iKVM
· Powerful performance: Powered by AMD EPYC™ 9004 processors with up to 128 Zen 4c cores, 12-channel DDR5 at up to 4800 MHz, and a TDP of up to 400 watts per socket
· AI and HPC workloads ready: Up to eight dual-slot active or passive GPUs, NVIDIA NVLink® bridge, and NVIDIA BlueField DPU support to enable performance scaling (see the GPU-topology sketch after this list)
· Power-efficient system design: Independent CPU and GPU airflow tunnels for thermal optimization and support for up to four 3000W Titanium redundant power supplies for uninterrupted operation
· Cooling solutions: Enhanced air cooling based on CPU TDP for versatile workloads
· Scale-up storage and expansion design: Eight front-panel bays supporting a mix of tri-mode NVMe/SATA/SAS drives, plus 11 PCIe 5.0 slots for higher bandwidth and system upgrades
· Flexible networking module design: Optional OCP 3.0 module with a PCIe 5.0 interface in the rear panel for faster connectivity
· Enhanced IT-infrastructure management: ASUS ASMB11-iKVM remote control with ASPEED AST2600, ASUS Control Center IT-management software, and a hardware-level root-of-trust solution
· NVIDIA-Certified Systems™ - OVX Servers: Optimized for the NVIDIA OVX™ L40 server in an 8-GPU configuration
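Once a system like this is provisioned, a quick way to confirm that all eight GPUs enumerate and to see which pairs are joined by an NVLink bridge rather than a plain PCIe path is to read the driver's topology report. The sketch below shells out to nvidia-smi (assumed to be installed with the NVIDIA driver); it is an illustration, not an ASUS-provided tool.

```python
# Minimal sketch: list the GPUs the host sees and print the interconnect matrix.
# Assumes the NVIDIA driver and nvidia-smi are installed on the target system.
import subprocess

# GPU count, model names, and the PCIe generation each link is currently running at.
smi = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,pcie.link.gen.current", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
gpus = smi.stdout.strip().splitlines()
print(f"Detected {len(gpus)} GPUs")
for line in gpus:
    print(" ", line)

# Topology matrix: shows which GPU pairs communicate over an NVLink bridge
# versus PCIe-switch or host-bridge paths.
topo = subprocess.run(["nvidia-smi", "topo", "-m"], capture_output=True, text=True, check=True)
print(topo.stdout)
```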
ESC8000-E11
4U dual-socket GPU server powered by 5th Gen Intel® Xeon® Scalable processors that supports eight dual-slot GPUs, 32 DIMMs, 11 PCIe 5.0 slots, dual NVMe, four 3000W Titanium power supplies, OCP 3.0, and ASUS ASMB11-iKVM
· Powerful performance: Powered by 5th Gen Intel® Xeon® Scalable processors, delivering up to 21% greater general-purpose performance per watt and significantly improving AI inference and training
· AI and HPC workloads ready: Up to eight dual-slot active or passive GPUs, NVIDIA NVLink® bridge, and NVIDIA BlueField DPU support to enable performance scaling
· Power-efficient system design: Independent CPU and GPU airflow tunnels for thermal optimization and support for up to four 3000W Titanium redundant power supplies for uninterrupted operation
· Cooling solutions: Enhanced air cooling based on CPU TDP for versatile workloads
· Scale-up storage and expansion design: Eight front-panel bays supporting a mix of tri-mode NVMe/SATA/SAS drives, plus 11 PCIe 5.0 slots for higher bandwidth and system upgrades
· Flexible networking module design: Optional OCP 3.0 module with a PCIe 5.0 interface in the rear panel for faster connectivity
· Enhanced IT-infrastructure management: ASUS ASMB11-iKVM remote control with ASPEED AST2600, ASUS Control Center IT-management software, and a hardware-level root-of-trust solution (a Redfish query sketch follows this list)
· NVIDIA-Certified Systems™ - OVX Servers: Optimized for the NVIDIA OVX™ L40S server in an 8-GPU configuration
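For the out-of-band management features above, the minimal sketch below queries system health over Redfish, assuming the ASMB11-iKVM BMC exposes a standard DMTF Redfish service (typical for AST2600-based controllers). The BMC address and credentials are placeholders, and the resource paths follow the generic Redfish schema rather than anything ASUS-specific.

```python
# Minimal sketch: read power state and health rollup from a Redfish-capable BMC.
# Host, credentials, and certificate handling are placeholders for a lab setup.
import requests

BMC = "https://10.0.0.50"        # hypothetical BMC address
AUTH = ("admin", "changeme")     # hypothetical credentials

requests.packages.urllib3.disable_warnings()  # self-signed BMC certificate (lab only)

session = requests.Session()
session.auth = AUTH
session.verify = False

# Service root lists the collections this BMC actually implements.
root = session.get(f"{BMC}/redfish/v1").json()
print("Redfish version:", root.get("RedfishVersion"))

# Walk the Systems collection and report model, power state, and health.
systems = session.get(f"{BMC}{root['Systems']['@odata.id']}").json()
for member in systems.get("Members", []):
    system = session.get(f"{BMC}{member['@odata.id']}").json()
    print(system.get("Model"), system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```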