NVIDIA Quantum MQM8700-HS2R 200G InfiniBand Switch | 40-Port 16Tb/s Managed Switch with C2P Airflow

Product Details:

Brand Name: Mellanox
Model Number: MQM8700-HS2R (920-9B110-00RH-0M0)
Document: MQM8700 series.pdf

Payment & Shipping Terms:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Based on inventory
Payment Terms: T/T
Supply Ability: Supplied per project / batch

Detail Information

Model Number: MQM8700-HS2R
Transmission Rate: 200Gb/s per port (100Gb/s in split mode)
Ports: 40x QSFP56
Function: Adaptive routing, congestion control, QoS, integrated Subnet Manager
Technology: InfiniBand
Switch Radix: 40 ports, non-blocking
Highlights:
  • NVIDIA Quantum InfiniBand switch 200G
  • 40-port managed network switch 16Tb/s
  • Mellanox InfiniBand switch C2P airflow

Product Description

NVIDIA Quantum QM8700 Series: 200G InfiniBand Smart Switch

High‑performance fixed‑configuration switch delivering 40 ports of 200Gb/s (or 80 ports of 100Gb/s) with 16 Tb/s non‑blocking throughput, in‑network computing acceleration, and ultra‑low latency — purpose‑built for HPC, AI clusters, and hyperscale data centers.

Model: MQM8700-HS2R | C2P Airflow | AC PSU | Managed | 1U Rackmount

Total Bandwidth: 16 Tb/s non‑blocking
Port Speed: 200Gb/s per port / 100Gb/s in split mode
Port Density: 40x QSFP56, or 80x 100Gb/s
Latency: Sub‑130ns cut‑through switching
SHARP™ Support: In‑network compute, collective acceleration
Product Overview

The NVIDIA Quantum QM8700 series (including MQM8700‑HS2R) represents a new class of 200G InfiniBand smart switches, engineered to eliminate bottlenecks in AI, high‑performance computing, and cloud storage environments. With up to forty 200Gb/s ports in a compact 1U form factor, the switch delivers 16 Tb/s aggregate throughput with cut‑through latency below 130 nanoseconds. Built on NVIDIA’s scalable InfiniBand architecture, the MQM8700‑HS2R features an embedded x86 dual‑core processor, integrated Subnet Manager (up to 2,000 nodes), and support for NVIDIA SHARP™ technology, accelerating collective operations by offloading communication from servers to the network fabric.

Designed for extreme flexibility, each 200Gb/s QSFP56 port can be split into two independent 100Gb/s ports, doubling the radix for dense top‑of‑rack deployments. The MQM8700‑HS2R variant (C2P airflow) is a fully managed switch running MLNX‑OS®, ideal for enterprises seeking high‑performance networking with simple out‑of‑the‑box management via CLI, WebUI, SNMP, and JSON APIs.
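
As a hedged illustration of the JSON-based management mentioned above: the snippet below parses a port-status payload of the general shape such an API might return. The field names (`port`, `state`, `speed`) and the sample payload are assumptions for illustration, not the documented MLNX‑OS schema.

```python
# Illustrative sketch: parse a hypothetical JSON port-status payload and
# list active ports. Field names are assumed, not the real MLNX-OS schema.
import json

SAMPLE = json.dumps([
    {"port": 1, "state": "active", "speed": "200G"},
    {"port": 2, "state": "active", "speed": "100G"},
    {"port": 3, "state": "down",   "speed": None},
])

def active_ports(payload: str):
    """Return port numbers whose link state is 'active'."""
    return [p["port"] for p in json.loads(payload) if p["state"] == "active"]

print(active_ports(SAMPLE))  # [1, 2]
```

In practice the same information is also reachable through the switch's CLI or SNMP interfaces; the JSON path is convenient for scripting fabric inventory checks.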

Key Features
  • 200Gb/s InfiniBand per port – forty QSFP56 ports supporting 200G or 100G split modes, non‑blocking architecture.
  • In‑Network Computing Acceleration – NVIDIA SHARP™ technology enables in‑switch data aggregation, cutting MPI, NCCL, and SHMEM collective communication time by up to 10x.
  • High Radix & Split Capability – Convert 40x 200G ports into 80x 100G ports for double‑density topologies without extra switches.
  • Advanced Congestion Management – Adaptive routing, static routing, and quality of service (QoS) to eliminate hot spots and maximize effective fabric bandwidth.
  • Integrated Subnet Manager – On‑board SM supports up to 2,000 nodes for quick cluster bring‑up; fully managed via MLNX‑OS.
  • Redundant & Hot‑Swappable PSU – 1+1 redundant power, 80 Plus Gold certified, ENERGY STAR compliant, with power optimization on partial port usage.
  • Comprehensive Management – CLI, WebUI, SNMP, JSON, plus optional UFM™ for external advanced fabric orchestration (MQM8790 variant).
  • Backward Compatible – Seamless interoperability with previous InfiniBand generations (EDR, FDR).
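
The radix and split-port figures above translate directly into fabric size. As a sketch using the standard non‑blocking two‑level fat‑tree bound (k²/2 endpoints for k‑port switches; a textbook formula, not vendor sizing guidance):

```python
# Two-level fat-tree capacity: each k-port leaf uses k/2 ports down and
# k/2 up, so a non-blocking fabric reaches k^2 / 2 endpoints.

def fat_tree_endpoints(radix: int) -> int:
    """Max endpoints of a non-blocking two-level fat tree of k-port switches."""
    return radix * radix // 2

print(fat_tree_endpoints(40))  # 800 endpoints at native 200Gb/s
print(fat_tree_endpoints(80))  # 3200 endpoints at 100Gb/s using split ports
```

This is why the 40→80 port split matters: the same chassis count reaches four times as many 100G endpoints in a two‑tier topology.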
Technology: In‑Network Computing & SHARP™

Unlike conventional switches that only forward packets, NVIDIA Quantum switches embed scalable hierarchical aggregation and reduction protocol (SHARP) engines directly in the silicon. Data traversing the switch can be processed — aggregated, reduced, or broadcast — without multiple round‑trips to server endpoints. This dramatically accelerates collective operations like all‑reduce, barrier, and broadcast, which are critical for deep learning frameworks (TensorFlow, PyTorch via NCCL) and MPI‑based HPC simulations. The result is up to 10x performance gains for communication‑intensive workloads and reduced CPU overhead, freeing compute resources for actual application processing.
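
As a minimal, framework‑free sketch of what the all‑reduce collective computes (purely illustrative; on a SHARP‑enabled fabric this summation is performed in the switch silicon rather than by the hosts):

```python
# Each "rank" holds a gradient vector; all-reduce leaves every rank with
# the elementwise sum. With SHARP, the reduction happens in-network
# instead of bouncing partial sums between servers.

def all_reduce_sum(rank_buffers):
    """Return the reduced (summed) vector every rank would receive."""
    reduced = [0.0] * len(rank_buffers[0])
    for buf in rank_buffers:           # in-network: done by the switch ASIC
        for i, v in enumerate(buf):
            reduced[i] += v
    return reduced

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 ranks, 2-element gradients
print(all_reduce_sum(grads))  # [9.0, 12.0]
```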

The MQM8700‑HS2R also supports adaptive routing and congestion control algorithms that automatically balance traffic across multiple paths, delivering near‑line‑rate throughput even under high contention.

Typical Deployments
  • AI & Machine Learning Clusters – Large‑scale GPU‑based systems (NVIDIA DGX, HGX) requiring 200G interconnect with SHARP for NCCL acceleration.
  • High‑Performance Computing (HPC) – Research labs, national labs, and universities running MPI workloads, weather simulation, and computational fluid dynamics.
  • Hyperscale & Cloud Data Centers – Fat‑tree, DragonFly+, and multi‑dimensional torus topologies for scalable, high‑bisection bandwidth fabrics.
  • Enterprise & Financial Services – Ultra‑low latency trading platforms and database acceleration requiring predictable network performance.
  • Top‑of‑Rack (ToR) & End‑of‑Row – Double‑density 100Gb/s per server connectivity using split port capability.
Compatibility

The QM8700 series works seamlessly with NVIDIA ConnectX‑6, ConnectX‑7, and BlueField DPU adapters, supporting both InfiniBand and mixed fabrics. It is backward compatible with previous InfiniBand speeds (EDR 100Gb/s, FDR 56Gb/s). Fully interoperable with existing NVIDIA Quantum fabric switches, and managed via unified fabric manager (UFM) for telemetry and predictive monitoring. Operating system support includes major Linux distributions (RHEL, Ubuntu, Rocky Linux) and NVIDIA certified GPU servers.

Specifications

Model Number: MQM8700-HS2R
Ports & Speed: 40 QSFP56 ports, up to 200Gb/s per port; splittable into 80 ports of 100Gb/s
Aggregate Throughput: 16 Tb/s non‑blocking
Switching Latency: < 130ns (cut‑through)
Management: Fully managed; on‑board x86 dual‑core CPU (Broadwell ComEx D‑1508, 2.2GHz) with 8GB system memory; MLNX‑OS with CLI, WebUI, SNMP, JSON, and integrated Subnet Manager
Power Supply: 1+1 redundant, hot‑swappable; 100‑127VAC / 200‑240VAC; 80 Plus Gold; ENERGY STAR
Airflow: C2P (port‑to‑power); standard depth
Dimensions (HxWxD): 1.7 x 17 x 23.2 in (43.6 x 433.2 x 590.6 mm), 1U
Weight: 12.48 kg / 27.5 lbs with two PSUs
Operating Temperature: 0°C to 40°C
Certifications: CE, FCC, VCCI, ICES, RCM; RoHS compliant
Warranty: 1-year limited hardware warranty (extended options available)
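
To put the sub‑130ns cut‑through figure in context, a back‑of‑envelope serialization calculation (ours, not from the datasheet):

```python
# Compare the switch's cut-through latency to the time a message spends
# being clocked onto a 200Gb/s link.

def serialization_ns(num_bytes: int, gbps: float) -> float:
    """Time to serialize `num_bytes` onto a link of `gbps` Gb/s, in ns."""
    return num_bytes * 8 / gbps  # bits / (Gb/s) = nanoseconds

t_4k = serialization_ns(4096, 200.0)
print(round(t_4k, 2))  # 163.84 — a 4KB message takes longer to serialize
                       # than the < 130ns a cut-through hop adds
```

In other words, each switch hop adds less delay than the wire time of a single 4KB message, which is why multi‑tier Quantum fabrics stay in the microsecond range end to end.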
Selection Guide
  • MQM8700-HS2R – NVIDIA Quantum 200Gb/s InfiniBand switch, 40 QSFP56, dual AC PSU, x86 dual core, standard depth, rail kit; C2P (port‑to‑power) airflow; managed (MLNX-OS).
  • MQM8700-HS2F – Same as above with P2C (power‑to‑port) airflow; managed (MLNX-OS).
  • MQM8790-HS2F – Unmanaged variant, P2C airflow; externally managed (UFM ready).
  • MQM8790-HS2R – Unmanaged variant, C2P airflow; externally managed.

Note: For high‑density 100G deployments using split cables, both managed and unmanaged SKUs provide the same port flexibility. Choose MQM8700‑HS2R for integrated subnet management and full OS access with C2P airflow configuration.

Advantages Over Traditional Switching
  • Superior ROI – Reduce capital expenditure with double‑density 100G port capacity and lower switch count for large fabrics.
  • Energy Efficient – Dynamic power scaling based on port utilization, lowering operational costs.
  • SHARP™ Acceleration – Up to 10x faster collective communications without consuming host CPU cycles.
  • Simplified Management – On‑board Subnet Manager eliminates need for external SM servers for clusters up to 2,000 nodes.
  • Scalable Topologies – Native support for Fat Tree, DragonFly+, and Torus to future‑proof data center growth.
  • Proven Ecosystem – Backed by NVIDIA cumulative software stack and 24/7 partner support.
Service & Support

Starsurge Group provides end‑to‑end lifecycle services for NVIDIA Quantum switches, including pre‑sales architecture consulting, proof‑of‑concept testing, and global logistics. Our experienced technical team offers remote troubleshooting, firmware upgrades, and RMA coordination. Warranty extension options and 24x7 priority support available upon request. Multilingual support for EMEA, Americas, and APAC regions ensures rapid response for mission‑critical deployments.

Frequently Asked Questions
Q: What is the difference between MQM8700-HS2R and MQM8790?
A: MQM8700-HS2R includes an on‑board subnet manager and full MLNX‑OS for standalone management; MQM8790 is an unmanaged switch designed for external management via NVIDIA UFM or third‑party fabric managers.
Q: Can I use this switch with 100Gb/s ConnectX‑6 adapters?
A: Yes, each 200Gb/s port can operate at 100Gb/s using splitter cables or QSFP56 to dual QSFP56 breakout, supporting up to 80x 100Gb/s links.
Q: Does it support both InfiniBand and Ethernet?
A: QM8700 series is purpose‑built for InfiniBand, delivering RDMA, deterministic latency and in‑network computing. For Ethernet, please refer to NVIDIA Spectrum series.
Q: What type of cables are compatible?
A: Passive/active copper QSFP56 DACs, active optical cables (AOC), and optical modules for distances up to hundreds of meters.
Q: Is the switch RoHS compliant?
A: Yes, fully RoHS compliant and energy‑efficient certified.
Precautions & Installation Notes
  • Ensure ambient operating temperature remains between 0°C and 40°C; maintain proper rack ventilation.
  • Use only qualified QSFP56 optics or DAC cables listed in NVIDIA compatibility guide.
  • Airflow direction: MQM8700-HS2R uses C2P (port‑to‑power) — confirm your rack cooling scheme matches C2P airflow.
  • Power supply must be connected to appropriate AC voltage (100‑240VAC) with grounding.
  • Firmware updates: always backup configuration before upgrading via MLNX‑OS.
  • Weight ~12.5kg with two PSUs — use proper mechanical lift for rack mounting.
About Starsurge Group

Hong Kong Starsurge Group Co., Limited is a technology‑driven provider of network hardware, IT services, and system integration solutions. Founded in 2008, the company serves customers worldwide with products including network switches, NICs, wireless access points, controllers, cabling, and infrastructure equipment. Backed by an experienced sales and technical team, Starsurge supports industries such as government, healthcare, manufacturing, education, finance, and enterprise.

With a customer‑first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions. As an authorized partner for leading networking brands, we deliver global logistics, custom software development, and multilingual support — helping clients build efficient, scalable, and dependable network infrastructure.

Key Facts At a Glance

SHARP™ Technology: Embedded, collective offload
Port Splitting: 40 → 80x 100G (double radix)
Management: CLI/Web/SNMP/JSON, plus optional UFM
Subnet Manager: Up to 2,000 nodes, on‑board
Compatibility Matrix

Adapters: NVIDIA ConnectX‑6, ConnectX‑7, BlueField‑2 / BlueField‑3 InfiniBand
Cables & Optics: QSFP56 DAC (passive up to 3m, active up to 5m), AOC, optical transceivers (SR4, LR4)
Operating Systems: Linux (RHEL 8/9, Ubuntu 20.04/22.04, Rocky Linux), Windows Server with InfiniBand stack
Management Platforms: MLNX-OS, NVIDIA UFM, Prometheus/Grafana via SNMP exporter
Topology Support: Fat Tree, DragonFly+, 2D/3D Torus, SlimFly
Buyer Checklist
  • Confirm airflow direction: MQM8700-HS2R uses C2P (port‑to‑power) — verify rack cooling compatibility.
  • Verify required port speed (200G native or 100G breakout) and cable assembly type.
  • Check power input: dual redundant AC with C13/C14 connectors.
  • Ensure rack depth supports 23.2 inches (standard depth).
  • Plan for Subnet Manager: on‑board SM covers up to 2000 nodes; larger clusters may need additional SM instances.
  • Validate software license requirements for advanced features (UFM optional).
Related Products
  • NVIDIA Quantum QM9700 Series (NDR 400G InfiniBand)
  • NVIDIA ConnectX‑6 VPI Adapter Cards (100Gb/s Dual‑port)
  • NVIDIA BlueField‑3 DPU for infrastructure acceleration
  • Starsurge Custom Rack Integration Kits & QSFP56 Cables (passive/active)
  • NVIDIA UFM Telemetry Platform for large‑scale fabric management