NVIDIA ConnectX-6 MCX653105A-HDAT 200Gb/s Single-Port InfiniBand Smart Adapter with Hardware Encryption & PCIe 4.0
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCX653105A-HDAT |
| Document: | connectx-6-infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 piece |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton box |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Supplied per project / batch |
Detailed Information
| Product Status: | In Stock | Application: | Server |
|---|---|---|---|
| Condition: | New and original | Type: | Wired |
| Max Speed: | Up to 200Gb/s | Ethernet Connector: | QSFP56 |
| Model: | MCX653105A-HDAT | | |
| Highlight: | NVIDIA ConnectX-6 InfiniBand adapter, 200Gb/s PCIe 4.0 network card, InfiniBand adapter with hardware encryption | | |
Product Description
200Gb/s Single-Port HDR Smart Adapter with In-Network Computing & Hardware Encryption
The NVIDIA ConnectX-6 MCX653105A-HDAT delivers full 200Gb/s throughput on a single QSFP56 port, combining ultra-low latency, hardware offloads, and block-level XTS-AES encryption. Designed for HPC, AI clusters, and NVMe-oF storage, this PCIe 4.0 x16 adapter offloads collective operations, RDMA, and encryption from the CPU, maximizing application performance and scalability in demanding data center environments.
The MCX653105A-HDAT belongs to the NVIDIA ConnectX-6 InfiniBand adapter family, engineered for extreme performance in modern data centers. This single-port QSFP56 card supports up to 200Gb/s (HDR InfiniBand or 200GbE) with full hardware acceleration for RDMA, reliable transport, and In-Network Computing. By integrating collective operations offloads, MPI tag matching, and NVMe over Fabrics acceleration, the adapter significantly reduces CPU overhead while boosting fabric efficiency. Its built-in AES-XTS block-level encryption ensures data security without performance penalty, making it ideal for financial services, government research, and hyperscale cloud deployments.
- Up to 200Gb/s (HDR InfiniBand / 200GbE) on a single QSFP56 port
- Up to 215 million messages/sec
- Block-level XTS-AES 256/512-bit encryption, FIPS compliant
- Collective offloads, NVMe-oF target/initiator offloads, burst buffer
- PCIe Gen 4.0 / 3.0 x16 (backward compatible)
- SR-IOV (1K VFs), ASAP2, Open vSwitch offload, overlay tunnels
- RoCE, XRC, DCT, On-Demand Paging, GPUDirect RDMA support
- Stand-up PCIe low-profile form factor; tall bracket pre-installed, short bracket included
NVIDIA ConnectX-6 integrates In-Network Computing acceleration engines that offload critical datacenter operations from the host CPU. The MCX653105A-HDAT supports hardware-based reliable transport, adaptive routing, and congestion control, ensuring predictable performance in large-scale fabrics. Remote Direct Memory Access (RDMA) enables zero-copy data transfers, bypassing the OS kernel. With NVIDIA GPUDirect RDMA, GPU memory communicates directly with the network adapter, slashing latency for AI training and HPC simulations. Built-in block-level XTS-AES encryption (256/512-bit key) ensures data-in-transit and data-at-rest security with no CPU overhead, and the adapter is designed to meet FIPS 140-2 compliance requirements.
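To make the software side concrete, here is a minimal sketch (not vendor reference code) using the standard libibverbs API shipped with rdma-core / MLNX_OFED: it enumerates RDMA devices and prints each port's negotiated state, width, and speed. The port number and the build command in the comments are assumptions, not datasheet specifics.

```c
/* Minimal sketch: enumerate RDMA devices with libibverbs and print the
 * negotiated link parameters. Port number 1 is an assumption (this is a
 * single-port card). Build (assumed filename): gcc query_hca.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found (is the driver loaded?)\n");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* A healthy HDR 4x link reports state=4 (ACTIVE),
             * active_width=2 (4x), active_speed=64 (50Gb/s per lane). */
            printf("%s: state=%d width=%d speed=%d\n",
                   ibv_get_device_name(devs[i]),
                   port.state, port.active_width, port.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```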
- High Performance Computing (HPC): Large-scale simulations, weather forecasting, and computational fluid dynamics requiring 200Gb/s low-latency interconnect.
- AI & Deep Learning Clusters: Distributed training with GPUDirect RDMA, maximizing throughput between GPU nodes.
- NVMe-oF Storage Systems: High-performance disaggregated storage with full target/initiator offloads, reducing CPU utilization.
- Hyperscale & Cloud Data Centers: Virtualized environments with SR-IOV, overlay networks, and hardware-accelerated encryption.
- Financial Trading Platforms: Ultra-low latency deterministic networking for algorithmic trading.
The ConnectX-6 MCX653105A-HDAT interoperates seamlessly with NVIDIA Quantum InfiniBand switches (HDR 200Gb/s), standard 200GbE switches, and a wide range of server platforms. It supports major operating systems and virtualization stacks, ensuring flexible integration into existing infrastructure.
| Parameter | Specification |
|---|---|
| Product Model | MCX653105A-HDAT |
| Data Rate | 200Gb/s, 100Gb/s, 50Gb/s, 40Gb/s, 25Gb/s, 10Gb/s, 1Gb/s (InfiniBand and Ethernet) |
| Ports & Connector | 1x QSFP56 (supports passive copper, active optical, and AOC cables) |
| Host Interface | PCIe Gen 4.0 x16 (also compatible with Gen 3.0, 2.0; supports x8, x4, x2, x1 configurations) |
| Latency | Sub-microsecond (typical <0.7µs) |
| Message Rate | Up to 215 million messages per second |
| Encryption | XTS-AES 256/512-bit hardware offload, FIPS 140-2 ready |
| Form Factor | PCIe low-profile stand-up (tall bracket pre-installed, short bracket accessory included) |
| Dimensions (without bracket) | 167.65mm x 68.90mm |
| Power Consumption | Typical 22W – 24W (depends on link utilization) |
| Virtualization | SR-IOV (up to 1K Virtual Functions), VMware NetQueue, NPAR, ASAP2 flow offload |
| Management & Monitoring | NC-SI, MCTP over PCIe/SMBus, PLDM (DSP0248, DSP0267), I2C, SPI flash |
| Remote Boot | InfiniBand, iSCSI, PXE, UEFI |
| Operating Systems | RHEL, SLES, Ubuntu, Windows Server, FreeBSD, VMware vSphere, OpenFabrics Enterprise Distribution (OFED), WinOF-2 |
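A practical note on the Host Interface row: in a Gen3 slot the x16 host link provides roughly 126Gb/s of usable bandwidth, below the 200Gb/s line rate, which is why Gen4 is recommended for full throughput. The Linux sketch below reads the negotiated PCIe link speed and width from sysfs; the PCI address is a placeholder (find the adapter's address with lspci).

```c
/* Minimal sketch for Linux: print the negotiated PCIe link speed and
 * width from sysfs. The BDF below is a placeholder, not this card's
 * actual address. Full 200Gb/s needs 16 GT/s (Gen4) at x16. */
#include <stdio.h>

static void show(const char *dev, const char *attr)
{
    char path[160], buf[64];
    snprintf(path, sizeof(path), "%s/%s", dev, attr);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", attr, buf);   /* sysfs values end with '\n' */
    fclose(f);
}

int main(void)
{
    const char *dev = "/sys/bus/pci/devices/0000:3b:00.0";  /* placeholder BDF */
    show(dev, "current_link_speed");   /* e.g. "16.0 GT/s PCIe" on Gen4 */
    show(dev, "current_link_width");   /* e.g. "16" */
    return 0;
}
```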
| Ordering Part Number (OPN) | Ports | Max Speed | Host Interface | Key Features |
|---|---|---|---|---|
| MCX653105A-HDAT | 1x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | Single-port, hardware crypto, full ConnectX-6 offloads, ideal for high-density servers |
| MCX653106A-HDAT | 2x QSFP56 | 200Gb/s (dual-port) | PCIe 3.0/4.0 x16 | Dual-port 200Gb/s with crypto, maximum bandwidth density |
| MCX653105A-ECAT | 1x QSFP56 | 100Gb/s | PCIe 3.0/4.0 x16 | Single-port 100Gb/s, cost-optimized for lower speed requirements |
| MCX653106A-ECAT | 2x QSFP56 | 100Gb/s (dual-port) | PCIe 3.0/4.0 x16 | Dual-port 100Gb/s, virtualization & storage offloads |
| MCX653436A-HDAT (OCP 3.0) | 2x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | OCP 3.0 small form factor, dual-port 200Gb/s |
- Full 200Gb/s Bandwidth: Single-port design delivers maximum throughput for compute nodes where high density per port is prioritized.
- Hardware Security Built-in: XTS-AES block encryption without CPU overhead, meeting FIPS compliance for regulated industries.
- Accelerated Storage & AI: NVMe-oF offloads and GPUDirect RDMA significantly boost performance for AI training and software-defined storage.
- Future-Ready PCIe 4.0: Doubles interconnect bandwidth to the host, eliminating bottlenecks for 200Gb/s networking.
- Simplified Management: Unified driver stack (OFED, WinOF-2) and broad OS compatibility reduce deployment complexity.
Hong Kong Starsurge Group provides expert technical support, warranty coverage, and global RMA services for all NVIDIA ConnectX adapters. Our network specialists assist with driver installation, performance tuning, and fabric integration. We offer flexible pricing, bulk quotes for data center projects, and fast worldwide shipping. For customized solutions, contact our sales team to discuss lead times and volume discounts.
• Verify the PCIe slot provides sufficient power (75W from the slot; the adapter draws a typical 22-24W).
• This standard air-cooled card is not compatible with cold-plate (liquid-cooled) platforms; contact Starsurge for liquid-cooled SKU requirements.
• Always use QSFP56-rated cables or modules to achieve 200Gb/s performance.
• Confirm driver and firmware version compatibility with your OS and kernel before deployment (a quick check is sketched below).
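As a minimal sketch of that last check on Linux, the program below prints the firmware version and board ID that the mlx5 driver exposes in sysfs, for comparison against the MLNX_OFED release notes. The device name mlx5_0 is an assumption; list /sys/class/infiniband/ to find yours.

```c
/* Minimal sketch: read the firmware version and board PSID the driver
 * exposes in sysfs. "mlx5_0" is an assumed device name. */
#include <stdio.h>

static void print_attr(const char *attr)
{
    char path[128], buf[64];
    snprintf(path, sizeof(path), "/sys/class/infiniband/mlx5_0/%s", attr);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", attr, buf);   /* sysfs values are newline-terminated */
    fclose(f);
}

int main(void)
{
    print_attr("fw_ver");     /* firmware version, e.g. "20.31.1014" */
    print_attr("board_id");   /* PSID string used to select firmware images */
    return 0;
}
```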
Since 2008, Hong Kong Starsurge Group Co., Limited has been a trusted provider of enterprise networking hardware, system integration, and IT services. As an authorized partner for NVIDIA networking solutions, Starsurge delivers genuine ConnectX adapters, switches, and cables to government, finance, healthcare, education, and hyperscale clients worldwide. Our experienced sales and technical teams ensure seamless deployment from pre-sales architecture to post-sales support, with a commitment to reliable quality and responsive service.
Global delivery · Multilingual support · Tailored OEM & integration services
| Component / Ecosystem | Support Status | Remarks |
|---|---|---|
| NVIDIA Quantum HDR InfiniBand Switches | ✓ Fully supported | 200Gb/s fabric, adaptive routing |
| 200GbE Switches (IEEE 802.3) | ✓ Compatible | Requires FEC modes per switch specification |
| GPUDirect RDMA | ✓ Yes | NVIDIA GPU series (Volta, Ampere, Hopper, etc.) |
| VMware vSphere 7.0/8.0 | ✓ Certified | Native drivers, SR-IOV support |
| Linux (RHEL, Ubuntu, SLES) | ✓ Full support | MLNX_OFED, inbox drivers available |
| Windows Server 2019/2022 | ✓ Supported | WinOF-2 driver package |
- [ ] Confirm required link speed: 200Gb/s single-port matches your node bandwidth requirements (see the sketch after this list).
- [ ] Check server PCIe slot: x16 physical slot, Gen 4 recommended for full 200Gb/s performance.
- [ ] Select appropriate QSFP56 cables or transceivers (passive copper up to 5m, AOC, or optics).
- [ ] Verify OS driver support (OFED version or inbox).
- [ ] Ensure encryption compliance requirements are met (XTS-AES, FIPS).
- [ ] Evaluate environmental cooling: high-speed adapters may require directed airflow.
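For the link-speed item above, a minimal post-installation check, assuming device mlx5_0 and port 1: the program exits 0 only if the kernel reports a 200Gb/s link.

```c
/* Minimal sketch tied to the checklist above: verify the active link
 * rate via sysfs. Device name and port number are assumptions. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char rate[64];
    FILE *f = fopen("/sys/class/infiniband/mlx5_0/ports/1/rate", "r");
    if (!f) {
        perror("rate");    /* driver not loaded, or a different device name */
        return 2;
    }
    if (!fgets(rate, sizeof(rate), f)) {
        fclose(f);
        return 2;
    }
    fclose(f);

    printf("link rate: %s", rate);   /* e.g. "200 Gb/sec (4X HDR)" */
    return strncmp(rate, "200", 3) == 0 ? 0 : 1;
}
```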