NVIDIA ConnectX-7 MFP7E10-N005 400Gb/s Dual-Port QSFP InfiniBand & Ethernet Adapter NDR, PCIe Gen5
Product details:
| Brand name: | Mellanox |
|---|---|
| Model number: | MFP7E10-N005 (980-9I73V-000005) |
| Document: | MFP7E10-Nxxx.pdf |
Payment & shipping terms:
| Minimum order quantity: | 1 piece |
|---|---|
| Price: | Negotiable |
| Packaging details: | Outer carton |
| Delivery time: | Based on inventory |
| Payment terms: | T/T |
| Supply capability: | Supplied per project / batch |
Detail information:
| Part number: | MFP7E10-N005 (980-9I73V-000005) |
|---|---|
| Cable type: | Multimode fiber cable |
| Fiber type: | OM4, 50/125 µm |
| Length: | 5 m |
| Connectors: | MPO-12 / APC (female) |
| Data rate: | Up to 400 Gbps |
| Highlights: | NVIDIA ConnectX‑7 400Gb/s adapter, dual-port QSFP InfiniBand adapter, PCIe Gen5 Ethernet adapter |
Product description
NVIDIA ConnectX‑7 MFP7E10-N005
400Gb/s NDR InfiniBand & 400GbE Adapter · PCIe Gen5 x16 · Dual-Port QSFP · In-line Security · GPUDirect® · NVMe‑oF · Advanced PTP Timing
- 400Gb/s
- 2 x QSFP · PCIe HHHL
- PCIe Gen5 x16
- IPsec / TLS / MACsec
Hong Kong Starsurge Group Co., Limited
Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. Founded in 2008, the company serves customers worldwide with products including network switches, NICs, wireless access points, controllers, cables, and related networking equipment. Backed by an experienced sales and technical team, Starsurge supports industries such as government, healthcare, manufacturing, education, finance, and enterprise. The company also offers IoT solutions, network management systems, custom software development, multilingual support, and global delivery. With a customer-first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions that help clients build efficient, scalable, and dependable network infrastructure.
Product overview
The NVIDIA ConnectX‑7 MFP7E10-N005 is a high‑performance dual‑port 400Gb/s adapter supporting both InfiniBand (NDR, HDR, EDR) and Ethernet (400GbE, 200GbE, 100GbE, 50GbE, 25GbE, 10GbE). It uses a PCIe Gen5 x16 host interface and includes hardware acceleration for security (inline IPsec/TLS/MACsec), storage (NVMe‑oF, GPUDirect Storage), and networking (ASAP² SDN, RoCE). Designed for the most demanding AI, HPC, and cloud environments, it delivers ultra‑low latency and exceptional throughput while reducing CPU overhead.
Dual‑port 400Gb/s flexibility
Two independent QSFP ports, each capable of 400Gb/s NDR InfiniBand or 400GbE. Supports split configurations and mixed‑protocol operation; the protocol can be selected per port, as in the sketch below.
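As a hedged illustration of per‑port protocol selection: the sketch below shells out to NVIDIA's mlxconfig tool (part of the MFT package) to set each port to InfiniBand or Ethernet. The MST device path and parameter values are assumptions to verify against your system (`mst status`) and the MFT documentation; a sketch, not a validated procedure.

```python
# Hedged sketch: select per-port protocol on a VPI adapter with NVIDIA's
# mlxconfig tool (MFT package). The device path below is a hypothetical
# placeholder; a host reboot is required for the change to take effect.
import subprocess

MST_DEV = "/dev/mst/mt4129_pciconf0"  # assumption: check with `mst status`

def set_port_protocols(ib_port1: bool, ib_port2: bool) -> None:
    """LINK_TYPE_Px: 1 = InfiniBand, 2 = Ethernet (per MFT documentation)."""
    p1 = "1" if ib_port1 else "2"
    p2 = "1" if ib_port2 else "2"
    subprocess.run(
        ["mlxconfig", "-d", MST_DEV, "-y", "set",
         f"LINK_TYPE_P1={p1}", f"LINK_TYPE_P2={p2}"],
        check=True,
    )

# Example: port 1 as NDR InfiniBand, port 2 as 400GbE
set_port_protocols(ib_port1=True, ib_port2=False)
```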
ASAP² software‑defined networking
NVIDIA ASAP² technology offloads overlay networks (VXLAN, GENEVE, NVGRE), connection tracking, flow mirroring, and packet rewriting to the NIC, delivering line‑rate performance with minimal CPU overhead. A minimal offload rule is sketched below.
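The sketch below shows the kind of datapath ASAP²‑class NICs accelerate: a tc flower rule with `skip_sw`, which asks the hardware to match and redirect packets without involving the host CPU. The interface names are hypothetical placeholders, and the adapter must be in switchdev mode with hw-tc-offload enabled; treat this as an illustration under those assumptions.

```python
# Hedged sketch: install a hardware-offloaded redirect rule with tc flower.
# `skip_sw` requires hw-tc-offload on the device; names are placeholders.
import subprocess

def offload_redirect(src_if: str, dst_if: str) -> None:
    # Attach an ingress qdisc, then a flower filter matching all IP traffic
    # and redirecting it in hardware to the destination interface.
    subprocess.run(["tc", "qdisc", "add", "dev", src_if, "ingress"], check=True)
    subprocess.run(
        ["tc", "filter", "add", "dev", src_if, "ingress",
         "protocol", "ip", "flower", "skip_sw",
         "action", "mirred", "egress", "redirect", "dev", dst_if],
        check=True,
    )

offload_redirect("enp1s0f0_0", "enp1s0f0_1")  # hypothetical VF representors
```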
Precision timing & SyncE
IEEE 1588v2 PTP with 12 ns accuracy, G.8273.2 Class C, SyncE (G.8262.1), programmable PPS, and time‑triggered scheduling. Ideal for financial and 5G infrastructures.
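For a quick sanity check of the PTP hardware clock (PHC), the sketch below reads the NIC's clock through the standard Linux dynamic‑clock interface and compares it with system time. `/dev/ptp0` is an assumption; `ethtool -T <iface>` reports the correct PHC index for a given interface.

```python
# Hedged sketch: read the adapter's PTP hardware clock and compare it to
# CLOCK_REALTIME. /dev/ptp0 is a placeholder -- check `ethtool -T <iface>`.
import os, time

fd = os.open("/dev/ptp0", os.O_RDONLY)
clockid = ((~fd) << 3) | 3              # Linux FD_TO_CLOCKID encoding
phc_now = time.clock_gettime(clockid)   # seconds on the NIC's hardware clock
sys_now = time.clock_gettime(time.CLOCK_REALTIME)
print(f"PHC - system offset: {phc_now - sys_now:+.9f} s")
os.close(fd)
```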
Typical deployments
- Large‑scale AI training clusters (LLM, deep learning)
- High‑performance computing (HPC) with InfiniBand fabrics
- Cloud data centers requiring 400GbE and RoCE
- GPU‑accelerated storage (NVMe‑oF, GPUDirect Storage)
- Financial trading with ultra‑low latency and PTP timing
Compatibility
- NVIDIA Quantum / Quantum‑2 InfiniBand switches
- PCIe Gen5/Gen4/Gen3 servers (Intel/AMD)
- Major operating systems and platforms: RHEL, Ubuntu, Windows, VMware ESXi, Kubernetes
- Industry‑standard QSFP112 transceivers and AOC/DAC cables
Technical specifications
| Parameter | Details |
|---|---|
| Model number | MFP7E10-N005 |
| Supported protocols | InfiniBand, Ethernet |
| InfiniBand speeds | NDR 400Gb/s, HDR 200Gb/s, EDR 100Gb/s, FDR, QDR |
| Ethernet speeds | 400GbE, 200GbE, 100GbE, 50GbE, 25GbE, 10GbE |
| Number of ports | 2 x QSFP (QSFP112 compatible) |
| Host interface | PCIe Gen5 x16 (also compatible with Gen4/Gen3) |
| Form factor | PCIe HHHL (half‑height, half‑length) – bracket included |
| Interface technologies | NRZ (10G, 25G), PAM4 (50G, 100G per lane) |
| InfiniBand networking | RDMA, XRC, DCT, GPUDirect RDMA/Storage, adaptive routing, enhanced atomic ops, ODP, UMR, burst buffer offload, SHARP support |
| Ethernet offloads | RoCE, ASAP2 overlay offload (VXLAN, GENEVE, NVGRE), connection tracking, flow mirroring, header rewrite, hierarchical QoS |
| Security acceleration | Inline IPsec/TLS/MACsec (AES‑GCM 128/256), secure boot, flash encryption, device attestation, T10‑DIF offload |
| Storage protocols | NVMe‑oF, NVMe/TCP, GPUDirect Storage, SRP, iSER, NFS over RDMA, SMB Direct |
| Timing & synchronization | IEEE 1588v2 (12 ns accuracy), SyncE (G.8262.1), PPS in/out, time‑triggered scheduling, PTP packet pacing |
| Management | NC‑SI, MCTP over SMBus/PCIe, PLDM (monitor, firmware, FRU, Redfish), SPDM, SPI, JTAG |
| Remote boot | InfiniBand remote boot, iSCSI, UEFI, PXE |
| Operating systems | Linux (RHEL, Ubuntu), Windows, VMware ESXi (SR‑IOV), Kubernetes |
| Warranty | 1 year (extendable, please confirm) |
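After installation, a quick way to confirm that the ports came up at the expected rate and link layer is the standard Linux sysfs tree under `/sys/class/infiniband`, which the kernel populates for both InfiniBand and RoCE devices. A minimal sketch, assuming the mainline RDMA sysfs layout:

```python
# Hedged sketch: list RDMA device ports with their state, rate, and link
# layer, using the kernel's /sys/class/infiniband attributes.
from pathlib import Path

for dev in sorted(Path("/sys/class/infiniband").glob("*")):
    for port in sorted((dev / "ports").glob("*")):
        state = (port / "state").read_text().strip()      # e.g. "4: ACTIVE"
        rate = (port / "rate").read_text().strip()        # e.g. "400 Gb/sec (4X NDR)"
        link = (port / "link_layer").read_text().strip()  # "InfiniBand" or "Ethernet"
        print(f"{dev.name} port {port.name}: {state}, {rate}, {link}")
```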
Key facts
- 2 x 400Gb/s NDR / 400GbE ports
- PCIe Gen5 x16 host interface
- Inline IPsec, TLS, MACsec acceleration
- GPUDirect RDMA & Storage
- NVMe‑oF / NVMe/TCP offload
- Advanced PTP / SyncE (12 ns)
- ASAP² SDN acceleration
- SHARP in‑network computing ready
- HHHL form factor
- RoCE & overlay offload
Compatibility matrix
| Component / Platform | Compatibility |
|---|---|
| NVIDIA Quantum‑2 QM9700 / QM9790 switches | ✅ Full NDR 400Gb/s support |
| NVIDIA Quantum QM8700 (HDR) switches | ✅ 200Gb/s HDR compatible |
| PCIe Gen5 servers (Intel Eagle Stream / AMD Genoa) | ✅ Full Gen5 speed |
| PCIe Gen4 / Gen3 servers | ✅ Backward compatible (reduced speed) |
| GPUDirect & CUDA environments | ✅ Native support with NVIDIA GPUs |
| Major Linux distributions (RHEL 9.x, Ubuntu 22.04+) | ✅ In‑box drivers available |
Selection guide
MFP7E10-N005 is a dual‑port 400Gb/s PCIe Gen5 x16 adapter in HHHL form factor. For other port counts or OCP form factors, refer to the ConnectX‑7 family:
- Single‑port PCIe (MCX75310AAS)
- Dual‑port OCP 3.0 (MFP7E10‑N005 OCP variant)
- Quad‑port 100Gb/s configurations
Buyer checklist
- ✔ Confirm PCIe slot availability: x16 mechanical, Gen5 capable recommended (a verification sketch follows this list).
- ✔ Check airflow and cooling: high‑power adapters may need active cooling.
- ✔ Select correct transceivers: 400G SR4/DR4/FR4 or AOC cables.
- ✔ Verify OS/driver support (in‑box drivers for most distributions).
- ✔ For security offloads, ensure application support for IPsec/TLS.
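For the first checklist item, a minimal sketch that confirms the slot actually trained at Gen5 x16, using the kernel's PCI sysfs attributes. The bus/device/function below is a hypothetical placeholder (find yours with `lspci`), and the exact speed string can vary slightly between kernel versions:

```python
# Hedged sketch: verify negotiated PCIe link speed and width via sysfs.
from pathlib import Path

BDF = "0000:17:00.0"  # placeholder bus/device/function of the adapter
dev = Path("/sys/bus/pci/devices") / BDF
speed = (dev / "current_link_speed").read_text().strip()  # "32.0 GT/s PCIe" = Gen5
width = (dev / "current_link_width").read_text().strip()  # "16" = x16
print(f"Link trained at {speed}, x{width}")
if "32.0 GT/s" not in speed or width != "16":
    print("Warning: not at full Gen5 x16; check slot wiring and BIOS settings.")
```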
Why choose ConnectX‑7
ConnectX‑7 pairs 400Gb/s performance with a PCIe Gen5 host interface. Integrated inline security saves CPU cycles and accelerates encrypted traffic. GPUDirect and NVMe‑oF offloads maximize data throughput for AI and storage workloads. Advanced timing supports 5G and financial‑services infrastructure.
Service & support
1‑year limited hardware warranty (extendable). Technical support from Hong Kong Starsurge Group. Firmware and driver updates available. Please contact our sales team for volume pricing and extended support options.
Important notes & precautions
- Ensure adequate cooling: high‑speed adapters generate more heat; server airflow must meet requirements.
- Use only qualified optics/cables to avoid link instability.
- PCIe Gen5 requires compatible motherboard and BIOS settings.
- Security features may require specific firmware versions; confirm with technical support.
- Specifications are typical and subject to change; confirm at the time of order.
Related products
- NVIDIA Quantum‑2 MQM9700 Switch
- NVIDIA ConnectX‑7 MCX75310AAS (single‑port)
- NVIDIA BlueField‑3 DPU
- MCP1600 OSFP/AOC cables (400G)
Related guides / comparisons
- ConnectX‑7 vs. ConnectX‑6: performance comparison
- 400G NDR InfiniBand deployment guide
- Inline IPsec/TLS configuration white paper
- GPUDirect Storage best practices