Intelligent RDMA-enabled, single- and dual-port network adapter with advanced application offload capabilities for Web 2.0, Cloud, Storage, and Telco platforms
ConnectX-5 Ethernet adapter cards provide high-performance, flexible solutions with up to two ports of 100GbE connectivity, 750ns latency, up to 200 million messages per second (Mpps), and a record-setting 197 Mpps when running the open-source Data Plane Development Kit (DPDK) over PCIe Gen 4.0. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers, driving extremely high packet rates and throughput with reduced CPU resource consumption and thus boosting data center infrastructure efficiency.
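As a rough illustration of the kernel-bypass, poll-mode model behind the DPDK packet-rate figure above, the sketch below brings up a single port with one RX/TX queue pair through DPDK's ethdev API and echoes received traffic back out. The port index, ring sizes, and buffer-pool sizing are illustrative assumptions, not values taken from this document.

```c
/*
 * Minimal DPDK sketch (illustrative): configure one Ethernet port with a
 * single RX/TX queue pair and run a simple poll-mode echo loop.
 */
#include <stdio.h>
#include <stdint.h>

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define NUM_MBUFS   8191   /* assumed pool size */
#define MBUF_CACHE  250
#define RING_SIZE   1024
#define BURST_SIZE  32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, device probe). */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    if (rte_eth_dev_count_avail() == 0) {
        fprintf(stderr, "no Ethernet ports found\n");
        return 1;
    }

    const uint16_t port_id = 0; /* assume the first probed port is the NIC of interest */

    /* Packet-buffer pool backing the RX queue. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
            NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());
    if (pool == NULL) {
        fprintf(stderr, "cannot create mbuf pool\n");
        return 1;
    }

    /* One RX and one TX queue with the device's default configuration. */
    struct rte_eth_conf port_conf = {0};
    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port_id, 0, RING_SIZE,
                               rte_eth_dev_socket_id(port_id), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port_id, 0, RING_SIZE,
                               rte_eth_dev_socket_id(port_id), NULL) != 0 ||
        rte_eth_dev_start(port_id) != 0) {
        fprintf(stderr, "port %u setup failed\n", port_id);
        return 1;
    }

    /* Poll-mode loop: receive a burst and transmit it back on the same port. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

        /* Drop whatever the TX queue could not accept. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```

On a ConnectX-5 port this datapath is served by DPDK's mlx5 poll-mode driver; a real application would add multiple queues and worker cores to scale toward the quoted packet rates.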
ConnectX-5 adapter cards are available for PCIe Gen 3.0 and Gen 4.0 servers and provide support for 1, 10, 25, 40, 50 and 100 GbE speeds in stand-up PCIe cards, OCP 2.0, and OCP 3.0 form factors. ConnectX-5 cards also offer advanced Mellanox Multi-Host® and Mellanox Socket Direct® technologies.
Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.
Supported vSwitch/vRouter offload functions include:
- Overlay network (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation and decapsulation.
- Stateless offloads of inner packets and packet-header rewrite, enabling NAT functionality and more.
- Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.
- SR-IOV technology, providing dedicated adapter resources, guaranteed isolation and protection for virtual machines (VMs) within the server.
- Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. Full data-path operation offloads, hairpin hardware capability, and service chaining enable data to be handled by the virtual appliance with minimal CPU utilization.
Cloud and Web 2.0 customers developing platforms on Software Defined Network (SDN) environments leverage their servers' operating-system virtual-switching capabilities to achieve maximum flexibility. Open vSwitch (OvS) is an example of a virtual switch that allows virtual machines to communicate with each other and with the outside world. The virtual switch or virtual router is a software-based solution that traditionally resides in the hypervisor, where switching is based on twelve-tuple matching on flows, and it is therefore CPU-intensive. This can negatively affect system performance and prevent full utilization of the available bandwidth.
Mellanox ASAP² (Accelerated Switching and Packet Processing®) technology enables offloading the vSwitch/vRouter by handling the data plane in the NIC hardware without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.
Additionally, ConnectX-5's intelligent, flexible pipeline capabilities, including a flexible parser and flexible match-action tables, are programmable, enabling hardware offloads for future protocols.
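One common way applications express such match-action rules is DPDK's generic rte_flow API, which the driver maps onto the NIC's parser and match-action tables. The sketch below is illustrative only: the matched destination address and target queue are arbitrary, and the port is assumed to be already configured and started with enough RX queues.

```c
/*
 * Sketch of programming one hardware match-action rule through DPDK's
 * generic rte_flow API: steer IPv4 packets destined to 192.168.1.10
 * (an arbitrary example address) into RX queue 1.
 */
#include <stdio.h>

#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_ip.h>

static struct rte_flow *install_steering_rule(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Match: any Ethernet frame carrying IPv4 with the given destination. */
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 10)),
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Action: deliver matching packets to RX queue 1. */
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error error;
    struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
                                            actions, &error);
    if (flow == NULL)
        fprintf(stderr, "flow rule rejected: %s\n",
                error.message ? error.message : "unknown cause");
    return flow;
}
```

Because the rule executes in the NIC pipeline, matching packets are classified and steered before any CPU cycles are spent on them; OvS-DPDK hardware offload relies on the same mechanism, translating datapath flows into rte_flow rules.

Additional ConnectX-5 features and offloads include: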
- Tag matching and rendezvous offloads
- Adaptive routing on reliable transport
- Burst buffer offloads for background checkpointing
- NVMe over Fabric offloads
- Backend switch elimination by host chaining
- Embedded PCIe switch
- Enhanced vSwitch/vRouter offloads
- Flexible pipeline
- RoCE for overlay networks
- PCIe Gen 4.0 support