WEKA networking

Explore network technologies in WEKA, including DPDK, SR-IOV, CPU-optimized networking, UDP mode, high availability, and RDMA/GPUDirect Storage, with configuration guidelines.

Overview

The WEKA system supports the following types of networking technologies:

  • InfiniBand (IB)

  • Ethernet

The networking infrastructure dictates the choice between the two. If a WEKA cluster is connected to both infrastructures, it is possible to connect WEKA clients from both networks to the same cluster.

The WEKA system networking can be configured as performance-optimized or CPU-optimized. In performance-optimized networking, the CPU cores are dedicated to WEKA, and the networking uses DPDK. In CPU-optimized networking, the CPU cores are not dedicated to WEKA, and the networking uses DPDK (when supported by the NIC drivers) or in-kernel (UDP mode).

Performance-optimized networking (DPDK)

For performance-optimized networking, the WEKA system does not use standard kernel-based TCP/IP services. Instead, it relies on a proprietary infrastructure based on the following:

  • Using DPDK to map the network device into user space and use it with zero-copy access and without any context switches. Bypassing the kernel stack eliminates the consumption of kernel resources for networking operations. This applies to backends and clients and lets the WEKA system saturate network links (including, for example, 200 Gbps or 400 Gbps).

  • Implementing a proprietary WEKA protocol over UDP. This means the underlying network may involve routing between subnets or any other networking infrastructure that supports UDP.

The use of DPDK delivers operations with extremely low latency and high throughput. Low latency is achieved by bypassing the kernel and sending and receiving packets directly to and from the NIC. High throughput is achieved because multiple cores in the same server can work in parallel without a common bottleneck.

Before proceeding, it is important to understand several key terms used in this section, namely DPDK and SR-IOV.

DPDK

Data Plane Development Kit (DPDK) is a set of libraries and network drivers for highly efficient, low-latency packet processing. This is achieved through several techniques, such as kernel TCP/IP bypass, NUMA locality, multi-core processing, and device access via polling to eliminate the performance overhead of interrupt processing. On top of DPDK, the WEKA networking stack ensures transmission reliability, handles retransmission, and controls congestion.

DPDK implementations are available from several sources. OS vendors like Red Hat and Ubuntu provide DPDK implementations through distribution channels. Mellanox OpenFabrics Enterprise Distribution for Linux (Mellanox OFED), a suite of libraries, tools, and drivers supporting Mellanox NICs, offers its own DPDK implementation.

The WEKA system relies on the DPDK implementation provided by Mellanox OFED on servers equipped with Mellanox NICs. For servers equipped with Intel NICs, DPDK support is provided through the Intel driver for the card.
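
For example, before planning a DPDK deployment, you can confirm which driver serves a given NIC. The following is a minimal shell sketch; the interface name is a placeholder:

# Report the driver bound to the interface (ens1f0 is an example name).
# Mellanox NICs typically report mlx5_core (served by Mellanox OFED),
# while Intel NICs report their native driver, such as i40e or ice.
ethtool -i ens1f0 | grep -E '^(driver|version|firmware-version):'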

SR-IOV

Single Root I/O Virtualization (SR-IOV) is an extension to the PCI Express (PCIe) specification that enables PCIe virtualization. It allows a PCIe device, such as a network adapter, to appear as multiple PCIe devices or functions.

There are two function categories:

  • Physical Function (PF): A PF is a full-fledged PCIe function that can be configured and managed like any other PCIe device.

  • Virtual Function (VF): A VF is a virtualized instance of the same PCIe device, created by sending the appropriate commands to the device's PF.

Typically, there are many VFs, but only one PF per physical PCIe device. Once a new VF is created, it can be mapped by an object such as a virtual machine, a container, or, in the WEKA system, a 'compute' process.

To take advantage of SR-IOV technology, both software and hardware support are required. The Linux kernel provides the software support; the server BIOS and the network adapter provide the hardware support (by default, SR-IOV is disabled and must be enabled before installing WEKA).
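
The following is a minimal shell sketch of how VFs can be inspected and created through sysfs once SR-IOV is enabled in the BIOS; the interface name and VF count are placeholders, and the exact procedure for a WEKA deployment should follow the installation documentation:

# Maximum number of VFs the adapter can expose (ens1f0 is an example name).
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Create 8 VFs on the physical function (8 is an arbitrary example; requires root).
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs

# Confirm the new virtual functions appear on the PCIe bus.
lspci | grep -i "virtual function"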

CPU-optimized networking

For CPU-optimized networking, WEKA can yield CPU resources to other applications. This is useful when the extra CPU cores are needed for other purposes. However, the lack of CPU resources dedicated to the WEKA system comes at the expense of reduced overall performance.

DPDK without core dedication

For CPU-optimized networking, when mounting filesystems using stateless clients, it is possible to use DPDK networking without dedicating cores. This mode is recommended when the NIC drivers support it. In this mode, DPDK networking uses RX interrupts instead of dedicated polling cores.

This mode is supported in most NIC drivers. Consult https://doc.dpdk.org/guides/nics/overview.html for compatibility.

AWS (ENA drivers) does not support this mode. Hence, for CPU-optimized networking in AWS, use UDP mode.
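
On a stateless client with a supported NIC driver, this mode is typically requested at mount time. The following is a hedged sketch; the backend name, filesystem name, device, and option names are placeholders and should be verified against the mount command reference for your WEKA version:

# Stateless client mount using DPDK networking without core dedication (sketch).
# num_cores=0 requests interrupt-driven (RX interrupt) operation;
# backend-1, default, and ens1f0 are example names.
mount -t wekafs -o net=ens1f0,num_cores=0 backend-1/default /mnt/weka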

UDP mode

WEKA can also use in-kernel processing and UDP as the transport protocol. This operation mode is commonly referred to as UDP mode.

UDP mode is compatible with older platforms that lack support for kernel-bypass technologies (DPDK) or virtualization (SR-IOV) because it uses in-kernel processing. This includes legacy hardware, such as the Mellanox ConnectX-3 (CX3) family of NICs.
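
For stateless clients, UDP mode is typically selected at mount time. The following is a hedged sketch; the backend and filesystem names are placeholders, and the option should be verified against the mount command reference for your WEKA version:

# Stateless client mount forcing in-kernel (UDP mode) networking - sketch.
# backend-1 and default are example names.
mount -t wekafs -o net=udp backend-1/default /mnt/weka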

Typical WEKA configuration

Backend servers

In a typical WEKA system configuration, the WEKA backend servers access the network in two different ways:

  • Standard TCP/UDP network for management and control operations.

  • High-performance network for data-path traffic.

To run both functions on the same physical interface, contact the Customer Success Team.

The high-performance network used to connect all the backend servers must be DPDK-based. This internal WEKA network also requires a separate IP address space. For details, see Network planning and Configure the networking.

The WEKA system maintains a separate ARP database for its IP addresses and virtual functions and does not use the kernel or operating system ARP services.

Clients

While WEKA backend servers must include DPDK and SR-IOV, WEKA clients in application servers have the flexibility to use either DPDK or UDP modes. DPDK mode is the preferred choice for newer, high-performing platforms that support it. UDP mode is available for clients without SR-IOV or DPDK support or when there is no need for low-latency and high-throughput I/O.

Configuration guidelines

  • DPDK backends and clients using NICs supporting shared networking:

    • Require one IP address per client for both management and data plane.

    • Enabling SR-IOV is not required.

  • DPDK backends and clients using NICs supporting dedicated networking:

    • IP address for management: One per NIC (configured before WEKA installation).

    • IP address for data plane: One per WEKA core in each server (applied during cluster initialization).

    • Virtual Functions (VFs):

      • Ensure the device supports a maximum number of VFs greater than the number of physical cores on the server (a quick check is shown after this list).

      • Set the number of VFs to match the cores you intend to dedicate to WEKA.

      • Note that some BIOS configurations may be necessary.

    • SR-IOV: Enabled in BIOS.

  • UDP clients:

    • Use a shared networking IP address for all purposes.
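
A quick way to check the VF guideline above is to compare the adapter's maximum VF count with the number of physical cores on the server. This is a minimal shell sketch; the interface name is a placeholder:

# Maximum number of VFs the adapter can expose (ens1f0 is an example name).
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Physical core count: cores per socket multiplied by the number of sockets.
lscpu | grep -E 'Core\(s\) per socket|Socket\(s\)'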

High Availability

To ensure high availability (HA), configure the WEKA system to eliminate any single point of failure (SPOF). This setup requires at least two network switches, with each server connected to both switches for redundancy.

Adhere to the following guidelines to achieve high availability:

  • Single fabric application: High availability applies only within a single type of network fabric, such as Ethernet or InfiniBand. You can achieve high availability for each fabric type independently.

  • Mixed-mode cluster requirements: In a mixed-mode cluster, all WEKA backends require active connections to both fabric types to participate in cluster operations. Achieving high availability across Ethernet and InfiniBand requires each server to have at least two links of each type.

  • Server link configuration: Server high availability is achieved by configuring either two independent network interfaces per server or by using Link Aggregation Control Protocol (LACP) on Ethernet (mode 4). Note: LACP is supported between ports on a single Mellanox NIC and is not supported when using VFs (virtual functions).

  • Non-LACP configuration: In non-LACP configurations, the WEKA software uses both network interfaces to enhance availability and increase bandwidth.

  • Failover and load balancing: High availability provides failover and failback for both Ethernet and InfiniBand connections, ensuring reliability and load balancing.

  • IP addressing requirements: To configure high availability without LACP, ensure that each backend container has correctly assigned IP addresses for both the management and data planes on each network interface.

  • Traffic optimization: To optimize network traffic, label the system to prioritize data paths between servers on the same switch, reducing reliance on inter-switch links (ISL) and other paths.

  • Labeling for congestion reduction: Use the label parameter with the weka cluster container net add command to assign switch and port labels, minimizing congestion and enhancing availability (see the example after this list).
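
The following is a hedged sketch of the labeling step; the container ID, device names, addressing, and label values are placeholders, and the exact parameter syntax should be confirmed against the CLI reference for your WEKA version:

# Assign switch labels to the data-plane interfaces of container 0 (sketch).
# Device names, IP addressing, and label values are illustrative only.
weka cluster container net add 0 eth1 --ips 10.0.3.10 --netmask 24 --label switchA
weka cluster container net add 0 eth2 --ips 10.0.4.10 --netmask 24 --label switchB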

RDMA and GPUDirect Storage

GPUDirect Storage establishes a direct data path between storage and GPU memory, bypassing unnecessary data copies through the CPU's memory. This approach allows a Direct Memory Access (DMA) engine near the NIC or storage to transfer data directly to or from GPU memory without involving the CPU or GPU.

When RDMA and GPUDirect Storage are enabled, the WEKA system automatically uses the RDMA data path and GPUDirect Storage in supported environments. The system dynamically detects when RDMA is available, both in UDP and DPDK modes, and applies it to workloads that can benefit from RDMA (typically for I/O sizes of 32KB or larger for reads and 256KB or larger for writes).

By leveraging RDMA and GPUDirect Storage, you can achieve enhanced performance. A UDP client, which doesn't require dedicating a core to the WEKA system, can deliver significantly higher performance. Additionally, a DPDK client can experience an extra performance boost, or you can assign fewer cores to the WEKA system while maintaining the same level of performance in DPDK mode.

Requirements and considerations for enabling RDMA and GPUDirect Storage

To enable RDMA and GPUDirect Storage technology, ensure the following requirements are met:

  • Cluster requirements

    • RDMA networking: All servers in the cluster must support RDMA networking.

  • Client requirements

    • GPUDirect Storage: The InfiniBand (IB) interfaces added to the NVIDIA GPUDirect configuration must support RDMA.

    • RDMA: All InfiniBand Host Channel Adapters (HCAs) used by WEKA must support RDMA networking.

  • Encrypted filesystems

    • RDMA and GPUDirect Storage are not utilized for encrypted filesystems. In these cases, the system reverts to standard I/O operations without RDMA or GPUDirect Storage.

  • HCA requirements for RDMA networking: An HCA is considered to support RDMA networking if the following conditions are met (a quick way to inspect the adapters is shown after this list):

    • For GPUDirect Storage: The network must be InfiniBand. While using an Ethernet network may be possible, this configuration is not supported.
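
As a hedged way to inspect the adapters, the standard RDMA userspace tools report each HCA and its link layer; ibv_devinfo ships with rdma-core/OFED and is not WEKA-specific:

# List RDMA devices and the link layer of each port.
# For GPUDirect Storage, the relevant ports should report InfiniBand
# rather than Ethernet as the link layer.
ibv_devinfo | grep -E 'hca_id|link_layer'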

Installation notes

  • GPUDirect Storage: Install the OFED with the --upstream-libs and --dpdk options (see the sketch after this list).

  • Kernel bypass: GPUDirect Storage bypasses the kernel and does not use the page cache. However, standard RDMA clients still use the page cache.
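
As an illustration of the first note, the bundled MLNX_OFED installer is typically invoked with those options. This is a sketch; the installer path and any additional flags depend on the OFED package and distribution and should be verified against the NVIDIA/Mellanox documentation:

# Install Mellanox OFED with the options required for GPUDirect Storage (sketch).
# Run from the extracted MLNX_OFED package directory; additional flags may be
# required depending on the distribution and kernel.
./mlnxofedinstall --upstream-libs --dpdk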

Unsupported configuration

  • Mixed networking clusters: RDMA and GPUDirect Storage are not supported in clusters using a mix of InfiniBand and Ethernet networking.

Verification

  • To verify whether RDMA is used, run the weka cluster processes command.

Example:

# weka cluster processes
PROCESS ID  HOSTNAME  CONTAINER   IPS         STATUS  ROLES       NETWORK      CPU  MEMORY   UPTIME
0           weka146   default     10.0.1.146  UP      MANAGEMENT  UDP                        16d 20:07:42h
1           weka146   default     10.0.1.146  UP      FRONTEND    DPDK / RDMA  1    1.47 GB  16d 23:29:00h
2           weka146   default     10.0.3.146  UP      COMPUTE     DPDK / RDMA  12   6.45 GB  16d 23:29:00h
3           weka146   default     10.0.1.146  UP      COMPUTE     DPDK / RDMA  2    6.45 GB  16d 23:29:00h
4           weka146   default     10.0.3.146  UP      COMPUTE     DPDK / RDMA  13   6.45 GB  16d 23:29:00h
5           weka146   default     10.0.1.146  UP      COMPUTE     DPDK / RDMA  3    6.45 GB  16d 22:28:58h
6           weka146   default     10.0.3.146  UP      COMPUTE     DPDK / RDMA  14   6.45 GB  16d 23:29:00h
7           weka146   default     10.0.3.146  UP      DRIVES      DPDK / RDMA  18   1.49 GB  16d 23:29:00h
8           weka146   default     10.0.1.146  UP      DRIVES      DPDK / RDMA  8    1.49 GB  16d 23:29:00h
9           weka146   default     10.0.3.146  UP      DRIVES      DPDK / RDMA  19   1.49 GB  16d 23:29:00h
10          weka146   default     10.0.1.146  UP      DRIVES      DPDK / RDMA  9    1.49 GB  16d 23:29:00h
11          weka146   default     10.0.3.146  UP      DRIVES      DPDK / RDMA  20   1.49 GB  16d 23:29:07h
12          weka147   default     10.0.1.147  UP      MANAGEMENT  UDP                        16d 22:29:02h
13          weka147   default     10.0.1.147  UP      FRONTEND    DPDK / RDMA  1    1.47 GB  16d 23:29:00h
14          weka147   default     10.0.3.147  UP      COMPUTE     DPDK / RDMA  12   6.45 GB  16d 23:29:00h
15          weka147   default     10.0.1.147  UP      COMPUTE     DPDK / RDMA  2    6.45 GB  16d 23:29:00h
16          weka147   default     10.0.3.147  UP      COMPUTE     DPDK / RDMA  13   6.45 GB  16d 23:29:00h
17          weka147   default     10.0.1.147  UP      COMPUTE     DPDK / RDMA  3    6.45 GB  16d 23:29:00h
18          weka147   default     10.0.3.147  UP      COMPUTE     DPDK / RDMA  14   6.45 GB  16d 23:29:00h
19          weka147   default     10.0.3.147  UP      DRIVES      DPDK / RDMA  18   1.49 GB  16d 23:29:00h
20          weka147   default     10.0.1.147  UP      DRIVES      DPDK / RDMA  8    1.49 GB  16d 23:29:00h
21          weka147   default     10.0.3.147  UP      DRIVES      DPDK / RDMA  19   1.49 GB  16d 23:29:07h
22          weka147   default     10.0.1.147  UP      DRIVES      DPDK / RDMA  9    1.49 GB  16d 23:29:00h
23          weka147   default     10.0.3.147  UP      DRIVES      DPDK / RDMA  20   1.49 GB  16d 23:29:07h
. . .

GPUDirect Storage is automatically enabled and detected by the system. To enable or disable RDMA networking for the cluster or a specific client, contact the Customer Success Team.
