
Manually prepare the system for WEKA configuration

If the system is not prepared using the WMS, perform this procedure to set up the networking and complete the other preparation tasks before configuring the WEKA cluster.



Once the hardware and software prerequisites are met, prepare the backend servers and clients for the WEKA system configuration.

This preparation consists of the following steps:

  1. Install NIC drivers
  2. Enable SR-IOV (when required)
  3. Configure the networking
  4. Verify the network configuration
  5. Configure dual-network links with policy-based routing
  6. Configure the clock synchronization
  7. Disable the NUMA balancing
  8. Disable swap (if any)
  9. Validate the system preparation

Some of the examples contain version-specific information. The software is updated frequently, so the package versions available to you may differ from those presented here.

Related topics

Prerequisites and compatibility

1. Install NIC drivers

  • To install Mellanox OFED, see NVIDIA Documentation - Installing Mellanox OFED.

  • To install the Broadcom driver, see Broadcom adapter setup for WEKA system.

  • To install the Intel driver, see the Intel Latest Drivers & Software downloads page.
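
After installing a driver, you can confirm which driver and firmware each data-plane interface is using. The following is a minimal check with standard tools; the interface name enp24s0 is only an example:

ethtool -i enp24s0
ofed_info -s

ethtool -i reports the kernel driver, driver version, and firmware version of the NIC. ofed_info -s prints the installed OFED version on systems where Mellanox OFED is present.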

2. Enable SR-IOV

Single Root I/O Virtualization (SR-IOV) enablement is mandatory in the following cases:

  • The servers are equipped with Intel NICs.

  • When working with client VMs where it is required to expose the virtual functions (VFs) of a physical NIC to the virtual NICs.

Related topic

Enable the SR-IOV
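
Before enabling SR-IOV, you can check whether a NIC already exposes virtual functions. The following is a minimal check, assuming an SR-IOV-capable device and the example interface name enp24s0 (these sysfs files are absent on devices without SR-IOV support):

cat /sys/class/net/enp24s0/device/sriov_totalvfs
cat /sys/class/net/enp24s0/device/sriov_numvfs

sriov_totalvfs shows how many VFs the device supports, and sriov_numvfs shows how many are currently enabled (0 means no VFs are active).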

3. Configure the networking

Ethernet configuration

Use the following ifcfg script as a reference for configuring the Ethernet interface.

/etc/sysconfig/network-scripts/ifcfg-enp24s0
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="no"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
IPV6_AUTOCONF="no"
IPV6_DEFROUTE="no"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp24s0"
DEVICE="enp24s0"
ONBOOT="yes"
NM_CONTROLLED=no
IPADDR=192.168.1.1
NETMASK=255.255.0.0
MTU=9000

For the best performance, MTU 9000 (jumbo frame) is recommended. For jumbo frame configuration, refer to your switch vendor documentation.

Bring the interface up using the following command:

# ifup enp24s0
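
To confirm that the interface is up with the intended address and MTU (using the example interface above), run:

ip addr show enp24s0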

InfiniBand configuration

InfiniBand network configuration normally includes a Subnet Manager (SM), but that procedure is beyond the scope of this document. However, be aware of the specifics of your SM configuration, such as partitioning and MTU, because they can affect the configuration of the endpoint ports in Linux. For best performance, an MTU of 4092 is recommended.

Refer to the following ifcfg script when the IB network only has the default partition, i.e., "no pkey":

/etc/sysconfig/network-scripts/ifcfg-ib1
TYPE=Infiniband
ONBOOT=yes
BOOTPROTO=static
STARTMODE=auto
USERCTL=no
NM_CONTROLLED=no
DEVICE=ib1
IPADDR=192.168.1.1
NETMASK=255.255.0.0
MTU=4092

Bring the interface up using the following command:

# ifup ib1

Verify that the “default partition” connection is up, with all the attributes set:

# ip a s ib1
4: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
    link/infiniband 00:00:03:72:fe:80:00:00:00:00:00:00:24:8a:07:03:00:a8:09:48 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 192.168.1.1/16 brd 192.168.255.255 scope global noprefixroute ib1
       valid_lft forever preferred_lft forever

If the InfiniBand ports on your network are members of a partition other than the default (0x7FFF), a p-key must be configured on the interface. The p-key must associate the port as a full member of the partition; a full-member p-key is one in which the most significant bit (MSB) of the 16-bit value is set to 1.

Example: If the partition number is 0x2, the limited-member p-key equals the partition number itself, i.e., 0x2. The full-member p-key is the logical OR of 0x8000 and the p-key (0x2), and is therefore 0x8002.

Note: All InfiniBand ports communicating with the WEKA cluster must be full members.
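
The full-member value can also be calculated directly in the shell. A small example for the 0x2 partition used above:

printf '0x%04X\n' $(( 0x2 | 0x8000 ))    # prints 0x8002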

For each pkey-ed IPoIB interface, it's necessary to create two ifcfg scripts. To configure your own pkey-ed IPoIB interface, refer to the following examples, where a pkey of 0x8002 is used. You may need to manually create the child device.

/etc/sysconfig/network-scripts/ifcfg-ib1
TYPE=Infiniband
ONBOOT=yes
MTU=4092
BOOTPROTO=static
STARTMODE=auto
USERCTL=no
NM_CONTROLLED=no
DEVICE=ib1
/etc/sysconfig/network-scripts/ifcfg-ib1.8002
TYPE=Infiniband
BOOTPROTO=none
CONNECTED_MODE=yes
DEVICE=ib1.8002
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
MTU=4092
NAME=ib1.8002
NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=ib1
PKEY_ID=2
PKEY=yes
BROADCAST=192.168.255.255
NETMASK=255.255.0.0
IPADDR=192.168.1.1

Bring the interface up using the following command:

# ifup ib1.8002

Verify the connection is up with all the non-default partition attributes set:

# ip a s ib1.8002
5: ib1.8002@ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP qlen 256
    link/infiniband 00:00:11:03:fe:80:00:00:00:00:00:00:24:8a:07:03:00:a8:09:48 brd 00:ff:ff:ff:ff:12:40:1b:80:02:00:00:00:00:00:00:ff:ff:ff:ff
    inet 192.168.1.1/16 brd 192.168.255.255 scope global noprefixroute ib1.8002
       valid_lft forever preferred_lft forever

Define the NICs with ignore-carrier

ignore-carrier is a NetworkManager configuration option. When set, it keeps the network interface up even if the physical link is down. It’s useful when services need to bind to the interface address at boot.

The following is an example of configuring ignore-carrier on systems that use NetworkManager on Rocky Linux 8. The exact steps may vary depending on your operating system and its specific network configuration tools. Always refer to your system’s official documentation for accurate information.

  1. Open the /etc/NetworkManager/NetworkManager.conf file to edit it.

  2. Under the [main] section, add one of the following lines depending on the operating system:

    • For some versions of Rocky Linux, RHEL, and CentOS: ignore-carrier=*

    • For some other versions: ignore-carrier=<device-name1>,<device-name2>. Replace <device-name1>,<device-name2> with the actual device names you want to apply this setting to.

Example for Rocky Linux and RHEL 8.7:

/etc/NetworkManager/NetworkManager.conf
[main]
ignore-carrier=*

Example for some other versions:

[main]
ignore-carrier=ib0,ib1
  3. Restart the NetworkManager service for the changes to take effect.
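
For example, on systemd-based distributions the service is typically restarted with:

systemctl restart NetworkManager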

4. Verify the network configuration

Use a large-size ICMP ping to check the basic TCP/IP connectivity between the interfaces of the servers:

# ping -M do -s 8972 -c 3 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 8972(9000) bytes of data.
8980 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.063 ms
8980 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.087 ms
8980 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.075 ms

--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.063/0.075/0.087/0.009 ms

The -M do flag prohibits packet fragmentation, which verifies that the MTU is configured correctly between the two endpoints.

-s 8972 is the maximum ICMP payload size that can be transferred with MTU 9000: 9000 bytes minus 20 bytes of IP header and 8 bytes of ICMP header.
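
For InfiniBand interfaces configured with MTU 4092, the same check can be performed with a 4064-byte payload (4092 minus the 28 bytes of IP and ICMP overhead); the target address below is an example:

ping -M do -s 4064 -c 3 192.168.1.2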

5. Configure dual-network links with policy-based routing

The following steps provide guidance for configuring dual-network links with policy-based routing on Linux systems. Adjust IP addresses and interface names according to your environment.

General Settings in /etc/sysctl.conf

  1. Open the /etc/sysctl.conf file using a text editor.

  2. Add the following lines at the end of the file to set minimal configurations per InfiniBand (IB) or Ethernet (Eth) interface:

    # Minimal configuration, set per IB/Eth interface
    net.ipv4.conf.ib0.arp_announce = 2
    net.ipv4.conf.ib1.arp_announce = 2
    net.ipv4.conf.ib0.arp_filter = 1
    net.ipv4.conf.ib1.arp_filter = 1
    net.ipv4.conf.ib0.arp_ignore = 0
    net.ipv4.conf.ib1.arp_ignore = 0
    
    # As an alternative set for all interfaces by default
    net.ipv4.conf.all.arp_filter = 1
    net.ipv4.conf.default.arp_filter = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.conf.default.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 0
    net.ipv4.conf.default.arp_ignore = 0
  3. Save the file.

  4. Apply the new settings by running:

    sysctl -p /etc/sysctl.conf

RHEL/Rocky/CentOS routing configuration using the Network Scripts

Network scripts are deprecated in RHEL/Rocky 8. For RHEL/Rocky 9, use the Network Manager.

  1. Navigate to /etc/sysconfig/network-scripts/.

  2. Create the file /etc/sysconfig/network-scripts/route-mlnx0 with the following content:

    10.90.0.0/16 dev mlnx0 src 10.90.0.1 table weka1
    default via 10.90.2.1 dev mlnx0 table weka1
  3. Create the file /etc/sysconfig/network-scripts/route-mlnx1 with the following content:

    10.90.0.0/16 dev mlnx1 src 10.90.1.1 table weka2
    default via 10.90.2.1 dev mlnx1 table weka2
  4. Create the file /etc/sysconfig/network-scripts/rule-mlnx0 with the following content:

    table weka1 from 10.90.0.1

    Then create the file /etc/sysconfig/network-scripts/rule-mlnx1 with the following content:

    table weka2 from 10.90.1.1
  5. Open /etc/iproute2/rt_tables and add the following lines:

    100 weka1
    101 weka2
  6. Save the changes.
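
After the interfaces are restarted, you can verify that the rules and per-table routes are in place using standard iproute2 commands (table names as defined above):

ip rule show
ip route show table weka1
ip route show table weka2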

RHEL/Rocky 9 routing configuration using the Network Manager

  • For Ethernet (ETH): To set up routing for Ethernet connections, use the following commands:

nmcli connection modify eth1 ipv4.routes "10.10.10.0/24 src=10.10.10.1 table=100" ipv4.routing-rules "priority 101 from 10.10.10.1 table 100"
nmcli connection modify eth2 ipv4.routes "10.10.10.0/24 src=10.10.10.101 table=200" ipv4.routing-rules "priority 102 from 10.10.10.101 table 200"

The route's first IP address in the provided commands represents the network's subnet to which the NIC is connected. The last address in the routing rules corresponds to the IP address of the NIC being configured, where eth1 is set to 10.10.10.1.

  • For InfiniBand (IB): To configure routing for InfiniBand connections, use the following commands:

nmcli connection modify ib0 ipv4.route-metric 100
nmcli connection modify ib1 ipv4.route-metric 101

nmcli connection modify ib0 ipv4.routes "10.10.10.0/24 src=10.10.10.1 table=100" 
nmcli connection modify ib0 ipv4.routing-rules "priority 101 from 10.10.10.1 table 100"
nmcli connection modify ib1 ipv4.routes "10.10.10.0/24 src=10.10.10.101 table=200" 
nmcli connection modify ib1 ipv4.routing-rules "priority 102 from 10.10.10.101 table 200"

The route's first IP address in the above commands signifies the network's subnet associated with the respective NIC. The last address in the routing rules corresponds to the IP address of the NIC being configured, where ib0 is set to 10.10.10.1.
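
The modified settings take effect the next time the connections are activated. For example, reactivate them with the following commands (connection names as in the examples above; adjust to your environment):

nmcli connection up ib0
nmcli connection up ib1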

Ubuntu Netplan configuration

  1. Open the Netplan configuration file /etc/netplan/01-netcfg.yaml and adjust it:

network:
    version: 2
    renderer: networkd
    ethernets:
        enp2s0:
            dhcp4: true
            nameservers:
                    addresses: [8.8.8.8]
        ib1:
            addresses:
                    [10.222.0.10/24]
            routes:
                    - to: 10.222.0.0/24
                      via: 10.222.0.10
                      table: 100
            routing-policy:
                    - from: 10.222.0.10
                      table: 100
                      priority: 32764
            ignore-carrier: true
            
        ib2:
            addresses:
                    [10.222.0.20/24]
            routes:
                    - to: 10.222.0.0/24
                      via: 10.222.0.20
                      table: 101
            routing-policy:
                    - from: 10.222.0.20
                      table: 101
                      priority: 32765
            ignore-carrier: true
            
  2. After adjusting the Netplan configuration file, run the following commands:

ip route add 10.222.0.0/24 via 10.222.0.10 dev ib1 table 100
ip route add 10.222.0.0/24 via 10.222.0.20 dev ib2 table 101
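
If the Netplan configuration itself has not been applied yet, apply it so the file changes take effect (this may briefly disrupt connectivity on the affected interfaces):

netplan apply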

SLES/SUSE configuration

  1. Create /etc/sysconfig/network/ifrule-eth2 with:

ipv4 from 192.168.11.21 table 100
  2. Create /etc/sysconfig/network/ifrule-eth4 with:

ipv4 from 192.168.11.31 table 101
  3. Create /etc/sysconfig/network/scripts/ifup-route.eth2 with:

ip route add 192.168.11.0/24 dev eth2 src 192.168.11.21 table weka1
  4. Create /etc/sysconfig/network/scripts/ifup-route.eth4 with:

ip route add 192.168.11.0/24 dev eth4 src 192.168.11.31 table weka2
  5. Add the weka lines to /etc/iproute2/rt_tables:

100 weka1
101 weka2
  6. Restart the interfaces or reboot the machine:

ifdown eth2; ifdown eth4; ifup eth2; ifup eth4

Related topic

High Availability (HA)

6. Configure the clock synchronization

Synchronizing the clocks of the servers and the network is good practice and is vital for the stability of the WEKA system. Aligned timestamps in packets and logs also make issue resolution faster and easier.

Configure the clock synchronization software on the backends and clients according to the specific vendor instructions (see your OS documentation), before installing the WEKA software.
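
For example, on distributions that use chrony (one common option), you might enable the service and confirm that the clock is synchronized before proceeding:

systemctl enable --now chronyd
chronyc tracking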

7. Disable the NUMA balancing

The WEKA system manages NUMA placement autonomously and makes the optimal decisions itself. Turning off the Linux kernel's automatic NUMA balancing is therefore mandatory to prevent extra latencies in operations, and the setting must remain disabled across server reboots.

To persistently disable NUMA balancing, follow these steps:

  1. Open the file located at: /etc/sysctl.conf

  2. Append the following line: kernel.numa_balancing=0
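
To apply the setting without a reboot and confirm it took effect, you can run the following (a value of 0 means NUMA balancing is disabled):

sysctl -p /etc/sysctl.conf
cat /proc/sys/kernel/numa_balancing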

8. Disable swap (if any)

WEKA highly recommends that any servers used as backends have no swap configured. This is distribution-dependent but is often a case of commenting out any swap entries in /etc/fstab and rebooting.
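
For example, on many distributions you can turn swap off immediately and keep it off across reboots by commenting out the swap entries in /etc/fstab (back up the file first; the sed pattern below assumes the swap lines contain the word "swap"):

swapoff -a
cp /etc/fstab /etc/fstab.bak
sed -i '/\sswap\s/s/^/#/' /etc/fstab
swapon --show

swapon --show produces no output when no swap is active.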

9. Validate the system preparation

The wekachecker is a tool that validates the readiness of the servers in the cluster before installing the WEKA software.

The wekachecker performs the following validations:

  • Dataplane IP, jumbo frames, and routing

  • ssh connection to all servers

  • Timesync

  • OS release

  • Sufficient capacity in /opt/weka

  • Available RAM

  • Internet connection availability

  • NTP

  • DNS configuration

  • Firewall rules

  • WEKA required packages

  • OFED required packages

  • Recommended packages

  • HT/AMT is disabled

  • The kernel is supported

  • The CPU supports AES, and it is enabled

  • NUMA balancing is disabled

  • RAM state

  • XFS FS type installed

  • Mellanox OFED is installed

  • IOMMU mode for SSD drives is disabled

  • rpcbind utility is enabled

  • SquashFS is enabled

  • noexec mount option on /tmp

The wekachecker tool applies to all WEKA versions. From V4.0, the following validations are no longer relevant, although the tool still displays them:

  • OS has SELinux disabled or in permissive mode.

  • Network Manager is disabled.

Procedure

Download the wekachecker tarball from https://github.com/weka/tools/blob/master/install/wekachecker and extract it.

  1. From the install directory, run ./wekachecker <hostnames/IPs>, where <hostnames/IPs> is a space-separated list of the hostnames or IP addresses of all the cluster servers connected to the high-speed network.
     Example: ./wekachecker 10.1.1.11 10.1.1.12 10.1.1.4 10.1.1.5 10.1.1.6 10.1.1.7 10.1.1.8

  2. Review the output. If failures or warnings are reported, investigate and correct them as necessary, then repeat the validation until no important issues remain. The wekachecker writes any failures or warnings to the file test_results.txt.

Once the report has no failures or warnings that must be fixed, you can install the WEKA software.

wekachecker report example
Dataplane IP Jumbo Frames/Routing test                       [PASS]
Check ssh to all hosts                                       [PASS]
Verify timesync                                              [PASS]
Check if OS has SELinux disabled or in permissive mode       [PASS]
Check OS Release...                                          [PASS]
Check /opt/weka for sufficient capacity...                   [WARN]
Check available RAM...                                       [PASS]
Check if internet connection available...                    [PASS]
Check for NTP...                                             [PASS]
Check DNS configuration...                                   [PASS]
Check Firewall rules...                                      [PASS]
Check for WEKA Required Packages...                          [PASS]
Check for OFED Required Packages...                          [PASS]
Check for Recommended Packages...                            [WARN]
Check if HT/AMT is disabled                                  [WARN]
Check if kernel is supported...                              [PASS]
Check if CPU has AES enabled and supported                   [PASS]
Check if Network Manager is disabled                         [WARN]
Checking if Numa balancing is enabled                        [WARN]
Checking RAM state for errors                                [PASS]
Check for XFS FS type installed                              [PASS]
Check if Mellanox OFED is installed                          [PASS]
Check for IOMMU disabled                                     [PASS]
Check for rpcbind enabled                                    [PASS]
Check for squashfs enabled                                   [PASS]
Check for /tmp noexec mount                                  [PASS]

RESULTS: 21 Tests Passed, 0 Failed, 5 Warnings

What to do next?

If you can use the WEKA Configurator, go to:

Configure the WEKA cluster using the WEKA Configurator

Otherwise, go to:

Manually configure the WEKA cluster using the resources generator
