
Mount filesystems

Discover the two modes for mounting a filesystem on a cluster server: persistent mount mode (stateful) and stateless mount mode. You can also use fstab or autofs for mounting.

Overview

There are two modes available for mounting a filesystem in a cluster server:

  • Persistent mount mode (stateful): This mode involves configuring a client to join the cluster before running the mount command.

  • Stateless mount mode: This mode simplifies and improves client management by eliminating the need for the Adding Clients process.

If you need to mount filesystems from multiple clusters on a single client, refer to the relevant topic for detailed instructions.

In addition, you can mount a filesystem using fstab or autofs.

Related topics

Mount a filesystem using the persistent mount mode

Mount a filesystem using the stateless mount mode

Mount a filesystem using fstab

Mount a filesystem using autofs

Mount filesystems from Single Client to Multiple Clusters (SCMC)


Mount a filesystem using the persistent mount mode

To mount a WEKA filesystem persistently, follow these steps:

  1. Install the WEKA client: Ensure the WEKA client is installed, configured, and connected to your WEKA cluster. See Add clients to an on-premises WEKA cluster.

  2. Identify the filesystem: Determine the name of the filesystem you want to mount. For this example, we use a filesystem named demo.

  3. Create a mount point: SSH into one of your cluster servers and create a directory to serve as the mount point for the filesystem:

    mkdir -p /mnt/weka/demo
  4. Mount the filesystem: As the root user, run the following command to mount the filesystem:

    mount -t wekafs demo /mnt/weka/demo

General command structure: The general syntax for mounting a WEKA filesystem is:

mount -t wekafs [-o option[,option]...] <fs-name> <mount-point>

Replace <fs-name> with the name of your filesystem and <mount-point> with the directory you created for mounting.

Read and write cache modes: When mounting a filesystem, you can choose between two cache modes: read cache and write cache. Each mode offers distinct advantages depending on your use case. For detailed descriptions of these modes, see Read cache mount mode and Write cache mount mode.


Mount a filesystem using the stateless mount mode

The stateless mount mode simplifies client management by deferring the joining of the cluster until the mount operation is performed. This approach is particularly beneficial in environments like AWS, where clients frequently join and leave the cluster.

Key benefits

  • Simplified client management: Eliminates the need for tedious client management procedures.

  • Unified security: Consolidates all security aspects within the mount command, removing the need to manage separate credentials for cluster join and mount.

Prerequisites

  • WEKA Agent: Ensure the WEKA agent is installed on your client to utilize the stateless mount mode. See Add clients to an on-premises WEKA cluster.

Mount a filesystem

Once the WEKA agent is installed, you can create and configure mounts using the mount command. To mount a filesystem:

  • Create and configure mounts: Use the mount command to create and configure the mounts. See Mount command options.

  • Unmounting: Remove existing mounts from the cluster using the umount command.

Authentication

To restrict mounting to only WEKA authenticated users, set the --auth-required flag to yes for the filesystem. For more information, refer to Mount authentication for organization filesystems.
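For example, a command along these lines enables the flag for a filesystem named demo (a sketch; substitute your filesystem name):

weka fs update demo --auth-required yes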

Set a stateless client with restricted operations on an isolated port

To restrict a stateless client's operations to only the essential APIs for mounting and unmounting, connect to WEKA clusters through TCP base port + 3 (for example, 14003). This configuration enables operational segregation between client and backend control plane requests.

Mount with restricted options

When mounting with the restricted option, the logged-in user's privileges are set to regular user privileges, regardless of the user's role.

Install the WEKA agent

To install a WEKA agent on a client, run one of the following commands as root on the client:

  • For a non-restricted client:

curl -k https://hostname:14000/dist/v1/install | sh
  • For a restricted client:

curl -k https://hostname:14003/dist/v1/install | sh

The -k flag instructs the curl command to bypass SSL certificate verification.

After running the appropriate command, the agent is installed on the client.

Run the mount command

Command: mount -t wekafs

Command syntax

Use one of the following command lines to invoke the mount command. The delimiter between the server and filesystem can be either :/ or /:

mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]/<fs> <mount-point>

mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]:/<fs> <mount-point>

Example: Mount for a restricted stateless client on an isolated port

mount -t wekafs -o restricted -o <options> <backend0>[,<backend1>,...,<backendN>]/<fs> <mount-point>

This setup ensures that the stateless client operates with restricted privileges, maintaining a secure and controlled environment for mounting and unmounting operations on an isolated port.

Parameters

Name
Value

options

See Mount command options below.

backend

IP/hostname of a backend container. Mandatory.

fs

Filesystem name. Mandatory.

mount-point

Path to mount on the local server. Mandatory.


Mount command options

Each mount option can be passed by an individual -o flag to mount.
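For example, the following command combines several of the options described below into a single read-only, read-cache mount (the filesystem name demo and the mount point are placeholders):

mount -t wekafs -o readcache -o ro -o noatime demo /mnt/weka/demo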

For all client types

Option
Description
Default
Remount Supported

readcache

Set the mount mode to read from the cache. This action automatically turns off the writecache.

Note: The SMB share mount mode is always readcache; set this option for filesystems that serve SMB shares.

No

Yes

writecache

Set the mount mode to write to the cache.

Yes

Yes

forcedirect

Set the mount mode to directly read from and write to storage, avoiding the cache. This action automatically turns off both the writecache and readcache.

No

Yes

dentry_max_age_positive

The time in milliseconds after which the system refreshes a cached metadata entry. This refresh informs the WEKA client about metadata changes performed by other clients.

1000

Yes

dentry_max_age_negative

Each time a file or directory lookup fails, the local entry cache creates an entry specifying that the file or directory does not exist. This entry is refreshed after the specified time (number in milliseconds), allowing the WEKA client to use files or directories created by other clients.

0

Yes

ro

Mount filesystem as read-only.

No

Yes

rw

Mount filesystem as read-write.

Yes

Yes

inode_bits

The inode size in bits may be required for 32-bit applications. Possible values: 32, 64, or auto

Auto

No

verbose

Write debug logs to the console.

No

Yes

quiet

Don't show any logs to console.

No

Yes

acl

Can be defined per mount.

Setting POSIX ACLs can change the effective group permissions (through the mask permissions). When ACLs are defined but the filesystem is mounted without the acl option, the effective group permissions are granted.

No

No

obs_direct

See Direct object store mount option.

Note: Enabling this option can impact performance. Use it carefully. If you are unsure, contact the Customer Success Team. Do not use this option for SMB shares.

No

Yes

noatime

Do not update inode access times.

No

Yes

strictatime

Always update inode access times.

No

Yes

relatime

Update inode access times only on modification or change, or if the inode has been accessed and the relatime_threshold has passed.

Yes

Yes

relatime_threshold

The time (in seconds) to wait after an inode was last accessed (but not modified) before updating its access time again.

A value of 0 means the access time is never updated on access alone.

This option is relevant only when relatime is on.

0 (infinite)

Yes

nosuid

Do not take suid/sgid bits into effect.

No

Yes

nodev

Do not interpret character or block special devices.

No

Yes

noexec

Do not allow direct execution of any binaries.

No

Yes

file_create_mask

File creation mask. A numeric (octal) notation of POSIX permissions. Newly created file permissions are masked with the creation mask. For example, if a user creates a file with permissions=777 but the file_create_mask is 770, the file is created with 770 permissions.

First, the umask is taken into account, followed by the file_create_mask and then the force_file_mode.

0777

Yes

directory_create_mask

Directory creation mask. A numeric (octal) notation of POSIX permissions. Newly created directory permissions are masked with the creation mask. For example, if a user creates a directory with permissions=777 but the directory_create_mask is 770, the directory is created with 770 permissions.

First, the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode.

0777

Yes

force_file_mode

Force file mode. A numeric (octal) notation of POSIX permissions. Newly created file permissions are logically OR'ed with the mode. For example, if a user creates a file with permissions 770 but the force_file_mode is 775, the resulting file is created with mode 775.

First, the umask is taken into account, followed by the file_create_mask and then the force_file_mode.

0

Yes

force_directory_mode

Force directory mode. A numeric (octal) notation of POSIX permissions. Newly created directory permissions are logically OR'ed with the mode. For example, if a user creates a directory with permissions 770 but the force_directory_mode is 775, the resulting directory is created with mode 775.

First, the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode.

0

Yes

sync_on_close

This option ensures that all data for a file is written to the server when the file is closed. This means that changes made to the file by the client are immediately written to the server's disk upon close, which can provide greater data consistency and reliability. It simulates the open-to-close semantics of NFS when working with writecache mount mode and directory quotas. Enabling this option is essential when applications expect returned write errors at syscall close if the quota is exceeded.

No

Yes

nosync_on_close

This option disables the sync_on_close behavior of file writes. When nosync_on_close is enabled, the client does not wait for the server to confirm that all file data has been written to disk before closing the file. This means that any changes made to the file by the client may not be immediately written to the server's disk when the file is closed. Instead, the changes are buffered in memory and written to disk asynchronously later.

No

Yes

Remount of general options

You can remount using the mount options marked as Remount Supported in the above table (mount -o remount).

When a mount option has been explicitly changed, you must set it again in the remount operation to ensure it retains its value. For example, if you mount with ro, a remount without it changes the mount option to the default rw. If you mount with rw, it is not required to re-specify the mount option because this is the default.
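For example, assuming a filesystem is mounted at /mnt/weka/demo, the first command below remounts it as read-only, and the second remounts it back to the default read-write (because ro is not re-specified):

mount -o remount,ro /mnt/weka/demo

mount -o remount /mnt/weka/demo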

Additional mount options using the stateless clients feature

Option
Description
Default
Remount Supported

memory_mb=<memory_mb>

The memory size in MiB the client can use for hugepages.

Memory allocation for a client is predefined. To change the memory allocation, contact the Customer Success Team.

1400

Yes

num_cores=<frontend-cores>

Specifies the number of processing cores allocated to handle client network operations.

Valid values:

  • 1 to N (where N is the maximum available cores)

  • 0 (only valid with UDP networking mode)

Notes:

  • Cannot be used with core parameter

  • When using NICs with Virtual Functions, num_cores must match the number of configured network devices (net=)

  • Higher core counts may improve performance for multi-connection workloads

Example: num_cores=4 # Allocates 4 cores for client processing

1

Yes

core=<core-id>

Specifies which CPU cores to assign to the WEKA client.

Multiple cores can be specified as a comma-separated list.

Core 0 is reserved for system use and cannot be specified.

Examples:

-o core=1   # Single core

-o core=1 -o core=3 -o core=5    # Multiple cores

Restrictions:

  • Core IDs must be unique and available on system

  • Cannot be used with num_cores parameter

  • Core 0 not allowed

Yes

net=<netdev>[/<ip>/<bits>[/<gateway>]]

Specifies network devices for WEKA client connections. Required for on-premises installations.

Format:

  • Single device: -o net=eth1

  • Multiple devices: -o net=eth1 -o net=eth2 -o net=eth3

Important:

  • For NICs with Virtual Functions (VFs), the number of network devices must equal num_cores

  • Supports both physical NICs and virtual functions

  • Must specify at least one network device

See Advanced network configuration for stateless clients.

Yes

remove_after_secs=<secs>

The time in seconds without connectivity, after which the client is removed from the cluster. Minimum value: 60 seconds. The default of 3600 seconds equals 1 hour.

3600

Yes

traces_capacity_mb=<size-in-mb>

Traces capacity limit in MB.

Minimum value: 512 MB.

No

reserve_1g_hugepages=<true or false>

Controls the page allocation algorithm to reserve hugepages. Possible values: true (reserves 1 GB hugepages), false (reserves 2 MB hugepages).

true

Yes

readahead_kb=<readahead>

The readahead size in KB per mount. A higher readahead is better for sequential reads of large files.

32768

Yes

auth_token_path

The path to the mount authentication token (per mount).

~/.weka/auth-token.json

No

dedicated_mode

Determines whether DPDK networking dedicates a core (full) or not (none). none can only be set when the NIC driver supports it. See DPDK without the core dedication. This option is relevant when using DPDK networking (net=udp is not set). Possible values: full or none

full

No

qos_preferred_throughput_mbps

Specifies the preferred request rate for Quality of Service (QoS), in megabytes per second. This is a soft target used to guide bandwidth allocation. The system aims to maintain this rate under normal conditions but allows the frontend to exceed it, up to the maximum, when additional resources are available. The cluster admin can set the default value. See Set mount option default values.

0 (unlimited)

Yes

qos_max_throughput_mbps

Specifies the maximum request rate for Quality of Service (QoS), in megabytes per second. This is an average-based limit applied at the frontend. The system allows short bursts above this value but aims to maintain the specified limit over time. The cluster admin can set the default value. See Set mount option default values.

0 (unlimited)

Yes

qos_max_ops

Maximum number of IO operations a client can perform per second. Set a limit on a client or clients to prevent them from starving the other clients. (Do not set this option when mounting from a backend.)

0 (unlimited)

Yes

connect_timeout_secs

The timeout, in seconds, for establishing a connection to a single server.


10

Yes

response_timeout_secs

The timeout, in seconds, waiting for the response from a single server.

60

Yes

join_timeout_secs

The timeout, in seconds, for the client container to join the WEKA cluster.

360

Yes

dpdk_base_memory_mb

The base memory in MB to allocate for DPDK. Set this option when mounting to a WEKA cluster on GCP. Example: -o dpdk_base_memory_mb=16

0

Yes

weka_version

The WEKA client version to run.

The cluster version

No

restricted

Restricts a stateless client’s operations to only the essential APIs for mounting and unmounting.

No

The additional mount options above are only effective on the first mount command for each client, unless stated otherwise.

By default, the command selects the optimal core allocation for WEKA. If necessary, multiple core parameters can be used to allocate specific cores to the WEKA client. For example, mount -t wekafs -o core=2 -o core=4 -o net=ib0 backend-server-0/my_fs /mnt/weka

Example: On-Premise Installations

mount -t wekafs -o num_cores=1 -o net=ib0 backend-server-0/my_fs /mnt/weka

Running this command on a server installed with the WEKA agent downloads the appropriate WEKA version from backend-server-0 and creates a WEKA container that allocates a single core and a named network interface (ib0). It then joins the cluster that backend-server-0 is part of and mounts the filesystem my_fs on /mnt/weka.

mount -t wekafs -o num_cores=0 -o net=udp backend-server-0/my_fs /mnt/weka

Running this command uses UDP mode (usually selected when the use of DPDK is not available).

Example: AWS Installations

mount -t wekafs -o num_cores=2 backend1,backend2,backend3/my_fs /mnt/weka

Running this command on an AWS EC2 instance allocates two cores (multiple frontends) and attaches and configures two ENIs on the new client. The client attempts to join the cluster through all three backends specified in the command line.

When running in AWS, the instance IAM role must provide permissions to several AWS APIs (see the IAM role created in template section).

For stateless clients, the first mount command serves a dual purpose:

  1. It installs the WEKA client software.

  2. It joins the WEKA cluster.

Subsequent mount commands can be simplified, requiring only the persistent or per-mount parameters as defined in the Mount command options. The full cluster configuration is not needed for these additional mounts.
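For example, once the first mount has joined the client to the cluster, an additional filesystem can be mounted without repeating the core and network parameters (my_fs2 and the mount point are placeholders):

mount -t wekafs backend-server-0/my_fs2 /mnt/weka2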

WEKA filesystems can be accessed directly through the mount point. You can navigate to the filesystem using standard directory commands, such as cd /mnt/weka/.

When the final WEKA filesystem is unmounted using the umount command, two key actions occur:

  • The client is automatically disconnected from the cluster.

  • The WEKA client software is uninstalled by the agent.

As a result, initiating a new mount operation requires re-specifying the complete cluster configuration, including cluster details, cores, and networking parameters.

Remount options for stateless clients

Mount options explicitly marked as Remount Supported can be modified using the mount -o remount command. During a remount operation:

  • Unspecified mount options retain their current configuration.

  • To reset a specific option to its default value, use the default modifier.

Example of resetting an option to its default:

  • memory_mb=default restores the default memory configuration.

This approach allows for flexible, granular adjustments to mount parameters without requiring a complete filesystem unmount and remount.
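For example, the following remount raises the client hugepages memory and resets the readahead to its default in a single operation (the values and mount point are illustrative):

mount -o remount,memory_mb=2000,readahead_kb=default /mnt/weka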

Set mount option default values

Default throughput settings

  • By default, qos_max_throughput_mbps and qos_preferred_throughput_mbps are unset, meaning no throughput limit is enforced.

Cluster administrator capabilities

  • Set custom default values aligned with organizational requirements.

  • Reset to initial unlimited configuration.

  • View current default settings.

Key characteristics

  • QoS settings apply to the frontend process, not individual mounts. All mounts on the same frontend share the same QoS limits.

  • If a client connects to multiple WEKA clusters, each frontend enforces its QoS settings independently.

  • Default value changes only affect new mounts. Existing mounts retain the QoS values they were created with.

Available commands

  • Set defaults: weka cluster mount-defaults set

  • Reset to initial values: weka cluster mount-defaults reset

  • Display current defaults: weka cluster mount-defaults show

Command syntax

weka cluster mount-defaults set [--qos-max-throughput qos-max-throughput] [--qos-preferred-throughput qos-preferred-throughput]

Parameters

Option
Description

qos_max_throughput

Specifies the default maximum request rate for Quality of Service (QoS), in megabytes per second. This is an average-based limit applied at the frontend. The system allows short bursts above this value but aims to maintain the specified limit over time.

qos_preferred_throughput

Specifies the default preferred request rate for Quality of Service (QoS), in megabytes per second. This is a soft target used to guide bandwidth allocation. The system aims to maintain this rate under normal conditions but allows the frontend to exceed it, up to the maximum, when additional resources are available.
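For example, a cluster administrator can set illustrative default limits and then verify them:

weka cluster mount-defaults set --qos-max-throughput 2000 --qos-preferred-throughput 1000

weka cluster mount-defaults show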

Monitor active mounts per container

Tracking the number of active mounts per container is important for troubleshooting, validating mount configurations, and identifying potential issues in the WEKA cluster. It provides visibility into mount activity, helping users and automation tools detect anomalies and ensure expected behavior.

To view the active mount count for a specific container, read the following /proc interface:

/proc/wekafs/<container-name>/interface
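For example, for a client container named client (an assumed name; substitute your container name):

cat /proc/wekafs/client/interface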

Advanced network configuration for stateless clients

Stateless clients allow for customizable network configurations to enhance performance and connectivity. The following parameters can be adjusted:

  • Virtual Functions (VFs)

  • IP addresses

  • Gateway configuration (required if the client is on a different subnet)

  • Physical network devices (for improved performance and high availability)

  • UDP mode

To configure networking, use the -o net=<netdev> mount option with the appropriate modifiers.

Identify <netdev>

<netdev> can be specified using:

  • Network interface name

  • MAC address

  • PCI address of the physical network device

  • Bonded device for redundancy and load balancing

Networking technology compatibility

When using WEKA mounts (wekafs), ensure that clients and backends use the same network type. Supported options include InfiniBand (IB) or Ethernet.

Key considerations

  • The -o net=<netdev> option provides detailed control over network interfaces.

  • Selecting the appropriate configuration helps optimize performance and connectivity.

  • Consistent networking technology is essential for system reliability.

Configure IP, subnet, gateway, and Virtual Functions (VFs)

For improved performance, multiple frontend processes may be required. When using a Network Interface Card (NIC) other than Mellanox or Intel E810, or when deploying a DPDK client on a virtual machine (VM), Single Root I/O Virtualization (SR-IOV) must be used to expose a Virtual Function (VF) of the physical device to the client. Once exposed, the VF can be configured using the mount command.

Assign VF IP addresses and routing

To assign an IP address to a VF or to enable routing when the client is in a different subnet, use the following format:

net=<netdev>/[ip]/[bits]/[gateway]
  • ip, bits, and gateway are optional parameters.

  • If these parameters are not provided, the WEKA system assigns values based on the environment:

    • Cloud environment: The system automatically deduces the IP address, subnet mask, and gateway.

    • On-premises environment: The system assigns values based on the cluster’s default network configuration.

      • If the default network is not set, the WEKA cluster may fail to allocate an IP address for the client.

Important: Ensure that the WEKA cluster default data networking is configured before executing the mount command. For configuration details, see Configure default data networking (optional).

Example: Configuring VFs on a single physical network device

The following command configures VFs for a specified network device and assigns each VF to a frontend process.

  • The first frontend process is assigned 192.168.1.100.

  • The second frontend process is assigned 192.168.1.101.

  • Both IPs are configured with a 24-bit subnet mask and a default gateway of 192.168.1.254.

mount -t wekafs -o num_cores=2 -o net=intel0/192.168.1.100+192.168.1.101/24/192.168.1.254 backend1/my_fs /mnt/weka

Multiple physical network devices for performance and high availability

Utilizing multiple physical network interface cards (NICs) on a WEKA client can unlock significant gains in data throughput and enhance system resilience. By strategically distributing network traffic across several interfaces, you can overcome single-NIC bottlenecks for demanding applications and ensure continuous data access even if one network path fails.

This section delves into the various methods for configuring and managing multiple NICs with WEKA. It covers how to:

  • Aggregate NICs for increased overall performance.

  • Set up redundant configurations to achieve high availability.

  • Implement advanced NUMA-aware setups for optimal efficiency on multi-socket servers.

  • Use specific mount options, including detailed slot notation, to precisely control how client processes use the available network interfaces.

The following subsections provide detailed explanations and practical examples for each of these configurations, enabling you to tailor your WEKA client's network setup to your specific performance and availability requirements.

Multiple physical network devices for better performance

Demanding workloads on WEKA can readily saturate the bandwidth of a single network interface. For higher throughput, you can leverage multiple network interface cards (NICs). By using the -o net=<interface> mount option for each desired NIC, you instruct the WEKA client driver to utilize these specific interfaces, potentially distributing the load and increasing overall bandwidth.

For example, the following command allocates two cores and two physical network devices for increased throughput:

mount -t wekafs \
-o num_cores=2 \
-o net=mlnx0 -o net=mlnx1 \
backend1/my_fs /mnt/weka
Multiple physical network devices for high availability configuration

Multiple NICs can also be configured to achieve redundancy and higher throughput for a complete, highly available solution. For that, use more than one physical device as previously described, and also specify the client management IPs using the -o mgmt_ip=<ip1>+<ip2> command-line option. See Network High Availability.

For example, the following command uses two network devices (mlnx0 and mlnx1) for high availability and allocates both devices to four Frontend processes on the client (because num_cores=4). The ha modifier is used here, which means each device is used by all processes. Note that in this example, 10.0.0.1 is the IP address of mlnx0 and 10.0.0.2 is the IP address of mlnx1.

mount -t wekafs \
-o num_cores=4 \
-o net:ha=mlnx0,net:ha=mlnx1 \
-o mgmt_ip=10.0.0.1+10.0.0.2 \
backend1/my_fs /mnt/weka
Advanced configuration: NUMA affinity with multiple physical network devices and sockets

For more complex systems, especially those with multiple CPU sockets and NUMA (Non-Uniform Memory Access) nodes, you can achieve higher performance and efficiency by pinning client processes and their network traffic to specific NUMA nodes. This involves assigning cores from a specific NUMA node to WekaFS client processes and then mapping these processes to a network interface card (NIC) physically located on the same NUMA node.

Consider a server with four NUMA nodes and four InfiniBand (IB) network interfaces, where each IB interface is assumed to reside on a different NUMA node. The NUMA configuration of the CPUs is as follows:

  • NUMA node0 CPU(s): 0-63

  • NUMA node1 CPU(s): 64-127

  • NUMA node2 CPU(s): 128-191

  • NUMA node3 CPU(s): 192-255

Let's assume you have four IB interfaces: ib0 (on NUMA node0), ib1 (on NUMA node1), ib2 (on NUMA node2), and ib3 (on NUMA node3). To configure WekaFS for optimal NUMA affinity, you would pin specific cores from each NUMA node to WekaFS frontend processes and then map these groups of processes to their corresponding NUMA-local IB interface. Management IPs must also be specified for high availability.

Example:

The following command configures 16 WekaFS client processes. Four processes are pinned to cores on each of the four NUMA nodes. Each group of four processes is then mapped to its local IB interface.

mount -t wekafs \
-o core=63 -o core=62 -o core=61 -o core=60 \
-o core=127 -o core=126 -o core=125 -o core=124 \
-o core=191 -o core=190 -o core=189 -o core=188 \
-o core=255 -o core=254 -o core=253 -o core=252 \
-o net:s1-4=ib0 \
-o net:s5-8=ib1 \
-o net:s9-12=ib2 \
-o net:s13-16=ib3 \
backend_servers/my_fs /mnt/weka

Explanation of the options in this example:

  • -o core=...: Sixteen specific CPU cores are assigned to WekaFS client processes:

    • Cores 63, 62, 61, 60 are on NUMA node0.

    • Cores 127, 126, 125, 124 are on NUMA node1.

    • Cores 191, 190, 189, 188 are on NUMA node2.

    • Cores 255, 254, 253, 252 are on NUMA node3. This creates 16 frontend processes, with each group of four processes affinitized to a specific NUMA node.

  • -o net:s1-4=ib0, net:s5-8=ib1, net:s9-12=ib2, net:s13-16=ib3: These options use the "multiple NIC slot notation" to map the WekaFS client processes (referred to by "slots") to the specified network interfaces (ib0, ib1, ib2, ib3). In this configuration with 16 frontend processes, the intended mapping is:

    • The first group of four processes (running on cores 63,62,61,60 on NUMA0) uses ib0 (assumed to be on NUMA0).

    • The second group of four processes (running on cores 127,126,125,124 on NUMA1) uses ib1 (assumed to be on NUMA1).

    • The third group of four processes (running on cores 191,190,189,188 on NUMA2) uses ib2 (assumed to be on NUMA2).

    • The fourth group of four processes (running on cores 255,254,253,252 on NUMA3) uses ib3 (assumed to be on NUMA3). This setup ensures that network traffic for processes on a given NUMA node utilizes the NIC local to that NUMA node, minimizing cross-NUMA data transfers and potentially improving performance.

  • backend_servers/my_fs: Replace with your WekaFS backend server address(es) and filesystem name.

  • /mnt/weka: Replace with your desired mount point.

This type of granular configuration is beneficial for maximizing throughput and minimizing latency in high-performance computing (HPC) and AI workloads that are sensitive to NUMA effects.

Advanced mounting options for multiple physical network devices

With multiple Frontend processes (as expressed by -o num_cores=X), it is possible to control which processes use which NICs. This is accomplished through special command-line modifiers called slots. In WEKA, a slot is synonymous with a process number. Typically, the first WEKA Frontend process occupies slot 1, the second occupies slot 2, and so on.

Examples of slot notation include s1, s2, s2+1, s1-2, slots1+3, slot1, and slots1-4, where - specifies a range of slots and + specifies a list. For example, s1-4 implies slots 1, 2, 3, and 4, while s1+4 specifies slots 1 and 4.

For example, in the following command, mlnx0 is bound to the second Frontend process while mlnx1 to the first one for improved performance.

mount -t wekafs \
-o num_cores=2 -o net:s2=mlnx0,net:s1=mlnx1 \
backend1/my_fs /mnt/weka

For example, in the following mount command, two cores (two Frontend processes) and two physical network devices (mlnx0, mlnx1) are allocated. By explicitly specifying the s2+1 and s1-2 modifiers for the network devices, both devices are used by both Frontend processes. The notation s2+1 stands for the first and second processes, while s1-2 stands for the range of 1 to 2; they are effectively the same.

mount -t wekafs \
-o num_cores=2 \
-o net:s2+1=mlnx0,net:s1-2=mlnx1 \
-o mgmt_ip=10.0.0.1+10.0.0.2 \
backend1/my_fs /mnt/weka

Network label configuration for stateless clients

In environments with stateless clients and high-availability backend networks, configuring network labels is essential for optimizing data path locality and minimizing inter-switch traffic.

Stateless clients, which typically lack persistent state or configuration storage, often connect to a single top-of-rack switch. In contrast, backend servers are usually dual-connected across multiple switches to ensure high availability. In topologies where these switches are interconnected via inter-switch links (ISLs), traffic between nodes may traverse these ISLs unnecessarily if peer selection is left to default behavior. This can introduce additional latency and consume limited east-west bandwidth.

To influence peer selection and ensure efficient traffic routing, stateless clients can use network labels. These labels bind the client’s traffic to a specific network segment or switch, helping ensure that peering remains within the local switch when possible.

Use case

This configuration is especially beneficial in:

  • Two-switch topologies with ISL connections.

  • Deployments where backend nodes are dual-attached and clients are single-attached.

  • Scenarios requiring controlled peering to reduce east-west traffic.

Configuration

To assign a network label, use the -o net mount option in the following format:

mount -t wekafs -o net=<device>/label@<label> <filesystem> <mountpoint>

Parameters:

  • <device>: The name of the client’s network interface (for example, eth0).

  • <label>: The label that corresponds to the client’s network attachment point.

  • <filesystem>: The WEKA filesystem to mount.

  • <mountpoint>: The local directory where the filesystem will be mounted.

Example:

mount -t wekafs -o net=eth0/label@datacenter-a project-fs1 /data

In this example:

  • The client uses the eth0 interface.

  • The label datacenter-a indicates the switch or network zone the interface is connected to.

  • The project-fs1 WEKA filesystem is mounted at /data.

By using a label that reflects the client’s physical or logical network location, the system can make more informed decisions about peering and data path selection, reducing cross-switch communication and improving overall performance.

Remount support

The network label configuration using the -o net option is also supported during remount operations. This allows administrators to change the network label dynamically without needing to fully unmount and remount the filesystem. For example:

mount -o remount,net=eth0/label@datacenter-b /data

In this scenario, the client updates the network label to datacenter-b for the existing mount at /data. This flexibility is useful when network topology or client attachment changes, allowing adjustments to peering behavior with minimal disruption.


UDP mode

If DPDK cannot be used, you can use the WEKA filesystem UDP networking mode through the kernel. Use net=udp in the mount command to set the UDP networking mode, for example:

mount -t wekafs -o net=udp backend-server-0/my_fs /mnt/weka

A client in UDP mode cannot be configured in high availability mode (ha). However, the client can still work with a highly available cluster.

Providing multiple IPs in <mgmt-ip> in UDP mode uses all of their network interfaces for more bandwidth (rather than only one NIC), which can be useful in RDMA environments.
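For example, a UDP-mode mount that provides two management IPs (the addresses are illustrative):

mount -t wekafs -o net=udp -o mgmt_ip=10.0.0.1+10.0.0.2 backend-server-0/my_fs /mnt/weka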

Related topic

UDP mode (in the WEKA Networking topic)


Mount a filesystem using fstab

Using the fstab (filesystem table) enables automatic remount after a reboot. This applies to stateless clients running on an OS that supports systemd, such as RHEL/CentOS 7.2 and up, Ubuntu 16.04 and up, and Amazon Linux 2 LTS.

Before you begin

  • If the mount point you want to set in the fstab is already mounted, unmount it before setting the fstab file.

Procedure

  1. Create a mount point: Run the following command to create a mount point:

mkdir -p /mnt/weka/my_fs  
  2. Edit the /etc/fstab file: Add the entry for the WEKA filesystem.

fstab structure

<backend servers/my_fs> <mount point> <filesystem type> <mount options> <systemd mount options> 0 0  

Example

backend-0,backend-1,backend-3/my_fs /mnt/weka/my_fs wekafs num_cores=1,net=eth1,x-systemd.after=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev 0 0  

fstab configuration parameters

Parameter
Description

Backend servers/my_fs

Comma-separated list of backend servers with the filesystem name.

Mount point

If mounting multiple clusters, specify a unique name.

For two client containers, set container_name=client1 and container_name=client2.

Filesystem type

Must be wekafs.

Systemd mount options

  • x-systemd.after=weka-agent.service

  • x-systemd.mount-timeout=infinity

  • _netdev

Adjust the mount-timeout to your preference, for example, 180 seconds.

Mount options

See Mount command options. For additional options, see Additional mount options using the stateless clients feature.

  3. Mount the filesystem: Test the fstab setting by running:

mount /mnt/weka/my_fs  
  4. Reboot the server: Reboot to apply the fstab settings. The filesystem is automatically mounted after the reboot.


Mount a filesystem using autofs

Autofs allows filesystems to be mounted dynamically when accessed and unmounted after a period of inactivity. This approach reduces system overhead and ensures efficient resource utilization. Follow these steps to configure autofs for mounting Weka filesystems.

Procedure

  1. Install autofs on the server: Install the autofs package based on your operating system:

    • For Red Hat or CentOS:

      yum install -y autofs
    • For Debian or Ubuntu:

      apt-get install -y autofs
  2. Configure autofs for WEKA filesystems: Set up the autofs configuration files according to the client type:

    • Stateless client: Run the following commands, replacing <backend-1>, <backend-2>, and <netdevice> with appropriate values:

      echo "/mnt/weka /etc/auto.wekafs -fstype=wekafs,num_cores=1,net=<netdevice>" > /etc/auto.master.d/wekafs.autofs
      echo "* <backend-1>,<backend-2>/&" > /etc/auto.wekafs
    • Persistent client: Run the following commands:

      echo "/mnt/weka /etc/auto.wekafs -fstype=wekafs" > /etc/auto.master.d/wekafs.autofs
      echo "* &" > /etc/auto.wekafs
  3. Restart the autofs service: Apply the changes by restarting the autofs service:

    service autofs restart
  4. Ensure autofs starts automatically on reboot: Verify that autofs is configured to start on reboot:

    systemctl is-enabled autofs
    • If the output is enabled, no further action is required.

    For Amazon Linux: Use chkconfig to confirm autofs is enabled for the current runlevel:

    chkconfig | grep autofs

    Ensure the output indicates on for the active runlevel. Example output:

    autofs 0:off 1:off 2:off 3:on 4:on 5:on 6:off
  5. Access the WEKA filesystem: Navigate to the mount point to access the WEKA filesystem. Replace <fs-name> with the desired filesystem name:

    cd /mnt/weka/<fs-name>
  • Adjust backend and network device configurations as needed for your deployment.

  • Review distribution-specific documentation for additional configuration options.
