Mount filesystems

To use a filesystem via the WEKA filesystem driver, it has to be mounted on one of the cluster servers. This page describes how this is performed.

Overview

There are two methods available for mounting a filesystem in one of the cluster servers:

  1. Using the traditional method (stateful): First configure a client and join it to a cluster (refer to Add clients in Bare Metal Installation or Add clients in AWS Installation), then run the mount command as described below.

  2. Using the Stateless Clients feature: See Mount a filesystem using the stateless client feature below. This feature simplifies and improves the management of clients in the cluster and eliminates the Add clients process.

If you need to mount a single client to multiple clusters, refer to the Mount filesystems from multiple clusters on a single client topic.

Mount a filesystem using the traditional method

Using the mount command as explained below first requires installing the WEKA client, configuring it, and joining it to a WEKA cluster.

To mount a filesystem on one of the cluster servers, let’s assume the cluster has a filesystem called demo. To add this filesystem to a server, SSH into one of the servers and run the mount command as the root user, as follows:

mkdir -p /mnt/weka/demo
mount -t wekafs demo /mnt/weka/demo

The general structure of the mount command for a WEKA filesystem is as follows:

mount -t wekafs [-o option[,option]...] <fs-name> <mount-point>

Two options for mounting a filesystem on a cluster client are read cache and write cache. Refer to the descriptions of these modes below to understand the differences between them.
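For instance, the demo filesystem from the example above could be mounted in read-cache mode (a sketch; readcache is described under Mount command options below, and run as root):

```shell
# Mount the example filesystem "demo" in read-cache mode.
mkdir -p /mnt/weka/demo
mount -t wekafs -o readcache demo /mnt/weka/demo

# Confirm the mount and its options.
mount | grep wekafs
```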

Mount a filesystem using the stateless client feature

The stateless client feature defers joining the cluster until the mount is performed, simplifying and improving the management of clients in the cluster. It removes tedious client management procedures, which is particularly beneficial in AWS installations where clients may join and leave at high frequency.

Furthermore, it unifies all security aspects in the mount command, eliminating the search for separate credentials at cluster join and mount.

To use the stateless client feature, a WEKA agent must be installed. Once complete, you can create and configure mounts with the mount command and remove existing mounts with the umount command.

To allow only WEKA authenticated users to mount a filesystem, set the filesystem --auth-required flag to yes. For more information, refer to the Mount authentication for organization filesystems topic.

Assuming the WEKA cluster is using the backend IP of 1.2.3.4, running the following command as root on a client will install the agent:

curl http://1.2.3.4:14000/dist/v1/install | sh

On completion, the agent is installed on the client.
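As a quick sanity check (these commands are illustrative, assuming the installer places the weka CLI wrapper on the PATH), you can confirm the agent is present before mounting:

```shell
# Verify the weka wrapper was installed and report the client version.
which weka
weka version
```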

Run the mount command

Command: mount -t wekafs

Use one of the following command lines to invoke the mount command. The delimiter between the server and filesystem can be either :/ or /:

mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]/<fs> <mount-point>

mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]:/<fs> <mount-point>

Parameters

  • options: See Mount command options below.

  • backend: IP address or hostname of a backend container. Mandatory.

  • fs: Filesystem name. Mandatory.

  • mount-point: Path to mount on the local server. Mandatory.

Mount command options

Each mount option can be passed by an individual -o flag to mount.

For all client types

For each option, the default value and whether the option can be changed on remount are listed.

readcache
Set the mount mode to read from the cache. This automatically turns off writecache.
Default: No. Remount supported: Yes.

writecache
Set the mount mode to write to the cache.
Default: Yes. Remount supported: Yes.

forcedirect
Set the mount mode to read from and write directly to storage, bypassing the cache. This automatically turns off both writecache and readcache.
Note: Enabling this option can impact performance. Use it carefully. If you are unsure, contact the Customer Success Team.
Default: No. Remount supported: Yes.

dentry_max_age_positive
The time in milliseconds after which the system refreshes a cached metadata entry. The refresh informs the server about metadata changes performed by other servers.
Default: 1000. Remount supported: Yes.

dentry_max_age_negative
Each time a file or directory lookup fails, the local entry cache records that the file or directory does not exist. That entry is refreshed after the specified time (in milliseconds), allowing the server to use files or directories created by other servers.
Default: 0. Remount supported: Yes.

ro
Mount the filesystem as read-only.
Default: No. Remount supported: Yes.

rw
Mount the filesystem as read-write.
Default: Yes. Remount supported: Yes.

inode_bits
The inode size in bits, which may be required for 32-bit applications. Possible values: 32, 64, or auto.
Default: auto. Remount supported: No.

verbose
Write debug logs to the console.
Default: No. Remount supported: Yes.

quiet
Do not write any logs to the console.
Default: No. Remount supported: Yes.

acl
Can be defined per mount. Setting POSIX ACLs can change the effective group permissions (via the mask permissions). When ACLs are defined but the mount does not use acl, the effective group permissions are granted.
Default: No. Remount supported: No.

obs_direct
Default: No. Remount supported: Yes.

noatime
Do not update inode access times.
Default: No. Remount supported: Yes.

strictatime
Always update inode access times.
Default: No. Remount supported: Yes.

relatime
Update inode access times only on modification or change, or if the inode has been accessed and relatime_threshold has passed.
Default: Yes. Remount supported: Yes.

relatime_threshold
The time (in seconds) to wait since an inode was accessed (not modified) before updating its access time. 0 means never update the access time on access only. This option is relevant only when relatime is on.
Default: 0 (infinite). Remount supported: Yes.

nosuid
Do not take suid/sgid bits into effect.
Default: No. Remount supported: Yes.

nodev
Do not interpret character or block special devices.
Default: No. Remount supported: Yes.

noexec
Do not allow direct execution of any binaries.
Default: No. Remount supported: Yes.

file_create_mask
File creation mask, in numeric (octal) notation of POSIX permissions. Newly created file permissions are masked with the creation mask. For example, if a user creates a file with permissions=777 but file_create_mask is 770, the file is created with 770 permissions. First, the umask is taken into account, followed by file_create_mask and then force_file_mode.
Default: 0777. Remount supported: Yes.

directory_create_mask
Directory creation mask, in numeric (octal) notation of POSIX permissions. Newly created directory permissions are masked with the creation mask. For example, if a user creates a directory with permissions=777 but directory_create_mask is 770, the directory is created with 770 permissions. First, the umask is taken into account, followed by directory_create_mask and then force_directory_mode.
Default: 0777. Remount supported: Yes.

force_file_mode
Force file mode, in numeric (octal) notation of POSIX permissions. Newly created file permissions are logically OR'ed with the mode. For example, if a user creates a file with permissions 770 but force_file_mode is 775, the resulting file is created with mode 775. First, the umask is taken into account, followed by file_create_mask and then force_file_mode.
Default: 0. Remount supported: Yes.

force_directory_mode
Force directory mode, in numeric (octal) notation of POSIX permissions. Newly created directory permissions are logically OR'ed with the mode. For example, if a user creates a directory with permissions 770 but force_directory_mode is 775, the resulting directory is created with mode 775. First, the umask is taken into account, followed by directory_create_mask and then force_directory_mode.
Default: 0. Remount supported: Yes.

sync_on_close
Ensures that all data for a file is written to the server when the file is closed. Changes made to the file by the client are immediately written to the server's disk upon close, which provides greater data consistency and reliability. It simulates the open-to-close semantics of NFS when working with the writecache mount mode and directory quotas. Enabling this option is essential when applications expect write errors to be returned at close if the quota is exceeded.
Default: No. Remount supported: Yes.

nosync_on_close
Disables the sync_on_close behavior of file writes. The client does not wait for the server to confirm that all file data has been written to disk before closing the file, so changes made by the client may not be immediately written to the server's disk when the file is closed. Instead, the changes are buffered in memory and written to disk asynchronously later.
Default: No. Remount supported: Yes.
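To make the composition order of the permission options concrete, this standalone sketch (not WEKA code; the umask and option values are assumptions for illustration) computes the resulting file mode the way the file_create_mask and force_file_mode descriptions define it: the umask is applied first, then the create mask, then the force mode.

```shell
# Illustrative only: compose the permission options in the documented order.
requested=0777          # mode requested by the application
umask_val=0022          # process umask (assumed for this example)
file_create_mask=0770   # mount option: bits allowed to remain set
force_file_mode=0005    # mount option: bits forced on

after_umask=$(( requested & ~umask_val ))
final=$(( (after_umask & file_create_mask) | force_file_mode ))
printf '%o\n' "$final"  # prints 755
```

Here 0777 masked by the 0022 umask gives 0755, masking with 0770 gives 0750, and OR-ing with 0005 yields 0755.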

Remount of general options

You can remount using the mount options marked as Remount Supported in the above table (mount -o remount).

When a mount option has been explicitly changed, you must set it again in the remount operation to ensure it retains its value. For example, if you mount with ro, a remount without it changes the mount option to the default rw. If you mount with rw, it is not required to re-specify the mount option because this is the default.
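For example (a sketch using the demo filesystem name from earlier; run as root), a read-only mount must re-specify ro on remount to stay read-only:

```shell
# Initial mount: read-only.
mount -t wekafs -o ro demo /mnt/weka/demo

# Remount, re-specifying ro; omitting it would revert to the rw default.
mount -t wekafs -o remount,ro demo /mnt/weka/demo
```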

Additional mount options using the stateless clients feature

For each option, the default value and whether the option can be changed on remount are listed.

memory_mb=<memory_mb>
The memory size in MiB the client can use for hugepages.
Default: 1400. Remount supported: Yes.

num_cores=<frontend-cores>
The number of frontend cores to allocate for the client. You can specify num_cores or core, but not both. If neither is specified, the client is configured with 1 core. If you specify 0, you must use net=udp.
Default: 1. Remount supported: No.

core=<core-id>
Specify explicit cores to be used by the WekaFS client. Multiple cores can be specified. Core 0 is not allowed.
Remount supported: No.

net=<netdev>[/<ip>/<bits>[/<gateway>]]
This option must be specified for on-premises installations and must not be specified for AWS installations. For more details, see Advanced network configuration by mount options.
Remount supported: No.

bandwidth_mbps=<bandwidth_mbps>
Maximum network bandwidth in Mb/s, which limits the traffic the container can send. This setting is helpful in deployments like AWS, where the bandwidth is limited but allowed to burst.
Default: auto-select. Remount supported: Yes.

remove_after_secs=<secs>
The time in seconds without connectivity after which the client is removed from the cluster. Minimum value: 60 seconds. 3600 seconds = 1 hour.
Default: 3600. Remount supported: Yes.

traces_capacity_mb=<size-in-mb>
Traces capacity limit in MB. Minimum value: 512 MB.
Remount supported: No.

reserve_1g_hugepages=<true or false>
Controls the page allocation algorithm to reserve hugepages. Possible values: true reserves 1 GB hugepages; false reserves 2 MB hugepages.
Default: true. Remount supported: Yes.

readahead_kb=<readahead>
The readahead size in KB per mount. A higher readahead is better for sequential reads of large files.
Default: 32768. Remount supported: Yes.

auth_token_path
The path to the mount authentication token (per mount).
Default: ~/.weka/auth-token.json. Remount supported: No.

dedicated_mode
Determines whether DPDK networking dedicates a core (full) or not (none). none can only be set when the NIC driver supports it; see DPDK without the core dedication. This option is relevant when using DPDK networking (net=udp is not set). Possible values: full or none.
Default: full. Remount supported: No.

qos_preferred_throughput_mbps
Preferred request rate for QoS in megabytes per second.
Default: 0 (unlimited). Remount supported: Yes.

qos_max_throughput_mbps
Maximum request rate for QoS in megabytes per second. This option allows bursting above the specified limit but aims to keep this limit on average. The cluster admin can set the default value; see Set mount option default values.
Default: 0 (unlimited). Remount supported: Yes.

qos_max_ops
Maximum number of IO operations a client can perform per second. Set a limit on one or more clients to prevent starvation of the other clients. (Do not set this option when mounting from a backend.)
Default: 0 (unlimited). Remount supported: Yes.

connect_timeout_secs
The timeout, in seconds, for establishing a connection to a single server.
Default: 10. Remount supported: Yes.

response_timeout_secs
The timeout, in seconds, for waiting for a response from a single server.
Default: 60. Remount supported: Yes.

join_timeout_secs
The timeout, in seconds, for the client container to join the WEKA cluster.
Default: 360. Remount supported: Yes.

dpdk_base_memory_mb
The base memory in MB to allocate for DPDK. Set this option when mounting to a WEKA cluster on GCP. Example: -o dpdk_base_memory_mb=16
Default: 0. Remount supported: Yes.

weka_version
The WEKA client version to run.
Default: the cluster version. Remount supported: No.

These parameters, unless stated otherwise, are only effective on the first mount command for each client.

By default, the command selects the optimal core allocation for WEKA. If necessary, multiple core parameters can be used to allocate specific cores to the WekaFS client. For example, mount -t wekafs -o core=2 -o core=4 -o net=ib0 backend-server-0/my_fs /mnt/weka

Example: On-Premise Installations

mount -t wekafs -o num_cores=1 -o net=ib0 backend-server-0/my_fs /mnt/weka

Running this command on a server with the WEKA agent installed downloads the appropriate WEKA version from backend-server-0 and creates a WEKA container that allocates a single core and a named network interface (ib0). It then joins the cluster that backend-server-0 is part of and mounts the filesystem my_fs on /mnt/weka.

mount -t wekafs -o num_cores=0 -o net=udp backend-server-0/my_fs /mnt/weka

Running this command uses UDP mode (usually selected when the use of DPDK is not available).

Example: AWS Installations

mount -t wekafs -o num_cores=2 backend1,backend2,backend3/my_fs /mnt/weka

Running this command on an AWS EC2 instance allocates two cores (multiple frontends) and attaches and configures two ENIs on the new client. The client attempts to rejoin the cluster through all three backends specified in the command line.

For stateless clients, the first mount command installs the WEKA client software and joins the cluster. Any subsequent mount command can either use the same syntax or just the traditional per-mount parameters as defined in Mounting Filesystems, since it is no longer necessary to join the cluster.

It is now possible to access WEKA filesystems via the mount point, for example, with the cd /mnt/weka/ command.

After executing a umount command that unmounts the last WEKA filesystem, the client is disconnected from the cluster and is uninstalled by the agent. Consequently, executing a new mount command requires specifying the cluster, cores, and networking parameters again.

When running in AWS, the instance IAM role must provide permissions to several AWS APIs (see the IAM role created in template section).

Memory allocation for a client is predefined. To change the memory allocation, contact the Customer Success Team.

Remount of stateless clients options

Mount options marked as Remount Supported in the above table can be remounted (using mount -o remount). When a mount option is not set in the remount operation, it will retain its current value. To set a mount option back to its default value, use the default modifier (e.g., memory_mb=default).
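For example (a sketch; the filesystem name my_fs and the 2048 MiB value are illustrative), an option can be changed on remount and later returned to its default with the default modifier:

```shell
# Change the hugepages memory on an existing mount.
mount -t wekafs -o remount,memory_mb=2048 my_fs /mnt/weka

# Later, return the option to its default value (1400).
mount -t wekafs -o remount,memory_mb=default my_fs /mnt/weka
```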

Set mount option default values

The defaults of the mount options qos_max_throughput_mbps and qos_preferred_throughput_mbps have no limit.

The cluster admin can set these default values to meet the organization's requirements, reset them to the initial default values (no limit), or show the existing values.

The mount option defaults are only relevant for new mounts and do not influence existing ones.

Commands:

weka cluster mount-defaults set

weka cluster mount-defaults reset

weka cluster mount-defaults show

To set the mount option default values, run the following command:

weka cluster mount-defaults set [--qos-max-throughput qos-max-throughput] [--qos-preferred-throughput qos-preferred-throughput]

Parameters

  • qos_max_throughput: Sets the default value for the qos_max_throughput_mbps option, the maximum request rate for QoS in megabytes per second.

  • qos_preferred_throughput: Sets the default value for the qos_preferred_throughput_mbps option, the preferred request rate for QoS in megabytes per second.
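For example (a sketch; the throughput values are illustrative), a cluster admin could cap the QoS defaults for new mounts, inspect them, and later revert to no limit:

```shell
# Set default QoS limits for new mounts (values in MB/s).
weka cluster mount-defaults set --qos-max-throughput 1000 --qos-preferred-throughput 500

# Inspect the current defaults, then reset them to the initial values (no limit).
weka cluster mount-defaults show
weka cluster mount-defaults reset
```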

Advanced network configuration by mount options

When using a stateless client, it is possible to alter and control many different networking options, such as:

  • Virtual functions

  • IPs

  • Gateway (in case the client is on a different subnet)

  • Physical network devices (for performance and HA)

  • UDP mode

Use the -o net=<netdev> mount option with the various modifiers described below.

<netdev> is either the name, MAC address, or PCI address of the physical network device (can be a bond device) to allocate for the client.

When using wekafs mounts, both clients and backends should use the same type of networking technology (either IB or Ethernet).

IP, subnet, gateway, and virtual functions

For higher performance, multiple frontends may be required. When using a NIC other than Mellanox or Intel E810, or when mounting a DPDK client on a VM, you must use SR-IOV to expose a virtual function (VF) of the physical device to the client. Once exposed, it can be configured via the mount command.

To assign the VF IP addresses, or when the client resides in a different subnet and routing is needed in the data network, use net=<netdev>/[ip]/[bits]/[gateway].

The ip, bits, and gateway values are optional. If they are not provided, the WEKA system performs one of the following depending on the environment:

  • Cloud environment: The WEKA system deduces the values of the ip, bits, gateway options.

  • On-premises environment: The WEKA system allocates values to the ip, bits, gateway options based on the cluster default network. Failure to set the default network may result in the WEKA cluster failing to allocate an IP address for the client.

    Ensure that the WEKA cluster default data networking is configured prior to running the mount command. For details, see Configure default data networking (optional).

Example: allocate two cores and a single physical network device (intel0)

The following command configures two VFs for the device and assigns each of them to one of the frontend processes. The first process receives the IP address 192.168.1.100, and the second uses 192.168.1.101. Both IPs have 24 network mask bits and a default gateway of 192.168.1.254.

mount -t wekafs -o num_cores=2 -o net=intel0/192.168.1.100+192.168.1.101/24/192.168.1.254 backend1/my_fs /mnt/weka

Multiple physical network devices for performance and HA

For performance or high availability, it is possible to use more than one physical network device.

Using multiple physical network devices for better performance

It's easy to saturate the bandwidth of a single network interface when using WekaFS. For higher throughput, it is possible to leverage multiple network interface cards (NICs). The -o net notation shown in the examples above can be used to pass the names of specific NICs to the WekaFS server driver.

For example, the following command will allocate two cores and two physical network devices for increased throughput:

mount -t wekafs -o num_cores=2 -o net=mlnx0 -o net=mlnx1 backend1/my_fs /mnt/weka

Using multiple physical network devices for HA configuration

Multiple NICs can also be configured to achieve redundancy (for details, see the WEKA networking HA section) and higher throughput for a complete, highly available solution. To do so, use more than one physical device as previously described, and also specify the client management IPs using the -o mgmt_ip=<ip>+<ip2> command-line option.

For example, the following command uses two network devices for HA networking and allocates both devices to four frontend processes on the client. The ha modifier is used here, which stands for using the device on all processes.

mount -t wekafs -o num_cores=4 -o net:ha=mlnx0,net:ha=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka

Advanced mounting options for multiple physical network devices

With multiple frontend processes (as expressed by -o num_cores), it is possible to control which processes use which NICs. This is accomplished with special command-line modifiers called slots. In WekaFS, slot is synonymous with a process number. Typically, the first WekaFS frontend process occupies slot 1, the second slot 2, and so on.

Examples of slot notation include s1, s2, s2+1, s1-2, slots1+3, slot1, and slots1-4, where - specifies a range of slots and + specifies a list. For example, s1-4 implies slots 1, 2, 3, and 4, while s1+4 specifies slots 1 and 4.

For example, in the following command, mlnx0 is bound to the second frontend process and mlnx1 to the first one for improved performance.

mount -t wekafs -o num_cores=2 -o net:s2=mlnx0,net:s1=mlnx1 backend1/my_fs /mnt/weka

For example, in the following HA mounting command, two cores (two frontend processes) and two physical network devices (mlnx0, mlnx1) are allocated. By explicitly specifying the s2+1 and s1-2 modifiers for the network devices, both devices are used by both frontend processes. The notation s2+1 stands for slots 2 and 1, while s1-2 stands for the range 1 to 2; they are effectively the same.

mount -t wekafs -o num_cores=2 -o net:s2+1=mlnx0,net:s1-2=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka

UDP mode

If DPDK cannot be used, you can use the WEKA filesystem UDP networking mode through the kernel (for details about UDP mode, see the WEKA networking section). Use net=udp in the mount command to set the UDP networking mode, for example:

mount -t wekafs -o net=udp backend-server-0/my_fs /mnt/weka

A client in UDP mode cannot be configured in HA mode. However, the client can still work with a highly available cluster.

Providing multiple IPs in <mgmt-ip> in UDP mode uses multiple network interfaces for more bandwidth, which can be useful in RDMA environments instead of using only one NIC.
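For instance, a sketch combining net=udp with the mgmt_ip modifier shown in the HA section above (the backend name and IP addresses are placeholders):

```shell
# UDP mode with two management IPs, so both NICs carry traffic.
mount -t wekafs -o net=udp -o mgmt_ip=10.0.0.1+10.0.0.2 backend-server-0/my_fs /mnt/weka
```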

Mount a filesystem using fstab

Using the fstab (filesystem table) enables automatic remount after reboot. It applies to stateless clients running on an OS that supports systemd, such as RHEL/CentOS 7.2 and up, Ubuntu 16.04 and up, and Amazon Linux 2 LTS.

Before you begin

If the mount point you want to set in the fstab is already mounted, unmount it before setting the fstab file.

Procedure

  1. Remove the /etc/init.d/weka-agent file.

  2. Create a file named weka-agent.service with the following content and save it in /etc/systemd/system.

weka-agent.service
[Unit]
Description=WEKA Agent Service
Wants=network.target network-online.target
After=network.target network-online.target rpcbind.service
Documentation=http://docs.weka.io
Before=remote-fs-pre.target remote-fs.target

[Service]
Type=simple
ExecStart=/usr/bin/weka --agent
Restart=always
WorkingDirectory=/
EnvironmentFile=/etc/environment
# Increase the default a bit in order to allow many simultaneous
# files to be monitored, we might need a lot of fds.
LimitNOFILE=65535

[Install]
RequiredBy=remote-fs-pre.target remote-fs.target
  3. Run the following command:

systemctl daemon-reload; systemctl enable --now weka-agent.service
  4. Create a mount point. Example: mkdir -p /mnt/weka/my_fs

  5. Edit the /etc/fstab file.

fstab structure

<backend servers/my_fs> <mount point> <filesystem type> <mount options> <systemd mount options>  0     0

fstab example

backend-0,backend-1,backend-3/my_fs /mnt/weka/my_fs  wekafs  num_cores=1,net=eth1,x-systemd.requires=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev   0       0

fstab structure descriptions

  • Backend servers/my_fs: A comma-separated list of backend servers with the filesystem name

  • Mount point: If the client mounts multiple clusters, specify a unique name for each client container. Example: For two client containers, set container_name=client1 and container_name=client2.

  • Filesystem type: wekafs

  • Mount options:

    • Systemd mount options: x-systemd.requires=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev You can set the mount-timeout based on your preferences, such as 180 seconds. This flexibility allows you to customize the timeout according to your specific system needs.

  6. Mount the filesystem to test the fstab setting, for example: mount /mnt/weka/my_fs

  7. To test the fstab implementation, reboot the server. WEKA creates the mounts for the next boot.

The filesystem is mounted automatically after server reboot.

Mount a filesystem using autofs

Procedure:

  1. Install autofs on the server using one of the following commands according to your deployment:

  • On Red Hat or CentOS:

yum install -y autofs
  • On Debian or Ubuntu:

apt-get install -y autofs

2. To create the autofs configuration files for Weka filesystems, do one of the following depending on the client type:

  • For a stateless client, run the following commands (specify the backend names as parameters):

echo "/mnt/weka   /etc/auto.wekafs -fstype=wekafs,num_cores=1,net=<netdevice>" > /etc/auto.master.d/wekafs.autofs
echo "*   <backend-1>,<backend-2>/&" > /etc/auto.wekafs
  • For a stateful client (traditional), run the following commands:

echo "/mnt/weka   /etc/auto.wekafs -fstype=wekafs" > /etc/auto.master.d/wekafs.autofs
echo "*   &" > /etc/auto.wekafs

3. Restart the autofs service:

service autofs restart

4. The configuration is distribution-dependent. Verify that the service is configured to start automatically after restarting the server by running: systemctl is-enabled autofs. If the output is enabled, the service is configured to start automatically.

Example: In Amazon Linux, you can verify that the autofs service is configured to start automatically by running the chkconfig command. If the output is on for the current runlevel (check with the runlevel command), autofs is enabled upon restart.

# chkconfig | grep autofs
autofs         0:off 1:off 2:off 3:on 4:on 5:on 6:off

Once you complete this procedure, it is possible to access Weka filesystems using the command cd /mnt/weka/<fs-name>.
