Manually configure the WEKA cluster using the resources generator
Detailed workflow for manually configuring the WEKA cluster using the resource generator in a multi-container backend architecture.
Perform this workflow using the resources generator only if you are not using the automated WMS, WSA, or WEKA Configurator.
The resources generator creates three resource files in the /tmp directory on each server: drives0.json, compute0.json, and frontend0.json. You then use these generated files to create the containers on the cluster servers.
Before you begin
Download the resources generator from the GitHub repository to your local server: https://github.com/weka/tools/blob/master/install/resources_generator.py.
Example:
wget https://raw.githubusercontent.com/weka/tools/master/install/resources_generator.py
Copy the resources generator from your local server to all servers in the cluster.
Example for a cluster with 8 servers:
for i in {0..7}; do scp resources_generator.py weka0-$i:/tmp/resources_generator.py; done
To enable execution, change the mode of the resources generator on all servers in the cluster.
Example for a cluster with 8 servers:
pdsh -R ssh -w "weka0-[0-7]" 'chmod +x /tmp/resources_generator.py'
Workflow
Remove the default container
Generate the resource files
Create a cluster
Configure the SSD drives
Create compute containers
Create frontend containers
Configure the number of data and parity drives
Configure the number of hot spares
Name the cluster
1. Remove the default container
Command: weka local stop default && weka local rm -f default
Stop and remove the default container that is created automatically on each server.
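For example, run it on all servers at once with pdsh, assuming the same hostnames as the earlier examples:
pdsh -R ssh -w "weka0-[0-7]" 'weka local stop default && weka local rm -f default'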
2. Generate the resource files
Command: resources_generator.py
To generate the resource files for the drive, compute, and frontend processes, run the following command on each backend server:
./resources_generator.py --net <net-devices> [options]
The resources generator allocates the number of cores, memory, and other resources according to the values specified in the parameters.
The best practice for resources allocation is as follows:
1 drive core per NVMe device (SSD).
2-3 compute cores per drive core.
1-2 frontend cores if deploying a protocol container. If there is a spare core, it is used for a frontend container.
Minimum of 1 core for the OS.
Example 1: according to the best practice
For a server with 24 cores and 6 SSDs, allocate 6 drive cores and 12 compute cores, and optionally dedicate 2 of the remaining cores to the frontend container. The OS uses the remaining 4 cores.
Run the following command line:
./resources_generator.py --net eth1 eth2 --drive-dedicated-cores 6 --compute-dedicated-cores 12 --frontend-dedicated-cores 2
Example 2: a server with a limited number of cores
For a server with 14 cores and 6 SSDs, allocate 6 drive cores and 6 compute cores, and optionally dedicate 1 of the remaining cores to the frontend container. The OS uses the remaining core.
Run the following command line:
./resources_generator.py --net eth1 eth2 --drive-dedicated-cores 6 --compute-dedicated-cores 6 --frontend-dedicated-cores 1
Parameters
compute-core-ids: Specify the CPUs to allocate for the compute processes. Format: space-separated numbers.
compute-dedicated-cores: Specify the number of cores to dedicate for the compute processes. Default: the maximum available cores.
compute-memory: Specify the total memory to allocate for the compute processes. Format: value and unit without a space. Examples: 1024B, 10GiB, 5TiB. Default: the maximum available memory.
core-ids: Specify the CPUs to allocate for the WEKA processes. Format: space-separated numbers.
drive-core-ids: Specify the CPUs to allocate for the drive processes. Format: space-separated numbers.
drive-dedicated-cores: Specify the number of cores to dedicate for the drive processes. Default: 1 core per detected drive.
drives: Specify the drives to use. This option overrides automatic detection. Format: space-separated strings. Default: all unmounted NVMe devices.
frontend-core-ids: Specify the CPUs to allocate for the frontend processes. Format: space-separated numbers.
frontend-dedicated-cores: Specify the number of cores to dedicate for the frontend processes. Default: 1.
max-cores-per-container: Override the default maximum number of cores per container for IO processes. If provided, the new value must be lower than the default. Default: 19.
minimal-memory: Set each container's hugepages memory to 1.4 GiB multiplied by the number of IO processes on the container.
net* (required): Specify the network devices to use. Format: space-separated strings.
no-rdma: Do not take RDMA support into account when computing memory requirements. Default: False.
num-cores: Override the auto-deduction of the number of cores. Default: all available cores.
path: Specify the path to write the resource files. Default: '.' (the current directory).
spare-cores: Specify the number of cores to leave for the OS and non-WEKA processes. Default: 1.
spare-memory: Specify the memory to reserve for non-WEKA requirements. Format: value and unit without a space. Examples: 10GiB, 1024B, 5TiB. Default: the larger of 8 GiB and 2% of the total RAM.
weka-hugepages-memory: Specify the memory to allocate for the compute, frontend, and drive processes. Format: value and unit without a space. Examples: 10GiB, 1024B, 5TiB. Default: the maximum available memory.
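For example, to pin processes to specific CPUs instead of letting the generator choose, combine the core-ids parameters. This is a sketch; the core IDs and output path are placeholders that must match your own CPU layout:
./resources_generator.py --net eth1 eth2 --drive-core-ids 1 2 3 4 5 6 --compute-core-ids 7 8 9 10 11 12 13 14 15 16 17 18 --frontend-core-ids 19 20 --path /tmp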
3. Create a cluster
You can create a new WEKA cluster using one of two methods: a two-step method or single-step auto-clusterization.
Create the cluster (two-step method)
This method involves two main stages. First, you create the initial drives0 container on each server. Second, you run a single command from one server to form the cluster using those initial containers.
Procedure
1. On each server that will be part of the cluster, run the weka local setup container command to create the initial drives0 container. Specify the path to the drives0.json resource file. Do not include the --join-ips or --clusterize options at this stage. The -n option sets the container name, in this example drives0; otherwise, the container is named after the resource filename.
2. From one of the servers, run the weka cluster create command. Provide the hostnames of the servers and their corresponding management IP addresses. You must provide at least five servers.
Example
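The following sketch assumes the 8-server cluster from the earlier examples, resource files generated in /tmp, and placeholder management IPs; adjust them to your environment.
# Step 1: on each server, create the initial drives0 container
weka local setup container --resources-path /tmp/drives0.json -n drives0
# Step 2: from one server, form the cluster from all the initial containers
weka cluster create weka0-0 weka0-1 weka0-2 weka0-3 weka0-4 weka0-5 weka0-6 weka0-7 --host-ips 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14,10.0.0.15,10.0.0.16,10.0.0.17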
Parameters
hostnames*: (Required) Hostnames or IP addresses, separated by spaces. If port 14000 is not the default for the drive containers, you can specify hostname:port or ip:port pairs.
host-ips: IP addresses of the management interfaces, separated by commas. Use a list of ip+ip address pairs for a high availability (HA) configuration. If the cluster connects to both InfiniBand and Ethernet, you can specify up to four management IPs (ip+ip+ip+ip) for redundancy. Default: the IP of the first network device of the container.
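For an HA configuration with two management interfaces per server, the ip+ip notation might look like the following sketch (placeholder hostnames and addresses):
weka cluster create weka0-0 weka0-1 weka0-2 weka0-3 weka0-4 --host-ips 10.0.0.10+192.168.0.10,10.0.0.11+192.168.0.11,10.0.0.12+192.168.0.12,10.0.0.13+192.168.0.13,10.0.0.14+192.168.0.14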
Create the cluster (single-step auto-clusterization)
This method uses a single command, run on all servers (for example, with a parallel shell utility). The containers start, discover their peers through the join IPs, and automatically form the cluster.
Before you begin
You must have a list of at least five management IP addresses from servers that will be part of the new cluster.
Procedure
On each server, run the weka local setup container command:
Include the --join-ips option and provide the management IPs of at least five servers that will be part of the new cluster.
Include the --clusterize option to trigger the auto-clusterization process.
For InfiniBand (IB) installations, the --join-ips parameter must specify the IP addresses of the IPoIB interfaces.
The -n option sets the container name, in this example drives0; otherwise, the container is named after the resource filename.
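For example, run the following on every server (the five management IPs are placeholders):
weka local setup container --resources-path /tmp/drives0.json -n drives0 --join-ips 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14 --clusterize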
High availability (HA) with auto-clusterization
The --join-ips argument does not support the + notation for defining HA.
To configure HA with auto-clusterization, use the --management-ips argument when creating the container. You can provide multiple specific IPs or multiple network interface names.
Example using network interface names for HA:
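A sketch, assuming the eth1 and eth2 interfaces from the earlier examples, placeholder join IPs, and a comma separator for --management-ips:
weka local setup container --resources-path /tmp/drives0.json -n drives0 --join-ips 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14 --clusterize --management-ips eth1,eth2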
4. Configure the SSD drives
Command: weka cluster drive add
To configure the SSD drives on each server in the cluster, or to add multiple drive paths, run the weka cluster drive add command.
Parameters
container-id*: (Required) The identifier of the drive container to which the local SSD drives are added.
device-paths*: (Required) List of block devices that identify the local SSDs. Each path must be a valid Unix block device name. Format: space-separated strings. Example: /dev/nvme0n1 /dev/nvme1n1
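For example, to add two local NVMe drives to drive container 0 (a placeholder ID; look up the actual drive container ID in your cluster):
weka cluster drive add 0 /dev/nvme0n1 /dev/nvme1n1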
5. Create compute containers
Command: weka local setup container
For each server in the cluster, create the compute containers using the resources generator output file compute0.json.
Parameters
resources-path*: (Required) A valid path to the resource file.
join-ips: IP:port pairs for the management processes to join the cluster. If no port is specified, the default WEKA port 14000 is used; set a port value only if you want to customize it. To restrict a client's operations to only the essential APIs for mounting and unmounting, connect to WEKA clusters through TCP base port + 3 (for example, 14003). The IP:port value must match the value used to create the container. Format: comma-separated IP addresses. Example: --join-ips 10.10.10.1,10.10.10.2,10.10.10.3:15000
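For example (placeholder management IPs, resource file generated in /tmp):
weka local setup container --resources-path /tmp/compute0.json -n compute0 --join-ips 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14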
6. Create frontend containers
Command: weka local setup container
For each server in the cluster, create the frontend containers using the resources generator output file frontend0.json.
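For example (placeholder management IPs, resource file generated in /tmp):
weka local setup container --resources-path /tmp/frontend0.json -n frontend0 --join-ips 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14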
Command example for installing a stateful client with restricted privileges
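A sketch with placeholder IPs, connecting through TCP base port + 3 (14003) to restrict the client to the essential mount and unmount APIs:
weka local setup container --resources-path /tmp/frontend0.json -n frontend0 --join-ips 10.10.10.1:14003,10.10.10.2:14003,10.10.10.3:14003 --client --restricted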
Parameters
resources-path*: (Required) A valid path to the resource file.
join-ips: IP:port pairs for the management processes to join the cluster. If no port is specified, the default WEKA port 14000 is used; set a port value only if you want to customize it. Format: comma-separated IP addresses. Example: --join-ips 10.10.10.1,10.10.10.2,10.10.10.3:15000
client: Set the container as a client.
auto-remove-timeout: Specify the timeout (in seconds) for automatically removing inactive client containers. Applicable only with the --client flag.
restricted: Set the client container to run with restricted privileges, as a regular user, regardless of the logged-in role.
7. Configure the number of data and parity drives
Command: weka cluster update --data-drives=<count> --parity-drives=<count>
Example: weka cluster update --data-drives=4 --parity-drives=2
8. Configure the number of hot spares
Command: weka cluster hot-spare <count>
Example: weka cluster hot-spare 1
9. Name the cluster
Command: weka cluster update --cluster-name=<cluster name>
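Example (the cluster name is a placeholder): weka cluster update --cluster-name=cluster01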