Manually configure the WEKA cluster using the resource generator
Detailed workflow for manually configuring the WEKA cluster using the resource generator in a multi-container backend architecture.
Perform this workflow using the resource generator only if you are not using the automated WMS, WSA, or WEKA Configurator.
The resource generator creates three resource files in the /tmp directory on each server: drives0.json, compute0.json, and frontend0.json. You then create the containers on each cluster server using these generated files.
Before you begin
Download the resource generator from the GitHub repository to your local server: https://github.com/weka/tools/blob/master/install/resources_generator.py.
Example:
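One way to do this, assuming wget is available on the local server (the raw GitHub URL below corresponds to the repository path above):
wget https://raw.githubusercontent.com/weka/tools/master/install/resources_generator.py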
Copy the resource generator from your local server to all servers in the cluster.
Example for a cluster with 8 servers:
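A minimal sketch, assuming passwordless SSH and placeholder server names weka-0 through weka-7 (adjust the names and target directory to your environment):
for i in {0..7}; do scp resources_generator.py weka-$i:/tmp/; done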
To enable execution, change the mode of the resource generator on all servers in the cluster.
Example for a cluster with 8 servers:
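Using the same placeholder server names and target directory:
for i in {0..7}; do ssh weka-$i chmod +x /tmp/resources_generator.py; done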
Workflow
1. Remove the default container
Command: weka local stop default && weka local rm -f default
Stop and remove the default container that is automatically created on each server.
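If you prefer to run this step from a single server, a sketch using the same placeholder server names and passwordless SSH:
for i in {0..7}; do ssh weka-$i "weka local stop default && weka local rm -f default"; done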
2. Generate the resource files
Command: resources_generator.py
To generate the resource files for the drive, compute, and frontend processes, run the following command on each backend server:
./resources_generator.py --net <net-devices> [options]
The resource generator allocates cores, memory, and other resources according to the values specified in the parameters.
The best practice for resource allocation is as follows:
1 drive core per NVMe device (SSD).
2-3 compute cores per drive core.
1-2 frontend cores if deploying a protocol container. If there is a spare core, it is used for a frontend container.
Minimum of 1 core for the OS.
Example 1: according to the best practice
For a server with 24 cores and 6 SSDs, allocate 6 drive cores and 12 compute cores, and optionally use 2 of the remaining cores for the frontend container. The OS uses the remaining 4 cores.
Run the following command line:
./resources_generator.py --net eth1 eth2 --drive-dedicated-cores 6 --compute-dedicated-cores 12 --frontend-dedicated-cores 2
Example 2: a server with a limited number of cores
For a server with 14 cores and 6 SSDs, allocate 6 drive cores and 6 compute cores, and optionally use 1 of the remaining cores for the frontend container. The OS uses the remaining core.
Run the following command line:
./resources_generator.py --net eth1 eth2 --drive-dedicated-cores 6 --compute-dedicated-cores 6 --frontend-dedicated-cores 1
Contact Professional Services for the recommended resource allocation settings for your system.
Parameters
Name | Value | Default |
---|---|---|
compute-core-ids | Specify the CPUs to allocate for the compute processes. Format: space-separated numbers. | |
compute-dedicated-cores | Specify the number of cores to dedicate for the compute processes. | The maximum available cores |
compute-memory | Specify the total memory to allocate for the compute processes. Format: value and unit without a space. Examples: 1024B, 10GiB, 5TiB. | The maximum available memory |
core-ids | Specify the CPUs to allocate for the WEKA processes. Format: space-separated numbers. | |
drive-core-ids | Specify the CPUs to allocate for the drive processes. Format: space-separated numbers. | |
drive-dedicated-cores | Specify the number of cores to dedicate for the drive processes. | 1 core per each detected drive |
drives | Specify the drives to use. This option overrides automatic detection. Format: space-separated strings. | All unmounted NVMe devices |
frontend-core-ids | Specify the CPUs to allocate for the frontend processes. Format: space-separated numbers. | - |
frontend-dedicated-cores | Specify the number of cores to dedicate for the frontend processes. | 1 |
max-cores-per-container | Override the default maximum number of cores per container for IO processes. If provided, the new value must be lower than 19. | 19 |
minimal-memory | Set each container's hugepages memory to 1.4 GiB multiplied by the number of IO processes on the container. | |
net | Specify the network devices to use. Format: space-separated strings. | |
no-rdma | Do not take RDMA support into account when computing memory requirements. | False |
num-cores | Override the auto-deduction of the number of cores. | All available cores |
path | Specify the path to write the resource files. | '.' |
spare-cores | Specify the number of cores to leave for the OS and non-WEKA processes. | 1 |
spare-memory | Specify the memory to reserve for non-WEKA requirements. Format: value and unit without a space. Examples: 10GiB, 1024B, 5TiB. | The maximum between 8 GiB and 2% of the total RAM |
weka-hugepages-memory | Specify the memory to allocate for compute, frontend, and drive processes. Format: value and unit without a space. Examples: 10GiB, 1024B, 5TiB. | The maximum available memory |
3. Create drive containers
Command: weka local setup container
For each server in the cluster, create the drive containers using the resource generator output file drives0.json.
The drives JSON file includes all the required values for creating the drive containers. Only the path to the JSON resource file is required (before cluster creation, the optional parameter join-ips is not relevant).
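For example, assuming the resource files were generated in /tmp on each server:
weka local setup container --resources-path /tmp/drives0.json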
Parameters
Name | Value |
---|---|
resources-path | A valid path to the resource file. |
4. Create a cluster
Command: weka cluster create
To create a cluster of the allocated containers, use the following command:
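A sketch of the invocation, where the hostnames (placeholders weka-0 through weka-5 below) identify the servers whose drive containers were created in the previous step:
weka cluster create weka-0 weka-1 weka-2 weka-3 weka-4 weka-5
Use the optional --containers-ips parameter to set the management IP addresses explicitly (required in IB installations, as noted below).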
Parameters
Name | Value | Default |
---|---|---|
container-hostnames | Hostnames or IP addresses of the cluster containers. If the drive containers do not use the default port 14000, you can specify hostname:port or ip:port. Minimum cluster size: 6. Format: space-separated strings. | |
containers-ips | IP addresses of the management interfaces. Use a list of IP addresses, one per container, in the same order as the hostnames. | IP of the first network device of the container |
Notes:
It is possible to use a hostname or an IP address. This string serves as the container's identifier in subsequent commands.
If a hostname is used, ensure the hostname to IP resolution mechanism is reliable.
Once the cluster creation is successfully completed, the cluster is in the initialization phase, and some commands can only run in this phase.
To configure high availability (HA), at least two cards must be defined for each container.
On successful completion of the cluster formation, every container receives a container ID. To display the list of containers and their IDs, run weka cluster container.
In IB installations, the --containers-ips parameter must specify the IP addresses of the IPoIB interfaces.
5. Configure the SSD drives
Command: weka cluster drive add
To configure the SSD drives on each server in the cluster, or to add multiple drive paths, use the following command:
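A sketch of the invocation, where <container-id> is the ID of the server's drive container and the device paths are placeholders for your local NVMe devices:
weka cluster drive add <container-id> <device-paths>
For example: weka cluster drive add 0 /dev/nvme0n1 /dev/nvme1n1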
Parameters
Name | Value |
---|---|
container-id | The identifier of the drive container to which the local SSD drives are added. |
device-paths | List of block devices that identify the local SSDs. Each path must be a valid Unix block device name. Format: space-separated strings. Example: /dev/nvme0n1 /dev/nvme1n1 |
6. Create compute containers
Command: weka local setup container
For each server in the cluster, create the compute containers using the resource generator output file compute0.json.
Parameters
Name | Value |
---|---|
resources-path | A valid path to the resource file. |
join-ips | IP:port pairs for the management processes to join the cluster. If a port is not specified, the command defaults to the standard WEKA port 14000. Set port values only if you want to customize the port. Format: comma-separated IP addresses. Example: 10.0.0.1,10.0.0.2,10.0.0.3:15000 |
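For example, assuming the resource files are in /tmp and 10.0.0.1 and 10.0.0.2 are placeholder management IP addresses of containers already in the cluster:
weka local setup container --resources-path /tmp/compute0.json --join-ips 10.0.0.1,10.0.0.2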
7. Create frontend containers
Command: weka local setup container
For each server in the cluster, create the frontend containers using the resource generator output file frontend0.json.
Parameters
Name | Value |
---|---|
resources-path | A valid path to the resource file. |
join-ips | IP:port pairs for the management processes to join the cluster. If a port is not specified, the command defaults to the standard WEKA port 14000. Set port values only if you want to customize the port. Format: comma-separated IP addresses. Example: 10.0.0.1,10.0.0.2,10.0.0.3:15000 |
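Similarly, for the frontend containers, using the same placeholder paths and IP addresses:
weka local setup container --resources-path /tmp/frontend0.json --join-ips 10.0.0.1,10.0.0.2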
8. Name the cluster
Command: weka cluster update --cluster-name=<cluster name>