The Data Services container runs resource-intensive tasks in the background. At present, it runs the quota coloring task; upcoming releases will offload additional resource-intensive tasks to it.
Running these tasks in the background keeps the CLI accessible and responsive instead of blocking it on long-running work. This approach improves performance, efficiency, and scalability when managing quotas. If a task is interrupted, it resumes automatically, providing reliability.
If the Data Services container is not operational, the quota coloring task falls back to the previous implementation and runs in a single process, which can cause the CLI to hang for an extended period. Keeping the Data Services container running prevents this situation.
To improve data service performance, you can set up multiple Data Service containers, one per WEKA server.
After setting up the Data Service container, you can manage it like any other container within the cluster. If there’s a need to adjust its resources, use the weka cluster container resources or weka local resources commands. For more details, see Expand specific resources of a container.
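As a sketch of such an adjustment, assuming the Data Services container has ID 15 (as in the verification example later on this page); exact flags can vary between WEKA versions, so check the command's --help output first:

```shell
# Show the resources currently allocated to container 15
weka cluster container resources 15

# Illustrative only: adjust the container memory
# (verify the flag with `weka cluster container resources --help` on your version)
weka cluster container resources 15 --memory 1.5GB
```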
Before you begin
Ensure the server where you’re adding this container has a minimum of 5.5 GB of memory available for the container’s use.
The Data Service containers require a persistent 22 GB filesystem for intermediate global configuration data. Do one of the following:
If a configuration filesystem for the protocol containers already exists (typically named .config_fs), use it and expand its size by 22 GB.
If a configuration filesystem does not exist, create a dedicated 22 GB configuration filesystem for the Data Service containers.
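For example, assuming a filesystem group named default and that .config_fs currently has a 22 GB capacity, the two options might look like the following sketch (names and sizes are illustrative; adjust them to your environment):

```shell
# Option 1: expand an existing configuration filesystem by 22 GB
# (new total = current capacity + 22 GB)
weka fs update .config_fs --total-capacity 44GB

# Option 2: create a dedicated 22 GB configuration filesystem
weka fs create .config_fs default 22GB
```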
Set the Data Service global configuration. Run the following command:
weka dataservice global-config set --config-fs <configuration filesystem name>
Example:
weka dataservice global-config set --config-fs .config_fs
By default, the Data Services container shares the core of the Management process. However, if you have enough resources, you can assign a separate core to it.
Procedure
Identify the leader IP address: Run the following command:
weka cluster process --leader
The returned IP address is used in the join-ips parameter.
Example
$ weka cluster process --leader
PROCESS ID  HOSTNAME     CONTAINER  IPS             STATUS  RELEASE  ROLES       NETWORK  CPU  MEMORY  UPTIME    LAST FAILURE
60          DatSphere-1  drives0    10.108.234.164  UP      4.3.2    MANAGEMENT  UDP      N/A          1:21:05h
Set up the Data Services container: Run the container setup command with the following parameters (parameters marked * are mandatory):
name
The Data Services container name. Set dataserv0 to avoid confusion.
only-dataserv-cores*
Creates a Data Services container. This parameter is mandatory.
base-port
If a base-port is not specified, the Data Services container attempts to allocate an available port range on its own and may still initialize successfully. However, for predictable operation, it is recommended to specify the base port explicitly.
join-ips*
The cluster's leader IP address. This parameter is mandatory for joining the Data Services container to the cluster.
management-ips
Optional. If not provided, the management IP of the server is used automatically.
memory
The container memory to allocate for huge pages. It is recommended to set it to 1.5 GB.
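Combining these parameters, a setup command might look like the following sketch. The base port (14300) and leader IP (10.108.234.164, taken from the example output above) are illustrative and must match your environment:

```shell
# Illustrative: create the Data Services container and join it to the cluster
weka local setup container --name dataserv0 --only-dataserv-cores \
  --base-port 14300 --join-ips 10.108.234.164 --memory 1.5GB
```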
Verify the Data Services container is visible in the cluster: Run weka cluster container.
Example
See dataserv0 in the last row (CONTAINER ID 15).
$ weka cluster container
CONTAINER ID  HOSTNAME      CONTAINER  IPS             STATUS  RELEASE  FAILURE DOMAIN  CORES  MEMORY   UPTIME    LAST FAILURE
0             DataSphere-0  drives0    10.108.249.241  UP      4.3.2    DOM-000         1      1.54 GB  1:29:38h
1             DataSphere-1  drives0    10.108.211.190  UP      4.3.2    DOM-001         1      1.54 GB  1:29:39h
2             DataSphere-2  drives0    10.108.47.134   UP      4.3.2    DOM-002         1      1.54 GB  1:29:39h
3             DataSphere-3  drives0    10.108.234.164  UP      4.3.2    DOM-003         1      1.54 GB  1:29:39h
4             DataSphere-4  drives0    10.108.166.243  UP      4.3.2    DOM-004         1      1.54 GB  1:29:38h
5             DataSphere-0  compute0   10.108.249.241  UP      4.3.2    DOM-000         1      1.50 GB  1:28:56h
6             DataSphere-1  compute0   10.108.211.190  UP      4.3.2    DOM-001         1      1.50 GB  1:28:57h
7             DataSphere-2  compute0   10.108.47.134   UP      4.3.2    DOM-002         1      1.50 GB  1:28:57h
8             DataSphere-3  compute0   10.108.234.164  UP      4.3.2    DOM-003         1      1.50 GB  1:28:57h
9             DataSphere-4  compute0   10.108.166.243  UP      4.3.2    DOM-004         1      1.50 GB  1:28:58h
10            DataSphere-0  frontend0  10.108.249.241  UP      4.3.2    DOM-000         1      1.47 GB  1:28:13h
11            DataSphere-1  frontend0  10.108.211.190  UP      4.3.2    DOM-001         1      1.47 GB  1:28:13h
12            DataSphere-2  frontend0  10.108.47.134   UP      4.3.2    DOM-002         1      1.47 GB  1:28:13h
13            DataSphere-3  frontend0  10.108.234.164  UP      4.3.2    DOM-003         1      1.47 GB  1:28:14h
14            DataSphere-4  frontend0  10.108.166.243  UP      4.3.2    DOM-004         1      1.47 GB  1:28:14h
15            DataSphere-0  dataserv0  10.108.249.241  UP      4.3.2                    1      1.47 GB  0:07:41h
Verify the DATASERV and MANAGEMENT processes have joined the cluster: Run weka cluster process.
Example
See PROCESS IDs 300 and 301.
$ weka cluster process
PROCESS ID  HOSTNAME      CONTAINER  IPS             STATUS  RELEASE  ROLES       NETWORK  CPU  MEMORY   UPTIME    LAST FAILURE
0           DataSphere-0  drives0    10.108.249.241  UP      4.3.2    MANAGEMENT  UDP      N/A           1:22:26h  Host joined a new cluster (1 hour ago)
1           DataSphere-0  drives0    10.108.6.1      UP      4.3.2    DRIVES      DPDK     2    1.54 GB  1:22:24h
20          DataSphere-1  drives0    10.108.211.190  UP      4.3.2    MANAGEMENT  UDP      N/A           1:22:28h  Host joined a new cluster (1 hour ago)
21          DataSphere-1  drives0    10.108.18.211   UP      4.3.2    DRIVES      DPDK     4    1.54 GB  1:22:24h
40          DataSphere-2  drives0    10.108.47.134   UP      4.3.2    MANAGEMENT  UDP      N/A           1:22:27h  Host joined a new cluster (1 hour ago)
41          DataSphere-2  drives0    10.108.0.189    UP      4.3.2    DRIVES      DPDK     4    1.54 GB  1:22:24h
60          DataSphere-3  drives0    10.108.234.164  UP      4.3.2    MANAGEMENT  UDP      N/A           1:22:29h
61          DataSphere-3  drives0    10.108.181.42   UP      4.3.2    DRIVES      DPDK     6    1.54 GB  1:22:24h
80          DataSphere-4  drives0    10.108.166.243  UP      4.3.2    MANAGEMENT  UDP      N/A           1:22:26h  Host joined a new cluster (1 hour ago)
81          DataSphere-4  drives0    10.108.32.208   UP      4.3.2    DRIVES      DPDK     2    1.54 GB  1:22:24h
100         DataSphere-0  compute0   10.108.249.241  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:52h  Configuration snapshot pulled (1 hour ago)
101         DataSphere-0  compute0   10.108.150.39   UP      4.3.2    COMPUTE     DPDK     6    1.50 GB  1:21:50h
120         DataSphere-1  compute0   10.108.211.190  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:52h  Configuration snapshot pulled (1 hour ago)
121         DataSphere-1  compute0   10.108.162.229  UP      4.3.2    COMPUTE     DPDK     2    1.50 GB  1:21:50h
140         DataSphere-2  compute0   10.108.47.134   UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:46h  Removed from cluster: Not reachable by the cluster (1 hour ago)
141         DataSphere-2  compute0   10.108.38.178   UP      4.3.2    COMPUTE     DPDK     2    1.50 GB  1:21:50h
160         DataSphere-3  compute0   10.108.234.164  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:52h  Configuration snapshot pulled (1 hour ago)
161         DataSphere-3  compute0   10.108.254.134  UP      4.3.2    COMPUTE     DPDK     4    1.50 GB  1:21:50h
180         DataSphere-4  compute0   10.108.166.243  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:46h  Removed from cluster: Not reachable by the cluster (1 hour ago)
181         DataSphere-4  compute0   10.108.0.100    UP      4.3.2    COMPUTE     DPDK     4    1.50 GB  1:21:50h
200         DataSphere-0  frontend0  10.108.249.241  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:01h  Removed from cluster: Not reachable by the cluster (1 hour ago)
201         DataSphere-0  frontend0  10.108.10.152   UP      4.3.2    FRONTEND    DPDK     4    1.47 GB  1:21:05h
220         DataSphere-1  frontend0  10.108.211.190  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:01h  Removed from cluster: Not reachable by the cluster (1 hour ago)
221         DataSphere-1  frontend0  10.108.201.178  UP      4.3.2    FRONTEND    DPDK     6    1.47 GB  1:21:05h
240         DataSphere-2  frontend0  10.108.47.134   UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:01h  Removed from cluster: Not reachable by the cluster (1 hour ago)
241         DataSphere-2  frontend0  10.108.172.186  UP      4.3.2    FRONTEND    DPDK     6    1.47 GB  1:21:05h
260         DataSphere-3  frontend0  10.108.234.164  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:08h  Configuration snapshot pulled (1 hour ago)
261         DataSphere-3  frontend0  10.108.145.253  UP      4.3.2    FRONTEND    DPDK     2    1.47 GB  1:21:05h
280         DataSphere-4  frontend0  10.108.166.243  UP      4.3.2    MANAGEMENT  UDP      N/A           1:21:08h  Configuration snapshot pulled (1 hour ago)
281         DataSphere-4  frontend0  10.108.219.191  UP      4.3.2    FRONTEND    DPDK     6    1.47 GB  1:21:05h
300         DataSphere-0  dataserv0  10.108.249.241  UP      4.3.2    MANAGEMENT  UDP      N/A           33.05s    Configuration snapshot pulled (40 seconds ago)
301         DataSphere-0  dataserv0  10.108.249.241  UP      4.3.2    DATASERV    UDP      1    1.47 GB  14.55s