Planning a WekaIO system is essential before the actual installation process. It involves planning the following:
Total SSD net capacity and performance requirements
A WekaIO system cluster runs on a group of hosts with local SSDs. To plan these hosts, the following information must be clarified and defined:
Capacity: Plan your net SSD capacity. Note that data management to object stores can be added after the installation. In the context of the planning stage, only the SSD capacity is required.
Redundancy Scheme: Define the optimal redundancy scheme required for the WekaIO system, as explained in Selecting a Redundancy Scheme.
Failure Domains: Determine whether failure domains are going to be used (this is optional), and if yes determine the number of failure domains and potential number of hosts in each failure domain, as described in Failure Domains, and plan accordingly.
Hot Spare: Define the required hot spare count, as described in Hot Spare.
Once all of this is defined, you can plan the SSD net storage capacity accordingly, as defined in the SSD Capacity Management formula. You should also have the following information on hand for the installation process:
Cluster size (number of hosts).
SSD capacity for each host, e.g., 12 hosts with a capacity of 6 TB each.
Planned protection scheme, e.g., 6+2.
Planned failure domains (optional).
Planned hot spare.
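To sanity-check these inputs, the net capacity can be roughed out as shown below. This is a minimal sketch assuming the common approximation net = raw × data/(data + parity) × (hosts − spares)/hosts; the authoritative calculation is the SSD Capacity Management formula, and the function and parameter names here are illustrative, not part of the WekaIO tooling:

```python
def approx_net_capacity_tb(hosts: int, ssd_tb_per_host: float,
                           data_drives: int, parity_drives: int,
                           hot_spares: int) -> float:
    """Approximate the net SSD capacity of a cluster (illustrative only;
    see the SSD Capacity Management formula for the exact calculation)."""
    raw_tb = hosts * ssd_tb_per_host
    protection_ratio = data_drives / (data_drives + parity_drives)
    spare_ratio = (hosts - hot_spares) / hosts
    return raw_tb * protection_ratio * spare_ratio

# The planning example from the text: 12 hosts x 6 TB, 6+2 protection,
# plus an assumed single hot spare.
print(round(approx_net_capacity_tb(12, 6, 6, 2, 1), 1))  # 49.5
```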
SSD resource planning involves determining how the defined capacity will be implemented across the SSDs. For each host, the following has to be determined:
Number of SSDs and capacity for each SSD (where the product of the two should meet the required capacity per host).
The technology to be used (NVMe, SAS, or SATA) and the specific SSD models, which have implications for SSD endurance and performance.
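For example, the per-host drive count follows from the required capacity and the chosen drive size. The helper below is a hypothetical illustration, not a WekaIO tool:

```python
import math

def drives_per_host(required_tb: float, drive_tb: float) -> int:
    """Smallest number of identical SSDs whose combined capacity
    meets the required per-host capacity."""
    return math.ceil(required_tb / drive_tb)

# A host that must provide 6 TB using 1.92 TB drives needs 4 of them.
print(drives_per_host(6, 1.92))  # 4
```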
The total per-host memory requirement is the sum of the following:

Fixed: 5 GB
Per core: 6.3 GB for each core
Capacity: See below; by default, 1.4 GB
The per host capacity requirement is calculated with the following formula:
Consequently, for a host with six cores dedicated to the WekaIO software and a capacity-based requirement of 7.3 GB, the overall memory requirement is: 5 + 6 × 6.3 + 7.3 = 50.1 GB
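The per-host sum above can be computed with a short script. The function name and parameters are illustrative, not part of the WekaIO tooling; the 5 GB fixed and 6.3 GB per-core figures come from this section:

```python
# Per-host memory requirement for a WekaIO backend host:
# fixed 5 GB + 6.3 GB per dedicated core + capacity-based memory.
FIXED_GB = 5.0
PER_CORE_GB = 6.3

def host_memory_gb(cores: int, capacity_based_gb: float = 1.4) -> float:
    """Return the per-host memory requirement in GB (illustrative helper)."""
    return FIXED_GB + cores * PER_CORE_GB + capacity_based_gb

# The worked example from the text: 6 cores, 7.3 GB capacity-based memory.
print(round(host_memory_gb(6, 7.3), 1))  # 50.1
```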
The WekaIO software on a client host requires 5 GB of memory.
The number of physical cores dedicated to the WekaIO software should be planned according to the following guidelines:
At least one physical core should be dedicated to the operating system; the rest can be allocated to the WekaIO software.
Enough cores should be allocated to support the performance targets. For help on planning this, contact the WekaIO Support Team.
Enough memory should be allocated to match core allocation, as discussed above.
In general, it is recommended to allocate as many cores as possible to the WekaIO system, with the following limitations:
There has to be one core for the operating system.
The running of other applications on the same host (converged WekaIO system deployment) is supported. However, this is not covered in this documentation. For further information, contact the WekaIO Support Team.
There has to be sufficient memory, as described above.
No more than 20 physical cores can be assigned to WekaIO system processes.
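The limitations above can be summarized in a small helper. This is a sketch: the 20-core cap and the memory figures come from this section, while the function name, parameters, and example values are illustrative:

```python
def max_weka_cores(physical_cores: int, free_memory_gb: float,
                   fixed_gb: float = 5.0, per_core_gb: float = 6.3,
                   capacity_gb: float = 1.4) -> int:
    """Largest core count satisfying the constraints in this section:
    at least one core left for the OS, at most 20 cores for WekaIO
    processes, and enough memory to match the core allocation."""
    os_limit = physical_cores - 1          # one core reserved for the OS
    hard_limit = 20                        # WekaIO process cap
    mem_limit = int((free_memory_gb - fixed_gb - capacity_gb) // per_core_gb)
    return max(0, min(os_limit, hard_limit, mem_limit))

# 16 physical cores with 64 GB free: memory is the binding constraint.
print(max_weka_cores(16, 64))  # 9
```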
On a client host, the WekaIO software consumes a single physical core by default. If the client host is configured with hyper-threading, the WekaIO software consumes two logical cores.
If the client networking is UDP-based, no cores are dedicated to the WekaIO software; the operating system allocates CPU to the WekaIO processes as it does for any other process.
Before proceeding to the WekaIO system initialization/installation process, it is mandatory to determine which of the two networking technologies, InfiniBand or Ethernet, is to be used.
Client hosts can be configured with networking as above, which provides the highest performance and lowest latency but requires compatible hardware and dedicated core resources. If compatible hardware is not available, or if allocating a physical core to the WekaIO system is problematic, client networking can be configured to use the kernel UDP service. In such cases, performance is reduced and latency increases.