Filesystems, object stores, and filesystem groups
This page describes the three types of entities relevant to data storage in the WEKA system: filesystems, object stores, and filesystem groups.
A WEKA filesystem is similar to a regular on-disk filesystem, but it is distributed across all the servers in the cluster. Consequently, filesystems are not associated with any physical object in the WEKA system; they act as root directories with space limitations.
The system supports up to 1024 filesystems, all of which are equally balanced across all the SSDs and CPU cores assigned to the system. As a result, creating a new filesystem or resizing an existing one is an instant management operation performed without constraints.
A filesystem has a defined capacity limit and is associated with a predefined filesystem group. A filesystem that belongs to a tiered filesystem group must have both a total capacity limit and an SSD capacity cap. The combined SSD capacity of all filesystems cannot exceed the total net SSD capacity of the system.
Thin provisioning is a method of on-demand SSD capacity allocation based on user requirements. With thin provisioning, the filesystem capacity is defined by a minimum guaranteed capacity and a maximum capacity (which can virtually exceed the available SSD capacity).
The system allocates more capacity (up to the total available SSD capacity) to users who have consumed their guaranteed minimum. Conversely, when users free up space by deleting files or transferring data elsewhere, the idle space is reclaimed and repurposed for other workloads that need SSD capacity.
Thin provisioning is beneficial in various use cases:
- Tiered filesystems: On tiered filesystems, available SSD capacity is leveraged for extra performance and released to the object store when needed by other filesystems.
- Auto-scaling groups: When using auto-scaling groups, thin provisioning helps automatically expand and shrink the filesystem's SSD capacity for extra performance.
- Separation of projects into filesystems: If a separate filesystem is required for each project, and the administrator does not expect all filesystems to be fully utilized simultaneously, creating a thin-provisioned filesystem per project is a good solution. Each filesystem is allocated its minimum capacity but can consume more when needed, based on the actual available SSD capacity.
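The guarantee-and-grow behavior described above can be modeled in a few lines. This is a minimal sketch with hypothetical class and method names; WEKA's actual allocator is internal and not public.

```python
# Illustrative model of thin-provisioned SSD allocation: each filesystem is
# guaranteed its minimum capacity and may grow toward its maximum as long as
# physical SSD capacity remains. All names and units are hypothetical.

class ThinFilesystem:
    def __init__(self, name: str, min_ssd_gb: int, max_ssd_gb: int):
        self.name = name
        self.min_ssd_gb = min_ssd_gb  # guaranteed minimum capacity
        self.max_ssd_gb = max_ssd_gb  # may virtually exceed physical capacity
        self.used_gb = 0

class SsdPool:
    def __init__(self, total_gb: int, filesystems: list):
        # The guaranteed minimums must fit in the physical SSD capacity.
        assert sum(fs.min_ssd_gb for fs in filesystems) <= total_gb
        self.total_gb = total_gb
        self.filesystems = filesystems

    def committed_gb(self) -> int:
        # Each filesystem effectively reserves max(used, guaranteed minimum).
        return sum(max(fs.used_gb, fs.min_ssd_gb) for fs in self.filesystems)

    def write(self, fs: ThinFilesystem, gb: int) -> bool:
        """Admit a write only if it stays under the filesystem's maximum
        and under the pool's physical capacity."""
        if fs.used_gb + gb > fs.max_ssd_gb:
            return False
        new_committed = (self.committed_gb()
                         - max(fs.used_gb, fs.min_ssd_gb)
                         + max(fs.used_gb + gb, fs.min_ssd_gb))
        if new_committed > self.total_gb:
            return False
        fs.used_gb += gb
        return True
```

In this model, a write within a filesystem's own guarantee always succeeds, while growth beyond it competes for the shared free capacity; freeing space in one filesystem immediately makes capacity available to the others.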
The system supports the following filesystem limits:
- Number of files or directories: Up to 6.4 trillion (6.4 * 10^12)
- Number of files in a single directory: Up to 6.4 billion (6.4 * 10^9)
- Total capacity with object store: Up to 14 EB
- Total SSD capacity: Up to 512 PB
- File size: Up to 4 PB
WEKA data reduction is a cluster-wide capability that can be enabled per filesystem. It uses block-variable differential compression and advanced de-duplication techniques across all filesystems to reduce the capacity consumed by user data, providing significant capacity savings.
The compression ratio is workload-dependent; it is most effective with text-based data, large-scale unstructured datasets, log analysis, databases, code repositories, and sensor data.
Data reduction applies to user data (not metadata) per filesystem. It can be enabled only on thin-provisioned, non-tiered, and unencrypted filesystems, on a cluster with a valid Data Efficiency Option (DEO) license.
Data reduction is a post-process activity. New data written to the cluster is written uncompressed. The data reduction process runs as a background task with lower priority than tasks serving user IO requests.
Data reduction starts once enough data is written to the filesystems. It includes the following tasks:
1. Ingestion: The data reduction runs two sub-tasks:
   - Clusterization: Data reduction is applied to data blocks at the 4K block level. The system looks for similarity across uncompressed data in all the filesystems enabled for data reduction.
   - Compression: The system reads the similar and unique blocks and compresses each type separately. Then, the system writes the compressed data to the filesystem.
2. Defragmentation: Uncompressed data related to a successful compression operation is marked for deletion. The defragmentation process waits for enough blocks to be invalidated and then deletes them permanently.
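The ingestion flow above can be illustrated with a toy sketch. WEKA's clusterization and compression algorithms are proprietary; this sketch substitutes exact-match deduplication of 4 KiB blocks (via hashing) and generic zlib compression purely to show the post-process shape of the pipeline.

```python
import hashlib
import zlib

BLOCK = 4096  # data reduction operates at the 4K block level

def reduce_blocks(data: bytes):
    """Post-process reduction sketch: deduplicate identical 4 KiB blocks,
    then compress each unique block. Returns (unique_store, block_refs)."""
    unique = {}  # digest -> compressed unique block
    refs = []    # ordered digests that reconstruct the original data
    for off in range(0, len(data), BLOCK):
        block = data[off:off + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in unique:
            unique[digest] = zlib.compress(block)
        refs.append(digest)
    return unique, refs

def restore(unique: dict, refs: list) -> bytes:
    """Inverse operation: decompress the unique blocks back into place."""
    return b"".join(zlib.decompress(unique[d]) for d in refs)
```

Because the reduction runs after the uncompressed data is safely on SSD, the original blocks can only be invalidated once the compressed copies are written, which is exactly what the defragmentation step above cleans up.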
(Figure: Data reduction process at a glance)
Both data at rest (residing on SSD and object store) and data in transit can be encrypted by enabling the filesystem encryption feature. Whether a filesystem is encrypted is decided when creating the filesystem.
To create encrypted filesystems, deploy a Key Management System (KMS).
Note: You can only set the data encryption when creating a filesystem.
In addition to the capacity limitation, each filesystem has a limitation on the amount of metadata. The system-wide metadata limit is determined by the SSD capacity allocated to the WEKA system and the RAM resources allocated to the WEKA system processes.
The WEKA system tracks metadata units in RAM. If the RAM limit is reached, the system pages these metadata tracking units to the SSD and raises an alert. This leaves the administrator enough time to increase system resources, as the system keeps serving IOs with minimal performance impact.
By default, the metadata limit associated with a filesystem is proportional to the filesystem SSD size. It is possible to override this default by defining a filesystem-specific max-files parameter. The filesystem limit is a logical limit to control the specific filesystem usage and can be updated by the administrator when necessary.
The total metadata limits for all filesystems can exceed the amount of metadata information that fits in the system RAM. In such a case, the least-recently-used units are paged to disk as necessary, with minimal impact.
Each metadata unit consumes 4 KB of SSD space (not tiered) and 20 bytes of RAM.
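A quick back-of-the-envelope calculation follows from these per-unit costs. The constants are the ones stated above; the helper name is illustrative.

```python
# Per-unit metadata costs, as stated above:
SSD_BYTES_PER_UNIT = 4 * 1024  # 4 KB of SSD space (not tiered)
RAM_BYTES_PER_UNIT = 20        # 20 bytes of RAM

def metadata_footprint(metadata_units: int):
    """Return (ssd_bytes, ram_bytes) consumed by tracking the given units."""
    return (metadata_units * SSD_BYTES_PER_UNIT,
            metadata_units * RAM_BYTES_PER_UNIT)

# Example: 1 billion metadata units consume ~4.1 TB of SSD and 20 GB of RAM.
ssd_bytes, ram_bytes = metadata_footprint(10**9)
```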
Throughout this documentation, the metadata limitation per filesystem is referred to as the max-files parameter, which specifies the number of metadata units (not the number of files). This parameter encapsulates both the file count and the file sizes.
The following table specifies the required metadata units according to the file size. These specifications apply to files residing on SSDs or tiered to object stores.
| File size | Number of metadata units | Example |
| --- | --- | --- |
| < 0.5 MB | 1 | A filesystem with 1 billion files of 64 KB each requires 1 billion metadata units. |
| 0.5 MB - 1 MB | 2 | A filesystem with 1 million files of 750 KB each requires 2 million metadata units. |
| > 1 MB | 2 for the first 1 MB, plus 1 per additional MB | |
Each directory requires two metadata units, compared to one for a small file.
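The sizing rules above can be captured as a small helper. The function name is hypothetical, and rounding partial megabytes up (above 1 MB) is an assumption not stated in the source.

```python
import math

MB = 10**6  # decimal megabytes, matching the units used above

def metadata_units(file_size_bytes: int) -> int:
    """Metadata units required for a single file, following the rules above.
    Rounding partial megabytes up (above 1 MB) is an assumption."""
    if file_size_bytes < MB // 2:  # < 0.5 MB
        return 1
    if file_size_bytes <= MB:      # 0.5 MB - 1 MB
        return 2
    # 2 units for the first 1 MB, plus 1 per additional (partial) MB.
    return 2 + math.ceil((file_size_bytes - MB) / MB)

DIRECTORY_UNITS = 2  # each directory requires two metadata units
```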
In the WEKA system, object stores represent optional external storage media, ideal for storing warm data. Object stores used in tiered WEKA configurations can be cloud-based, located in the same location as the cluster (local), or at a remote location.
WEKA supports object stores for tiering (tiering and local snapshots) and backup (snapshots only). Both tiering and backup can be used for the same filesystem.
Object store buckets are most useful when a cost-effective data storage tier is required at a price point that server-based SSDs cannot satisfy.
An object store bucket definition contains the object store DNS name, bucket identifier, and access credentials. The bucket must be dedicated to the WEKA system and not be accessible by other applications.
Filesystem connectivity to object store buckets can be used in the data lifecycle management and Snap-to-Object features.
In the WEKA system, filesystems are grouped into a maximum of eight filesystem groups.
Each filesystem group has its own tiering control parameters. While each tiered filesystem has its own object store bucket, the tiering policy is the same for all tiered filesystems under the same filesystem group.