Consider a scenario with a 100 TB filesystem, 500 TB of object store space, and 100 TB of SSD space. If the data Retention Period policy is defined as 1 month and only 10 TB of data are written per month, it will probably be possible to maintain the data from the last 10 months on the SSDs. If, on the other hand, 200 TB of data are written per month, only about half a month of data can be maintained on the SSDs. Furthermore, there is no guarantee that the data remaining on the SSDs is the data written during the last 2 weeks of the month; this also depends on the Tiering Cue.
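The capacity arithmetic above can be sketched as follows (the capacity and write-rate figures are taken from this example, not system defaults):

```python
# How many months of recent data fit on the SSD tier at a given write rate.
SSD_CAPACITY_TB = 100  # SSD capacity from the example scenario

def months_on_ssd(write_rate_tb_per_month):
    """Rough number of months of recent data that fits on the SSD tier."""
    return SSD_CAPACITY_TB / write_rate_tb_per_month

print(months_on_ssd(10))   # 10.0 -> roughly the last 10 months fit on SSD
print(months_on_ssd(200))  # 0.5  -> only about half a month fits on SSD
```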
Consequently, the data Retention Period policy determines the resolution of the WekaIO system release decisions. If it is set to 1 month and the SSD capacity is sufficient for 10 months of writing, only the most recent month of data is guaranteed to be kept on the SSDs.
The Tiering Cue policy defines the period of time to wait before the release of data from the SSD to the object store. It is typically used when it is expected that some of the data being written will be rewritten/modified/deleted in the short term.
The WekaIO system integrates a rolling progress control with three rotating periods of 0, 1 and 2.
Period 0: All data written is tagged as written in the current period.
Period 1: The switch from period 0 to period 1 occurs after the period of time defined in the Tiering Cue policy elapses.
Period 2: Starts after the period of time defined in the Tiering Cue, triggering the transfer of data written in period 0 from the SSD to the object store.
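The rotating periods above can be modeled with a small sketch (the class and the chunk names are hypothetical, not a WekaIO API):

```python
class RollingPeriods:
    """Toy model of the three rotating periods (0, 1, 2) described above.

    Writes are tagged with the current period (0). Each time a Tiering Cue
    interval elapses, the periods rotate: period-0 data ages into period 1,
    period-1 data ages into period 2, and data entering period 2 is
    transferred from the SSD to the object store.
    """

    def __init__(self):
        self.period0 = []  # data written in the current period
        self.period1 = []  # data that has aged one Tiering Cue interval

    def write(self, item):
        self.period0.append(item)  # tag as written in the current period

    def tiering_cue_elapsed(self):
        # Period-1 data enters period 2: transfer it to the object store.
        released = self.period1
        self.period1 = self.period0
        self.period0 = []
        return released

rp = RollingPeriods()
rp.write("chunk-a")
print(rp.tiering_cue_elapsed())  # [] -> chunk-a has only aged into period 1
print(rp.tiering_cue_elapsed())  # ['chunk-a'] -> now in period 2, transferred
```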
Since the WekaIO system is a highly scalable data storage system, data storage policies in tiered WekaIO configurations cannot be based on cluster-wide FIFO methodology, because clusters can contain billions of files. Instead, data retention is managed by timestamping every piece of data, where the timestamp is based on a resolution of intervals which may extend from minutes to weeks. The WekaIO system maintains the interval in which each piece of data was created, accessed or last modified.
Users only specify the data Retention Period, and based on this, each interval is one sixth of the data Retention Period. Data written, modified or accessed prior to the last 7 intervals is always released, even if SSD space is available.
At any given moment, the WekaIO system releases the filesystem data of a single interval, transferring it from the SSDs to the object store. This release process is driven by the available SSD capacity. Consequently, if there is sufficient SSD capacity, only data that was written or last modified more than 7 intervals ago will be released.
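A sketch of this release decision, assuming (as in the example that follows) 5-day intervals within a 30-day Retention Period:

```python
RETENTION_DAYS = 30          # user-defined Retention Period (example value)
INTERVAL_DAYS = 5            # interval resolution in this example
RELEASE_AFTER_INTERVALS = 7  # data older than 7 intervals is always released

def interval_of(age_days):
    """Interval category (0 = newest) for data of a given age in days."""
    return age_days // INTERVAL_DAYS

def must_release(age_days):
    """Data aged past the 7th interval is released even if SSD space is free."""
    return interval_of(age_days) >= RELEASE_AFTER_INTERVALS

print(interval_of(3))    # 0     -> newest interval
print(must_release(30))  # False -> still within the tracked window
print(must_release(36))  # True  -> older than 35 days, always released
```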
Now consider a situation where the total SSD capacity is 100 TB. The example above then plays out as follows:
Since the resolution in the WekaIO system is the interval, in the example above the SSD capacity of 100 TB is insufficient for all of the data written over the 35 days being tracked (the 30-day Retention Period plus one interval). Consequently, the oldest data, i.e., the data least recently accessed or modified, has to be released to the object store. In this example, this release operation will have to be performed in the middle of interval 6 and will involve the release of data from interval 0.
This counting of the age of the data, at a resolution of 5 days, is performed across 8 different categories. Since the calculation rolls constantly, the following will occur in the example above:
Data from days 1-30 (January 1-30) will all be on the SSD. Some of it may also be tiered to the object store, depending on the defined Tiering Cue.
Data from days 31-35 (January 31-February 4) will be partially on the SSD and partially released to the object store. However, there is no control over the order in which data from these days is released.
Data older than 35 days will be released to the object store.
Now consider the following filesystem scenario, where the whole SSD storage capacity of 100 TB is utilized in the first 3 intervals:
When much more data is written and there is insufficient SSD capacity, the data from interval 0 will be released as soon as the 100 TB capacity is reached. This represents a violation of the Retention Period. In such a situation, either the SSD capacity should be increased or the Retention Period reduced.
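The capacity-driven early release can be sketched with illustrative figures (the per-interval write rate is an assumption, not a value from the text):

```python
SSD_CAPACITY_TB = 100  # SSD capacity from the example

def intervals_until_full(tb_written_per_interval):
    """Whole intervals of new writes the SSD tier absorbs before hitting
    the capacity limit and forcing an early release of interval 0."""
    return SSD_CAPACITY_TB // tb_written_per_interval

# At an assumed 35 TB per 5-day interval, the SSD fills during the 3rd
# interval, so interval 0 is released long before the Retention Period
# elapses -- the violation described above.
print(intervals_until_full(35))  # 2
```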
The tiering process (the tiering of data from the SSDs to the object stores) is based on when data is created or modified. It is managed similarly to the Retention Period, with the data timestamped in intervals. The length of each interval is the user-defined Tiering Cue. The WekaIO system maintains 3 such intervals at any given time, and always tiers the data in the third (oldest) interval.
Since the tiering process always applies to the data in the third (oldest) interval, in this example the data written or modified on January 1 will be tiered to the object store on January 3. Consequently, data will never be tiered before it is at least 1 day old (the user-defined Tiering Cue), with the worst case being data written at the end of January 1 being tiered at the beginning of January 3.
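Under a 1-day Tiering Cue, the calendar arithmetic works out as follows (a sketch; the day numbering is the example's):

```python
TIERING_CUE_DAYS = 1  # the user-defined Tiering Cue in this example

def tiering_day(write_day):
    """Day on which data written during `write_day` reaches the third
    interval and is tiered: two whole Tiering Cue intervals later."""
    return write_day + 2 * TIERING_CUE_DAYS

# Data written on January 1 is tiered on January 3. Data written at the very
# end of January 1 is just over 1 day old when tiered, so data is never
# tiered before it is at least TIERING_CUE_DAYS old.
print(tiering_day(1))  # 3
```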
An SSD-only filesystem group can be reconfigured as a tiered one by adding an object store definition. In such a situation, the default is to maintain the filesystem size. In order to increase the filesystem size, the total capacity field can be modified, while the existing SSD capacity remains the same.
If it is not possible to maintain the defined Retention Period or Tiering Cue policies, a TieredFilesystemBreakingPolicy event will occur, and random pieces of data will be released in order to free space on the SSDs. Users are alerted to such a situation through an ObjectStoragePossibleBottleneck event, enabling them to consider either raising the bandwidth or upgrading the object store performance.
Object store and SSD access statistics can be viewed in the WekaIO system GUI or obtained using CLI commands.
Once tiering has been defined for a filesystem, its data management policy cannot be deleted.
Tiered files are always accessible and should generally be treated as regular files. Moreover, while files may be tiered, their metadata is always maintained on the SSDs. This allows traversing files and directories without worrying about how such operations may affect performance.
Sometimes, it may be necessary to access previously-tiered files quickly. In such situations, it is possible to request that the WekaIO system fetch the files back to the SSD without accessing them directly. This prefetch is performed using the weka fs tier fetch command, as follows:
weka fs tier fetch
----------------------
Description:
    Fetch object-stored files to SSD store

Arguments:
    path    A file or directory path to fetch to SSD store

Usage:
    weka fs tier fetch <path> [options]

Options:
    --glob=<glob>                      Glob expression to filter files by. Only matching files will be fetched
    --dont-recurse                     Do not recurse into subdirectories
    -L, --dereference                  Follow symbolic links
    -h, --help                         Display help
    --help-syntax                      Display help on the syntax of the switches
    -H=<hostname>, --host=<hostname>   Specify the host. Alternatively, use the $WEKA_HOST env variable
    -J, --json                         Format output as JSON
In order to fetch a directory that contains a large number of files, it is recommended to use the xargs command in a similar manner, as follows:
find -L <directory path> -type f | xargs -r -n512 -P64 weka fs tier fetch -v
In order to ensure that the fetch is effective, the following must be taken into account:
Free SSD Capacity: There has to be sufficient free SSD capacity to retain the files that are to be fetched.
Tiering Policy: The tiering policy may release some of the files back to the object store after they have been fetched, or even during the fetch if it takes longer than expected. The Retention Period should be long enough to allow the fetch to complete and the data to be accessed before it is released again.
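One way to check the free-capacity requirement before a large fetch is to total the sizes of the files involved. A sketch (the helper name is ours; `followlinks=True` mirrors the `-L` flag above):

```python
import os

def fetch_size_bytes(directory):
    """Total size in bytes of all regular files under `directory`,
    following symbolic links as `weka fs tier fetch -L` would."""
    total = 0
    for root, _dirs, files in os.walk(directory, followlinks=True):
        for name in files:
            path = os.path.join(root, name)
            if os.path.isfile(path):  # skip broken symlinks
                total += os.path.getsize(path)
    return total

# Compare the result against the free SSD capacity before issuing the fetch.
```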