Manage Snap-To-Object using the CLI
The Snap-To-Object feature enables committing all the data of a specific snapshot to an object store.
Using the CLI, you can:
Upload a snapshot
Create a filesystem from an uploaded snapshot
Manage synchronous snapshots
Recover from a remote snapshot
Upload a snapshot
Command: weka fs snapshot upload
Use the following command line to upload an existing snapshot:
weka fs snapshot upload <file-system> <snapshot> [--site site]
Parameters
file-system (required): Name of the filesystem.
snapshot (required): Name of the snapshot of the <file-system> filesystem to upload.
site: Location for the snapshot upload. Possible values: local or remote. Mandatory only if both local and remote buckets are attached; auto-selected if only one bucket is attached for upload.
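For example, to upload a snapshot to the remote object-store bucket when both a local and a remote bucket are attached (the filesystem and snapshot names are illustrative):
weka fs snapshot upload fs01 snap01 --site remote
The upload runs as a background task. When it completes, the snapshot's locator is available from the weka fs snapshot command and can be saved for later use with weka fs download.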
Create a filesystem from an uploaded snapshot
Command: weka fs download
Use the following command to create or recreate a filesystem from an existing snapshot. If the snapshot originates from an encrypted source, be sure to include the required KMS-related parameters:
weka fs download <name> <group-name> <total-capacity> <ssd-capacity> <obs-bucket> <locator>
[--auth-required auth-required] [--additional-obs additional-obs] [--snapshot-name snapshot-name] [--access-point access-point] [--kms-key-identifier kms-key-identifier] [--kms-namespace kms-namespace] [--kms-role-id kms-role-id] [--kms-secret-id kms-secret-id] [--skip-resource-validation]
When creating a filesystem from a snapshot, a background cluster task automatically prefetches its metadata, providing better latency for metadata queries.
Parameters
name (required): Name of the filesystem to create.
group-name (required): Name of the filesystem group in which the new filesystem is placed.
total-capacity (required): Total capacity of the downloaded filesystem.
ssd-capacity (required): SSD capacity of the downloaded filesystem.
obs-bucket (required): Object store name for tiering.
locator (required): Object store locator obtained from a previously successful snapshot upload.
auth-required: Require the mounting user to be authenticated when mounting this filesystem. Applicable only in the root organization; users in non-root organizations must always be authenticated to perform a mount operation. Format: yes or no. Default: no.
additional-obs: An additional object store name. If the data to recover resides in two object stores (a second object store is attached to the filesystem and the filesystem has not undergone full migration), this object store is attached in read-only mode. The snapshot locator must be in the primary object store specified in the obs-bucket parameter.
snapshot-name: Name for the downloaded snapshot. Default: the uploaded snapshot name.
access-point: Access point for the downloaded snapshot. Default: the uploaded access point.
kms-key-identifier: Customize the KMS key name for this filesystem (applicable only for HashiCorp Vault).
kms-namespace: Customize the KMS namespace for this filesystem (applicable only for HashiCorp Vault).
kms-role-id: Customize the KMS role ID for this filesystem (applicable only for HashiCorp Vault).
kms-secret-id: Customize the KMS secret ID for this filesystem (applicable only for HashiCorp Vault).
skip-resource-validation: Skip verifying RAM and SSD resource allocation for the downloaded filesystem on the cluster.
When downloading an encrypted filesystem, you must use the same cluster-wide KMS key or, if configured, the per-filesystem encryption parameters to decrypt the snapshot data. For more information, see KMS management.
The locator can be a previously saved locator kept for disaster scenarios, or you can obtain it by running the weka fs snapshot command on a system where the filesystem and its snapshots are live.
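For example, the following sketch recreates a filesystem from an uploaded snapshot on a target cluster; the filesystem name, group name, capacities, and object-store name are placeholders, and the locator is the value reported by weka fs snapshot on the source system:
weka fs snapshot
weka fs download fs01-restored default 10TB 1TB local-obs <locator> --snapshot-name snap01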
To pause or resume the download process, use the weka cluster task pause / resume command. To abort the download process, delete the downloaded filesystem. For details, see Background tasks.
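As a sketch, assuming the download appears in the background task list with a task ID (the task ID below is a placeholder, and passing it as a positional argument is an assumption; check the weka cluster task output for the exact identifier on your system):
weka cluster task pause <task-id>
weka cluster task resume <task-id>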
Due to the bandwidth characteristics and potential costs of interacting with remote object stores, downloading a filesystem from a remote object-store bucket is not allowed. If a snapshot exists on a local object-store bucket, it is advisable to use that one. Otherwise, follow the procedure in Recover from a remote snapshot.
Manage synchronous snapshots
The workflow to manage synchronous snapshots includes:
Upload snapshots using, for example, the snapshots scheduler. See Snapshots.
Download the synchronous snapshot (described below).
Download a synchronous snapshot
Command: weka fs snapshot download
Use the following command line to download a synchronous snapshot. This command is only relevant for snapshots uploaded from a system running version 4.3 or later:
weka fs snapshot download <file-system> <locator>
Make sure to download synchronous snapshots in chronological order. Downloading them out of order is inefficient, and the resulting snapshots are not synchronous.
If you need to download a snapshot earlier than the latest downloaded one, for example, one of the daily synchronous snapshots after the weekly synchronous snapshot was already downloaded, add the --allow-non-chronological flag to download it anyway.
Parameters
file-system (required): Name of the filesystem.
locator (required): Object store locator obtained from a previously successful snapshot upload.
To pause or resume the download process, use the weka cluster task pause / resume command. To abort the download process, delete the downloaded snapshot. For details, see Background tasks.
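For example, to download two synchronous snapshots in chronological order, and then an older daily snapshot after a newer weekly one has already been downloaded (the filesystem name and locators are placeholders):
weka fs snapshot download fs01 <locator-weekly-1>
weka fs snapshot download fs01 <locator-weekly-2>
weka fs snapshot download fs01 <locator-daily> --allow-non-chronological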
Related topics
Recover from a remote snapshot
When recovering a snapshot that resides on a remote object store, you must define the object store bucket containing the snapshot as a local bucket. A remote object store restricts downloads, and it is preferable to use a different local object store for the QoS reasons explained in Manage object stores.
To recover a snapshot residing on a remote object store, create a new filesystem from this snapshot as follows:
Add a new local object store, using the weka fs tier obs add CLI command.
Add a local object-store bucket that refers to the bucket containing the snapshot to recover, using weka fs tier s3 add.
Download the filesystem, using weka fs download.
If the recovered filesystem should also be tiered, add a local object-store bucket for tiering, using weka fs tier s3 add.
Detach the initial object-store bucket from the filesystem.
If you want a remote backup for this filesystem, attach a remote bucket to the filesystem.
Remove the local object store bucket and local object store created for this procedure.
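The procedure above maps to a command sequence along the following lines; the object-store, bucket, and filesystem names are placeholders, connection and capacity options are elided as ..., and the exact option syntax for weka fs tier obs add and weka fs tier s3 add is described in Manage object stores:
weka fs tier obs add recovery-obs ...
weka fs tier s3 add recovery-bucket ...
weka fs download fs01-restored default 10TB 1TB recovery-bucket <locator>
weka fs tier s3 add tiering-bucket ...
The remaining steps (detaching the initial bucket, attaching a remote bucket for backup, and removing the temporary local bucket and object store) use the corresponding weka fs tier commands; see Manage object stores.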