The WEKA system offers a converged deployment configuration as an alternative to the standard setup. In this configuration, hundreds of application servers running user applications are equipped with WEKA clients, allowing them to access the WEKA cluster.
Unlike the standard deployment that dedicates specific servers to WEKA backends, the converged setup involves installing a WEKA client on each application server. Additionally, one or more SSDs and backend processes (WekaFS) are integrated into the existing application servers.
The WEKA backend processes function collectively as a single, distributed, and scalable filesystem, leveraging the local SSDs. This filesystem is accessible to the application servers, much like in the standard WEKA system deployment. The critical distinction is that, in this configuration, WEKA backends share the same physical infrastructure as the application servers.
This blend of storage and computing capabilities enhances overall performance and resource usage. However, unlike the standard deployment, where an application server failure does not impact the WEKA backends, the converged setup is affected whenever an application server is rebooted or fails. The N+2 (or N+4) scheme still protects the cluster and can tolerate two concurrent server failures. As a result, converged WEKA deployments require more careful integration and closer coordination between computational and storage management practices.
In all other respects, this configuration mirrors the standard WEKA system, offering the same functionality for protection, redundancy, failed component replacement, failure domains, prioritized data rebuilds, and seamless distribution, scale, and performance. Some servers may house a WEKA backend process and a local SSD, while others may run WEKA clients only. This allows a cluster of application servers with a mix of WEKA backends and WEKA clients, delivering a flexible solution.
Welcome to the WEKA Documentation Portal, your guide to the latest WEKA version. Whether you're a newcomer or a seasoned user, explore topics from system fundamentals to advanced optimization strategies. Choose your WEKA version from the top menu for version-specific documentation.
Important: This documentation applies to the WEKA system's latest minor version (4.2.X). For information on new features and supported prerequisites released with each minor version, refer to the relevant release notes available at get.weka.io.
Check the release notes for details about any updates or changes accompanying the latest releases.
Sevii AI quickly delivers answers from the WEKA documentation. Type your question and submit it. For the best results, ask clear, context-rich questions.
This portal encompasses all documentation essential for comprehending and operating the WEKA system. It covers a range of topics:
WEKA system overview: Delve into the fundamental components, principles, and entities constituting the WEKA system.
Planning and installation: Discover prerequisites, compatibility details, and installation procedures for WEKA clusters on bare metal, AWS, GCP, and Azure environments.
Getting started with WEKA: Initiate your WEKA journey by learning the basics of managing a WEKA filesystem through the GUI and CLI, executing initial IOs, and exploring the WEKA REST API.
Performance: Explore the results of FIO performance tests on the WEKA filesystem, ensuring optimal system performance.
WEKA filesystems & object stores: Understand the role and management of filesystems, object stores, filesystem groups, and key-management systems within WEKA configurations.
Additional protocols: Learn about the supported protocols—NFS, SMB, and S3—for accessing data stored in a WEKA filesystem.
Operation guide: Navigate through various WEKA system operations, including events, statistics, user management, upgrades, expansion, and more.
Billing & licensing: Gain insights into WEKA system licensing options and alternative billing approaches.
Monitor the WEKA cluster: Effectively monitor your WEKA cluster by deploying the WEKA Management Server (WMS) alongside tools like Local WEKA Home, WEKAmon, and SnapTool.
WEKA support: Find guidance on obtaining support for the WEKA system and effectively managing diagnostics.
Best practice guides: Explore our carefully selected guides, starting with WEKA and Slurm integration, to discover expert-recommended strategies and insights for optimizing your WEKA system and achieving peak performance in various scenarios.
In this documentation, mandatory CLI parameters are marked with an asterisk (*).
We welcome your feedback to improve our documentation. Include the document version and topic title with your suggestions and email them to the documentation team. For technical inquiries, contact the Customer Success Team. Thank you for helping us maintain high-quality resources.
Redundancy in WEKA system deployments can vary, ranging from 3+2 to 16+4. Choosing the most suitable configuration involves several key considerations, including redundancy levels, data stripe width, hot spare capacity, and the performance required during data rebuilds.
Redundancy can be configured as N+2 or N+4, directly impacting capacity and performance. A redundancy level of 2 is typically sufficient for most configurations, while redundancy levels of 4 are reserved for larger clusters with 100 or more backends or critical data scenarios.
Data stripe width, ranging from 3 to 16, is crucial in optimizing net capacity. Larger stripe widths offer more net capacity but may affect performance during data rebuilds, particularly for highly critical data. Consultation with the Customer Success Team is recommended in such cases.
The required hot spare capacity depends on how quickly faulty components can be replaced. Systems with faster response times or guaranteed 24/7 service require less hot spare capacity than systems with less frequent component replacement schedules.
The performance required during a data rebuild from a failure primarily concerns read operations. Unlike many other storage systems, write performance in the WEKA system remains unaffected by failures and rebuilds, because writes simply continue to the functioning backends within the cluster. Read performance, however, can be impacted: reading data that resided on a failed component requires reconstructing it from the rest of the stripe, which involves simultaneous read operations that receive immediate priority. For instance, when a single failure occurs in a cluster of 100 backends, overall performance is reduced by a relatively modest 1%. In the same 100-backend cluster with a stripe width of 16, however, the initial phase of the rebuild can reduce performance more significantly, by up to 16%. In large clusters, the cluster size may exceed the stripe width or the number of failure domains, so to maintain optimal performance during rebuilds, the stripe width should be chosen carefully relative to the cluster size.
As a general guideline for large clusters, it's recommended that the stripe width should not exceed 25% of the cluster size. For example, in a cluster composed of 40 backends, an 8+2 protection scheme is advisable. This configuration helps mitigate the impact on performance in case of a failure, ensuring that it does not exceed 25%.
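As a rough illustration of these sizing rules, the following sketch (the helper names are ours, not WEKA's; the 25% guideline and the rebuild-impact figures are taken from the text above) estimates the worst-case read impact of a single failure and checks a stripe choice against the guideline:

```python
def rebuild_read_impact(stripe_width: int, cluster_size: int) -> float:
    """Worst-case fraction of read performance lost during the initial
    rebuild phase: reconstructing each lost block requires reading the
    rest of its stripe, spread across the cluster."""
    return stripe_width / cluster_size

def meets_25pct_guideline(data_width: int, protection: int, cluster_size: int) -> bool:
    """Guideline for large clusters: the full stripe (data + protection)
    should not exceed 25% of the cluster size."""
    return (data_width + protection) <= 0.25 * cluster_size

# 100 backends, stripe width 16: up to 16% read impact during the rebuild.
print(rebuild_read_impact(16, 100))        # 0.16
# 40 backends with 8+2: the full 10-wide stripe is exactly 25% of the cluster.
print(meets_25pct_guideline(8, 2, 40))     # True
```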
Write performance in the WEKA system improves as the stripe width increases, because the system computes a smaller proportion of protection data relative to actual data. This effect is particularly notable in scenarios involving substantial write operations, such as systems accumulating data for the first time.


WEKA is a software solution that enables the implementation of a shareable, scalable, distributed filesystem storage.
The WEKA filesystem (WekaFS™) redefines storage solutions with its software-only approach, compatible with standard AMD or Intel x86-based servers and NVMe SSDs. It eliminates the need for specialized hardware, allowing easy integration of technological advancements without disruptive upgrades. WekaFS addresses common storage challenges by removing performance bottlenecks, making it suitable for environments requiring low latency, high performance, and cloud scalability.
Use cases span various sectors, including AI/ML, Life Sciences, Financial Trading, Engineering DevOps, EDA, Media Rendering, HPC, and GPU pipeline acceleration. Combining existing technologies and engineering innovations, WekaFS delivers a powerful, unified solution that outperforms traditional storage systems, efficiently supporting various workloads.
WekaFS is a fully distributed parallel filesystem leveraging NVMe Flash for file services. Integrated tiering seamlessly expands the namespace to and from HDD object storage, simplifying data management. The intuitive GUI allows easy administration of exabytes of data without specialized storage training.
WekaFS stands out with its unique architecture, overcoming legacy systems’ scaling and file-sharing limitations. Supporting POSIX, NFS, SMB, S3, and GPUDirect Storage, it offers a rich enterprise feature set, including snapshots, clones, tiering, cloud-bursting, and more.
Benefits include high performance across all IO profiles, scalable capacity, robust security, hybrid cloud support, private/public cloud backup, and cost-effective flash-disk combination. WekaFS ensures a cloud-like experience, seamlessly transitioning between on-premises and cloud environments.
WekaFS functionality runs in its own real-time operating system (RTOS) within a Linux container (LXC) and comprises the following software components:
File services (frontend): Manages multi-protocol connectivity.
File system computing and clustering (backend): Manages data distribution, data protection, and file system metadata services.
SSD drive agent: Transforms the SSD into an efficient networked device.
Object connector: Reads and writes to the object store.
By bypassing the kernel, WekaFS achieves faster, lower-latency performance, portable across bare-metal, VM, containerized, and cloud environments. Efficient resource consumption minimizes latency and optimizes CPU usage, offering flexibility in shared or dedicated environments.
WekaFS design departs from traditional NAS solutions, introducing multiple filesystems within a global namespace that share the same physical resources. Each filesystem has its unique identity, allowing customization of snapshot policies, tiering, role-based access control (RBAC), quota management, and more. Unlike other solutions, filesystem capacity adjustments are dynamic, enhancing scalability without disrupting I/O.
The WEKA system offers a robust, distributed, and highly scalable storage solution, allowing multiple application servers to access shared filesystems efficiently and with solid consistency and POSIX compliance.
Understand the key terms of WEKA system capacity management and the formula for calculating the net data storage capacity.
Raw capacity is the total capacity of all the SSDs assigned to a WEKA system cluster. For example, 10 SSDs of one terabyte each provide a total raw capacity of 10 terabytes. The raw capacity changes automatically if more servers or SSDs are added.


Net capacity is the space for user data on the SSDs in a configured WEKA system. It is based on the raw capacity minus the WEKA filesystem overheads for redundancy protection and other needs. This will change automatically if more servers or SSDs are added.
The stripe width is the number of blocks with a common protection set, ranging from 3 to 16. The WEKA system has distributed any-to-any protection. Consequently, in a system with a stripe width of 8, many groups of 8 data units spread on various servers protect each other (rather than a group of 8 servers forming a protection group). The stripe width is set during the cluster formation and cannot be changed. Stripe width choice impacts performance and space.
Protection Level refers to the number of extra protection blocks added to each data stripe in your storage system. These blocks help protect your data against hardware failures. The protection levels available are:
Protection level 2: Can survive 2 concurrent disk or server failures.
Protection level 4: Can survive 4 concurrent disk failures or 2 concurrent server failures.
A higher protection level means better data durability and availability but requires more storage space and can affect performance.
Key points:
Durability:
Higher protection levels offer better data protection.
Level 4 is more durable than level 2.
Availability:
Ensures system availability during hardware failures.
Level 4 maintains availability through more extensive failures compared to level 2.
Space and performance:
Higher protection levels use more storage space.
They can also slow down the system due to additional processing.
Configuration:
The protection level is set during cluster formation and cannot be changed later.
If not configured, the system defaults to protection level 2.
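The space cost noted under "Space and performance" can be quantified with a small sketch (the helper name is hypothetical; the stripe arithmetic follows the definitions above):

```python
def protection_overhead(data_blocks: int, protection_level: int) -> float:
    """Fraction of each stripe consumed by protection blocks."""
    return protection_level / (data_blocks + protection_level)

# With 16 data blocks per stripe, protection level 4 costs roughly
# twice the space of protection level 2.
print(f"16+2: {protection_overhead(16, 2):.1%}")   # 11.1%
print(f"16+4: {protection_overhead(16, 4):.1%}")   # 20.0%
```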
A failure domain is a group of WEKA servers that can fail concurrently due to a single root cause, such as a power circuit or network switch failure.
A cluster can be configured with explicit or implicit failure domains:
In a cluster with explicit failure domains, each group of blocks that protect each other is spread on different failure domains.
In a cluster with implicit failure domains, the group of blocks is spread on different servers, and each server is a failure domain. Additional failure domains can be added, and new servers can be added to any existing or new failure domain.
A hot spare is the number of failure domains that the system can lose, undergo a complete rebuild of data, and still maintain the same net capacity. All failure domains are constantly participating in storing the data, and the hot spare capacity is evenly spread within all failure domains.
The higher the hot spare count, the more hardware is required to obtain the same net capacity. On the other hand, the higher the hot spare count, the more relaxed the IT maintenance schedule for replacements. The hot spare is defined during cluster formation and can be reconfigured anytime.
After deducting the protection and hot spare capacity, only 90% of the remaining capacity can be used as net user capacity, with the other 10% of capacity reserved for the WEKA filesystems. This is a fixed formula that cannot be configured.
The provisioned capacity is the total capacity assigned to filesystems. This includes both SSD and object store capacity.
The available capacity is the capacity available for allocating new filesystems: the net capacity minus the provisioned capacity.
The net capacity of the WEKA system is obtained after the following three deductions performed during configuration:
The level of protection required is the storage capacity dedicated to system protection.
The hot spare(s) is the storage capacity set aside for redundancy and to allow for rebuilding following a component failure.
WEKA filesystem overhead to improve overall performance.
Examples:
Scenario 1: A homogeneous system of 10 servers, each with one terabyte of Raw SSD Capacity, one hot spare, and a protection scheme of 6+2.
Scenario 2: A homogeneous system of 20 servers, each with one terabyte of Raw SSD Capacity, two hot spares, and a protection scheme of 16+2.
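The scenarios above can be worked through with a minimal sketch of the three deductions (the helper name is ours, and each server is treated as one failure domain):

```python
def net_capacity_tb(servers: int, tb_per_server: float,
                    hot_spares: int, data_width: int, protection: int) -> float:
    """Estimate net user capacity: deduct hot spares, then protection,
    then the fixed 10% filesystem reserve."""
    raw = servers * tb_per_server
    after_spares = raw * (servers - hot_spares) / servers
    after_protection = after_spares * data_width / (data_width + protection)
    return after_protection * 0.9  # 10% reserved for the WEKA filesystems

# Scenario 1: 10 x 1 TB servers, 1 hot spare, 6+2.
print(net_capacity_tb(10, 1, 1, 6, 2))    # ~6.075 TB
# Scenario 2: 20 x 1 TB servers, 2 hot spares, 16+2.
print(net_capacity_tb(20, 1, 2, 16, 2))   # ~14.4 TB
```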
4.2.18
Maintenance release.
Removed Azure deployment sections due to a known network issue in Azure deployments that causes performance degradation in versions 4.1 to 4.4.0. This issue is resolved in version 4.4.1 and later.
For updated guidance, see the corresponding sections in the Version 4.4 documentation.
4.2.17
DPDK and AWS Xen-on-Nitro incompatibility identified: Customers using legacy Amazon EC2 Xen-on-Nitro instances must contact AWS Support to request their account be added to the deny list.
Corrected client address representation in WEKA events: Resolved an issue where client addresses were displayed as loopback addresses; they now display the correct IP address.
Duplicate management address alert added: WEKA now generates an alert when duplicate management addresses are configured, indicating a binding failure.
Synchronous Snap: The Synchronous Snap feature, which allows incremental snapshots to be downloaded from an object store, was temporarily disabled in version 4.2.3. It has been re-enabled in version 4.3.0.
4.2.16
New event: DriverNotAccepting: Introduced a new event signaling that all I/O operations on a frontend are unresponsive or hanging.
New event: DriveImmediateShutdown: Added a new event to indicate an NVMe failure, triggering an immediate shutdown of the affected drive to prevent further system impact.
Network virtual function limitation: Implemented the ability to limit the quantity of network virtual functions (VFs). This feature is particularly relevant for clusters with high core counts using Intel E810 NICs.
The WEKA system offers a range of powerful functionalities designed to enhance data protection, scalability, and efficiency, making it a versatile solution for various storage requirements.
The WEKA system employs N+2 or N+4 protection, safeguarding data even in the face of concurrent drive or backend failures and keeping the WEKA system up and running to provide continuous service. The protection scheme is determined during cluster formation and can vary, offering configurations from 3+2 up to 16+4 for larger clusters.
The WEKA system incorporates an any-to-any protection scheme that ensures the rapid recovery of data in the event of a backend failure. Unlike traditional storage architectures, where redundancy is often established across backend servers (backends), WEKA's approach leverages groups of datasets to protect one another within the entire cluster of backends.
Here's how it works:
Data recovery process: If a backend within the cluster experiences a failure, the WEKA system initiates a rebuilding process using all the other operational backends. These healthy backends work collaboratively to recreate the data that originally resided on the failed backend. Importantly, all this occurs in parallel, with multiple backends simultaneously reading and writing data.
Speed of rebuild: This approach results in a speedy rebuild process. In a traditional storage setup, only a small subset of backends or drives actively participate in rebuilding, often leading to slow recovery. In contrast, in the WEKA system, all but the failed backend are actively involved, ensuring swift recovery and minimal downtime.
Scalability benefits: The advantages of this distributed network scheme become even more apparent as the cluster size grows. In larger clusters, the rebuild process is further accelerated, making the WEKA system an ideal choice for organizations that need to handle substantial data volumes without sacrificing data availability.
In summary, the WEKA system's distributed network scheme transforms data recovery by involving all available backends in the rebuild process, ensuring speedy and efficient recovery, and this efficiency scales with larger clusters, making it a robust and scalable solution for data storage and protection.
In the WEKA system, a hot spare is configured within the cluster to provide the additional capacity needed for a full recovery after a rebuild across the entire cluster. This differs from traditional approaches, where specific physical components are designated hot spares. For instance, in a 100-backend cluster, sufficient capacity is allocated to rebuild the data and restore full redundancy even after two failures. The system can withstand two additional failures depending on the protection policy and cluster size.
This strategy for replacing failed components does not leave the system vulnerable. In the event of a failure, there is no immediate need to physically replace the failed component with a functional one to recreate the data. Instead, the data is promptly regenerated, while replacing the failed component with a working one proceeds as a background process.
In the WEKA system, failure domains are groups of backends that could fail due to a single underlying issue. For instance, if all servers within a rack rely on a single power circuit or connect through a single ToR switch, that entire rack can be considered a failure domain. Imagine a scenario with ten racks, each containing five WEKA backends, resulting in a cluster of 50 backends.
To enhance fault tolerance, you can configure a protection scheme, such as 6+2 protection, during the cluster setup. This makes the WEKA system aware of these possible failure domains and creates a protection stripe across the racks. This means the 6+2 stripe is distributed across different racks, ensuring that the system remains operational even in case of a complete rack failure, preventing data loss.
It's important to note that the stripe width must be less than or equal to the count of failure domains. For instance, if there are ten racks, and one rack represents a single point of failure, having a 16+4 cluster protection is not feasible. Therefore, the level of protection and support for failure domains depends on the stripe width and the chosen protection scheme.
In the WEKA system, every client installed on an application server directly connects to the relevant WEKA backends that store the required data. There's no intermediary backend that forwards access requests. Each WEKA client maintains a synchronized map, specifying which backend holds specific data types, creating a unified configuration shared by all clients and backends.
When a WEKA client attempts to access a particular file or offset in a file, a cryptographic hash function guides it to the appropriate backend containing the needed data. This unique mechanism enables the WEKA system to achieve linear performance growth. It synchronizes scaling size with scaling performance, providing remarkable efficiency.
For instance, when new backends are added to double the cluster's size, the system instantly redistributes part of the filesystem data between the backends, resulting in an immediate double performance increase. Complete data redistribution is unnecessary even in modest cluster growths, such as moving from 100 to 110 backends. Only a fraction (10% in this example) of the existing data is copied to the new backends, ensuring a balanced distribution and active participation of all backends in read operations.
The speed of these seamless operations depends on the capacity of the root backends and network bandwidth. Importantly, ongoing operations remain unaffected, and the system's performance improves as data redistribution occurs. The finalization of the redistribution process optimizes both capacity and performance, making the WEKA system an ideal choice for scalable and high-performance storage solutions.
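WEKA does not publish its placement function, so the redistribution effect described above is illustrated here with a generic consistent-hashing sketch (the virtual-node scheme and all names are our assumptions, not WEKA's actual implementation): when the cluster grows from 100 to 110 backends, only roughly a tenth of the keys change owner.

```python
import hashlib
from bisect import bisect_right

def ring(backends, vnodes=100):
    """Build a consistent-hash ring: each backend owns many virtual points,
    so keys spread evenly and growth moves only a proportional share."""
    points = []
    for b in backends:
        for v in range(vnodes):
            h = int(hashlib.sha256(f"{b}:{v}".encode()).hexdigest(), 16)
            points.append((h, b))
    return sorted(points)

def locate(points, key):
    """Map a key (e.g., a file and offset) to the backend owning it:
    the first ring point at or after the key's hash, wrapping around."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    i = bisect_right(points, (h, "")) % len(points)
    return points[i][1]

# Growing from 100 to 110 backends: only roughly 10/110 of the keys
# change owner; the rest stay where they are.
old = ring([f"backend-{i}" for i in range(100)])
new = ring([f"backend-{i}" for i in range(110)])
keys = [f"file-{k}" for k in range(5000)]
moved = sum(locate(old, k) != locate(new, k) for k in keys)
print(f"moved: {moved / len(keys):.1%}")
```

Note that every key that moves lands on one of the new backends, which is why all backends, old and new, end up participating in reads without a full redistribution.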
WEKA offers a cluster-wide data reduction feature that can be activated for individual filesystems. This capability employs block-variable differential compression and advanced de-duplication techniques across all filesystems to significantly reduce the storage capacity required for user data, resulting in substantial cost savings for customers.
The effectiveness of the compression ratio depends on the specific workload. It is particularly efficient when applied to text-based data, large-scale unstructured datasets, log analysis, databases, code repositories, and sensor data. For more information, refer to the dedicated topic.
Many hardware vendors ship their products with the SR-IOV feature disabled. On such platforms, the feature must be enabled in the server BIOS before installing the WEKA system.
Even if SR-IOV appears to be enabled, it is recommended to verify its current state before proceeding with the installation of the WEKA system.
Verify that the NIC drivers are installed and loaded successfully. If they are not, perform the Install NIC drivers procedure.
The following procedure is a vendor-specific example and is provided as a courtesy. Depending on the vendor, the same settings may appear differently or be located elsewhere. Therefore, refer to your hardware platform and NIC vendor documentation for the latest information and updates.
Reboot the server and enter the BIOS Setup.
From the Advanced menu, select the PCIe Configuration to display its properties.
Select the SR-IOV support and enable it.
Save the configuration changes and exit.
A known network issue in Azure deployments causes performance degradation in versions 4.1 to 4.4.0. This issue is resolved in version 4.4.1 and later.
New Azure deployments: Install version 4.4.1 or later.
Existing deployments on Azure (versions 4.1–4.4.0): Contact the Customer Success Team for migration assistance.
See the corresponding sections in the Version 4.4 documentation.
Understanding the WEKA system client and possible mount modes of operation in relation to the page cache.
The WEKA client is a standard POSIX-compliant filesystem driver installed on application servers, facilitating file access to WEKA filesystems. Acting as a conventional filesystem driver, it intercepts and executes all filesystem operations, providing applications with local filesystem semantics and performance—distinct from NFS mounts. This approach ensures centrally managed, shareable, and resilient storage for WEKA.
Tightly integrated with the Linux Page Cache, the WEKA client leverages this transparent caching mechanism to store portions of filesystem content in the client's RAM. The Linux operating system maintains a page cache in the unused RAM, allowing rapid access to cached pages and yielding overall performance enhancements.
The Linux Page Cache, implemented in the Linux kernel, operates transparently to applications. Utilizing unused RAM capacity, it incurs minimal performance penalties, often appearing as "free" or "available" memory.
This section is intended for a system engineer familiar with GCP concepts and experienced in using Terraform to deploy a system on GCP.
Leveraging GCP's advantages, WEKA offers a customizable terraform-gcp-weka module for deploying the WEKA cluster on GCP. In GCP, WEKA operates on instances, each capable of using up to eight partitions of drives on the connected physical server (without direct drive usage). These drives can be shared among partitions for other clients on the same server.
WEKA requires a minimum of four VPC networks, each associated with one of the instances. This configuration aligns with the four key WEKA processes: Compute, Drive, Frontend, and Management, with each process requiring a dedicated network interface as follows:
This page provides an overview about managing object stores.
Object stores in WEKA serve as optional external storage media, complementing SSD storage with a more cost-effective solution. This allows for the strategic allocation of resources, with object stores accommodating warm data (infrequently accessed) and SSDs handling hot data (frequently accessed).
In WEKA, object store buckets can be distributed across different physical object stores. However, to ensure optimal Quality of Service (QoS), a crucial mapping between the bucket and the physical object store is required.
WEKA treats object stores as physical entities, either on-premises or in the cloud, that group multiple object store buckets. These buckets can be categorized as either local (used for tiering and snapshots) or remote (exclusively for snapshots). An object store bucket must be added to an object store of the same type and must not be accessed by other applications.
While a single object store bucket can potentially serve different filesystems and multiple WEKA systems, it is advisable to dedicate each bucket to a specific filesystem. For instance, if managing three tiered file systems, assigning a dedicated local object storage bucket to each file system is recommended.
For each filesystem, users can attach up to three object store buckets:
WEKA provides a ready-to-deploy Terraform package for installing the WEKA cluster on AWS Virtual Private Cloud (VPC).
The following diagram provides an overview of the various steps automated with the Terraform-driven provisioning of the WEKA cluster backend servers on AWS EC2 instances.
Learn about the cluster deployment types in AWS, which are defined by the instance types and their configuration.
Once the Terraform modules are applied, two workflows run every minute: one for scale-up and the other for scale-down.
WEKA provides a cloud function for scale-up or scale-down of the number of compute engine instances (cluster size). Terraform automatically creates the cluster according to the specified target value in a few minutes.
To change the cluster size (up or down), specify the link to the resize cloud function on GCP and the resize target value for the number of compute engine instances in the following command and run it:
Example:
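The original command depends on your deployment. As a hedged sketch, the resize request can be issued as an authenticated HTTP POST to the function URL; the payload field name `value` and the identity-token auth header are assumptions based on typical GCP Cloud Functions usage:

```python
import json
import urllib.request

def build_resize_request(function_url: str, identity_token: str,
                         target_size: int) -> urllib.request.Request:
    """Build the HTTP POST that asks the resize cloud function to set the
    cluster to `target_size` compute engine instances."""
    body = json.dumps({"value": target_size}).encode()
    return urllib.request.Request(
        function_url,
        data=body,
        headers={
            "Authorization": f"Bearer {identity_token}",  # e.g. from: gcloud auth print-identity-token
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it: urllib.request.urlopen(build_resize_request(url, token, 10))
```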
This page provides an overview about managing filesystems.
The management of filesystems is an integral part of the successful running and performance of the WEKA system and overall data lifecycle management.
This page describes how to transition from an SSD-only to a tiered filesystem, and vice versa.
An SSD-only filesystem can be reconfigured as a tiered filesystem by attaching an object store. In such a situation, the default is to maintain the filesystem size. In order to increase the filesystem size, the total capacity field can be modified, while the existing SSD capacity remains the same.
This page provides an overview about managing filesystem groups.
A filesystem group in the WEKA system is used specifically to manage tiering policies for filesystems. It defines key parameters, including the drive retention period and the tiering queue time, which determine how and when data is tiered.
When you add a filesystem, it must be associated with a filesystem group to apply these tiering behaviors. The WEKA system supports up to eight filesystem groups, allowing flexibility in managing tiering policies across different filesystems.
The GCP Console provides an interface in which you can view the cloud function logs related to WEKA cluster activities, such as scaling instances up or down. In addition, the cluster state file retained in cloud storage provides the status of the operations in the WEKA project.
Typical troubleshooting flow if the resize cloud function does not resize the cluster
Open the cluster state file and check that the desired_size is as expected and the clusterized value is true. The cluster state file is in the cloud storage, and its name comprises the prefix and the cluster_name.
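This check can be scripted. A minimal sketch (the field names `desired_size` and `clusterized` follow the description above, but the state file's exact schema may differ between releases):

```python
import json

def check_cluster_state(state_json: str, expected_size: int) -> list:
    """Parse the cluster state file and return a list of problems that
    would prevent a resize from proceeding (empty list means OK)."""
    state = json.loads(state_json)
    problems = []
    if state.get("desired_size") != expected_size:
        problems.append(
            f"desired_size is {state.get('desired_size')}, expected {expected_size}")
    if not state.get("clusterized"):
        problems.append("clusterized is not true; clusterization has not completed")
    return problems
```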
This page provides a detailed description of how data storage is managed in SSD-only and tiered WEKA system configurations.
This section explains how data lifecycle is maintained when working with a tiered WEKA system configuration, together with the options for control. The following subjects are covered:
Advanced explanation of .
System behavior when .
A tiered filesystem can be un-tiered (and only use SSDs) by detaching its object stores. This will copy the data back to the SSD.
For more information, refer to Attaching/Detaching Object Stores Overview.
Create AWS Launch Template and Auto Scaling Group for WEKA cluster expansion: Create an AWS Launch Template and Auto Scaling Group to provision EC2 instances for the WEKA cluster.
The launch template automates the deployment script that installs and configures the WEKA software during initial cluster creation and when expanding the cluster with additional instances.
Configure AWS Secrets Manager for secure WEKA cluster operations: Create secrets in AWS Secrets Manager to facilitate secure communication between AWS Lambda functions and the WEKA cluster. This ensures smooth scale-out, scale-in, and auto-healing operations.
Configure DynamoDB for Terraform state: Create state items in an Amazon DynamoDB table to effectively manage Terraform's declarative state.
Create CloudWatch log groups for WEKA cluster logs: Create Amazon CloudWatch log groups to store logs generated by the WEKA cluster.
Deploy AWS Lambda functions for WEKA software configuration: Create AWS Lambda functions to run after CloudWatch log groups are created. These functions assist in installing and configuring WEKA software on EC2 instances.
Create AWS Step Function for WEKA cluster scaling: Create an AWS Step Function state machine to facilitate user-driven automated scale-out and scale-in operations for the WEKA cluster.
Create CloudWatch event rule for WEKA cluster monitoring: Create a CloudWatch event rule to periodically check the state of the WEKA cluster and trigger healing or scaling actions as necessary.


Check the scale-up (or scale-down) workflow. Identify the function that failed and check its related logs in the Logs Explorer of the GCP Console.
curl -m 70 -X POST https://<resize_cloud_function_name> \
-H "Authorization:bearer $(gcloud auth print-identity-token)" \
-H "Content-Type:application/json" \
-d '{"value":<Resize_target_value>}'

Example:

curl -m 70 -X POST https://europe-west1-wekaio-qa.cloudfunctions.net/weka-test \
-H "Authorization:bearer $(gcloud auth print-identity-token)" \
-H "Content-Type:application/json" \
-d '{"value":7}'


Maximum number of processes increased: The maximum numbers of backend processes, drive processes, management processes, and total processes have all been increased.
Amazon Linux 2023: Added WEKA backend and client support for Amazon Linux 2023 with x86_64 kernel distribution.
New CLI reference guide: This CLI reference guide is generated from the output of running the weka command with the help option. It provides detailed descriptions of available commands, arguments, and options.
Added support for GCP regions asia-southeast2 and europe-central2 in Terraform configuration.
Added Dell PowerScale S3 (version 9.8.0.0 and higher) to the certified object stores.
4.2.15
Added a verification step for LLQ and WC in the upgrade workflow. To ensure proper LLQ functionality after upgrades, verify that Write Combining (WC) is enabled in the igb_uio driver.
Extended support for RHEL/Rocky Linux 8.10 on backends and clients.
4.2.14
Extended support for operating systems on:
Clients: RHEL/Rocky Linux 9.4, AlmaLinux 9.4, 8.10, Debian 12.
Backends: RHEL/Rocky Linux 9.4.
Update: Object store types now included in Analytics reports on WEKA Home.
New topic:
4.2.12.92
Extended support for Linux kernel to Ubuntu 22:
5.19, 6.2, 6.5.
4.2.12
Extended support for operating systems on clients: RHEL/Rocky Linux 9.3, Oracle Linux 9.
Extended support for Las_v3 machine types for backends on Microsoft Azure.
4.2.11
Customers can now see originating IP address information in S3 logs via the X-Forwarded-For header.
NFS Floating IP failover now contains a timeout as a mitigating factor during prolonged outages.
Limit increases for drives (20k), processes (20k), and NUMA nodes (20).
Added Rocky Linux 9.3 to supported operating systems on backends.
4.2.10
For both files and directories, relatime now produces reliable atime updates on large clusters. Previously, some conditions caused atime values to revert.
Customers can now transition between custom and predefined S3 bucket policies with one command.
4.2.9
Path optimization for S3 requests.
Custom bucket policies now accept only valid JSON through REST API.
4.2.8
Active-active network port usage in HA configurations using RDMA.
Better handling for cgroups with small memory footprints.
More graceful client upgrades with clusters that have been scaled in, preventing communication with backends that no longer exist.
STS session duration option available for IAM AssumeRole use with Amazon S3.
4.2.7
Initial support for the namespace feature in Hashicorp Vault.
Enhanced performance serving S3 GET requests using byte-range fetching.
Faster cleanup of failed multi-part upload parts.
Added support for Ubuntu 22.04.3 point release with 6.2-based kernel.
SMB-W now supports creating local mappings for AD users and groups using the rid ID-mapping, alongside the existing RFC2307 support.
Added support for using AssumeRole with Amazon IAM STS tokens when using Amazon S3.
Added support for floating IPs in AWS when using NFS.
4.2.6
Added IMDSv2 to the supported Amazon EC2 instances.
Certified Broadcom BCM957508-P2100G Dual-Port 100 Gb/s QSFP56 as supported.
Added support for IOMMU on WEKA backend servers with Mellanox NICs.
Certified OFED 23.10-0.5.5.0.
Added support for RHEL/Rocky Linux 9.2 operating system.
Updated the default provider of SMB services to SMB-W, replacing the legacy SMB.
4.2.5
Alerts are in place for the use of duplicate IP addresses.
Reduced likelihood of receiving false alerts for full quotas.
Better error handling for aborted S3 MultiPartUpload requests.
The weka cluster failure-domain CLI command is enhanced with more component fields.
No more umount/mount cycling for SCMC clients during upgrades.
4.2.4
Certified CX7 (Infiniband) as supported.
Non-disruptive upgrades of clients in more configurations.
When using NFS, the df command now reports WEKA quotas as expected.
ILM Policy deploys now use less memory when scaled to a billion objects.
Deleting empty buckets via S3 API works as expected.
To upgrade to 4.2.4, the source version must be 4.1.2.
If the S3 protocol is configured, contact Customer Success to confirm that ETCD (internal key-value store) has been upgraded to KWAS.
4.2.3
Added operating systems supported on clients: RHEL/Rocky Linux 8.8 and SuSe 15 SP4.
Added support for Vault 1.14 (certified from WEKA release 4.2.1).
NFS client connections are steady during floating address migration.
The weka local upgrade command now supports servers without frontend containers.
The weka status command output newly reflects the unavailable capacity.
The Synchronized Snapshot feature (the ability to perform incremental snapshots downloaded from an object store) is temporarily disabled. This feature will be reinstated in a subsequent release.
4.2.2
N/A
4.2.1
IAM support on GCP: You can access Google Cloud Storage using a service account with the required permissions granted by the IAM role.
NDU improvement: The non-disruptive upgrade process is now improved by upgrading the compute containers one at a time (rolling upgrade) while the remaining containers continue serving the clients.
E810 NIC support on MCB: The Intel E810 NIC is now supported in the multi-container backends (MCB) architecture.
Added support for Rocky 9.0, 9.1, and Ubuntu 22.04 on backends.
4.2.0
Mount filesystems from multiple clusters on a single client: You can mount filesystems from up to seven clusters in parallel on a single WEKA client for enhanced performance and use cases.
Snapshots improvements:
Quickly download a previously taken snapshot to another cluster. Following that operation, metadata is auto-prefetched.
Allow IO operations continuity to the filesystem while restoring a snapshot (using a preserved snapshot name).
Added abort/pause snapshot download functions.
WEKA CSI Plugin enhancements:
Can control WEKA mount options through the storage class.
Snapshots and volume cloning.
Added support for k8s fsGroups.
Increased organizations support: Increased the maximum number of supported organizations to 256 per cluster.
New GUI improvements: The GUI is improved with new features and operations, such as the insights page with top processes usage, drives load, and latency.
Azure cloud enhancement:
Improved performance using DPDK networking and higher MTU.
Added support for auto-scaling.
Interoperability updates:
Added support for Mellanox OFED version 5.9-0.5.6.0.
Added support for RHEL/Rocky Linux 9.1/9.0 and Ubuntu 22.04 for clients only (use only with applications set with 2MB hugepages).
HashiCorp Vault: Added support for HashiCorp Vault up to version 1.13.
Breaking changes and deprecations:
Removed the auth_token mount option.
Single protocol type per server support: A single WEKA server can now only have one protocol server (S3, SMB, or NFS). Adding an additional protocol process/server is no longer allowed. For clusters being upgraded, distribute the various (S3, SMB, or NFS) protocols across all the backend servers before running the upgrade.
Only the multi-container backend (MCB) architecture is supported: The legacy Single Container Backend (SCB) architecture, where each server in the cluster includes a single container with all the processes running on it, is deprecated in 4.1. To upgrade to 4.2, the source cluster must be in MCB architecture. Contact the WEKA Customer Success team to convert a 4.1 cluster in a legacy architecture to the MCB architecture.
The WEKA client retains control over the Linux Page Cache, enabling cache information management and invalidation when necessary. Consequently, WEKA leverages the Linux Page Cache for high-performance data access, ensuring data consistency across multiple servers.
A filesystem can be mounted in one of two modes with the Linux Page Cache:
Read cache mount mode: Only read operations use Linux Page Cache to sustain RAM-level performance for the frequently accessed data. WEKA ensures that the view of the data is coherent across various applications and clients.
Write cache mount mode (default): Both read and write operations use the Linux Page Cache, maintaining data coherency across servers and providing optimal data performance.
When mounting in the Read Cache mode, the Linux Page Cache uses a write-through mechanism, acknowledging write operations to the customer application only after securely storing them on resilient storage. This applies to both data and metadata writes.
The default behavior in the WEKA system dictates that data read or written by customer applications resides in the local server's Linux Page Cache. The WEKA system actively monitors whether another server attempts to read or write the same data, invalidating the cache entry if necessary. Such invalidation may occur in two cases:
If one client is currently writing to a file that another client is reading or writing.
If one server is currently writing to a file that another server is reading.
This mechanism ensures coherence, allowing full Linux Page Cache usage when either a single server or multiple servers access a file solely for read-only purposes. However, if multiple servers access a file and at least one performs a write operation, the Linux Page Cache is bypassed, and all I/O operations are managed by the backend servers.
Conversely, when either a single server or multiple servers open a file for read-only purposes, the WEKA client fully uses the Linux Page Cache, facilitating read operations directly from memory without accessing the backend servers.
A server is considered to be "writing" to a file after its first actual write operation, irrespective of the read/write flags of the open system call.
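The invalidation rule above can be summarized in a small sketch (illustrative only, not WEKA internals): the page cache remains usable when a single server accesses the file or when all accessing servers are read-only, and is bypassed as soon as multiple servers are involved and any of them writes:

```python
def page_cache_usable(accessors):
    """accessors: (server, mode) pairs for one open file, where mode is
    'read' or 'write' (a server counts as writing only after its first
    actual write, regardless of the open flags)."""
    servers = {server for server, _ in accessors}
    writers = {server for server, mode in accessors if mode == "write"}
    if len(servers) <= 1:
        return True           # single server: full page cache usage
    return len(writers) == 0  # multiple servers: only if all are read-only
```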
In this mount mode, the Linux operating system operates in write-back mode rather than write-through. When a write operation occurs, it is promptly acknowledged by the WEKA client and temporarily stored in the kernel memory cache. The actual persistence of this data to resilient storage happens later as a background operation.
This mode enhances performance, especially in reducing write latency, while ensuring data coherency. For instance, if a file is accessed through another server, the local cache is invalidated, and the data is synchronized to maintain a consistent view of the file.
To synchronize the filesystem and commit all changes in the write cache—useful, for example, when ensuring synchronization before taking a snapshot—you can employ the following system calls: sync, syncfs, and fsync.
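For example, a minimal sketch of committing cached writes with fsync (standard POSIX behavior, not WEKA-specific; the path is illustrative):

```python
import os

def write_durably(path, data):
    """Write data and fsync it, so the write-cache contents are committed
    to resilient storage before returning (sync/syncfs flush more broadly:
    all filesystems, or one whole filesystem)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # block until the data is persisted
    finally:
        os.close(fd)
```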
The WEKA client allows multiple mount points for the same filesystem on a single server, supporting different mount modes. This is useful in containerized environments where various server processes require distinct read/write access or caching schemes.
Each mount point on the same server is treated independently for cache consistency. For example, two mounts with write cache mode on the same server may have different data simultaneously, accommodating diverse requirements for applications or workflows on that server.
Unlike file data, file metadata is managed in the Linux operating system through the directory entry (Dentry) cache. While maximizing efficiency in handling directory entries, the Dentry cache is not strongly consistent across WEKA clients. For applications prioritizing metadata consistency, it is possible to configure metadata for strong consistency by mounting without a Dentry cache.
Related topic
eth0: Management VPC
eth1: Compute VPC
eth2: Frontend VPC
eth3: Drive VPC
VPC peering facilitates communication between the WEKA processes, each using its own NIC. GCP limits the maximum number of peers within a VPC to 25 (you can request a quota increase, but approval depends on GCP resource availability).
A local object store bucket for tiering and snapshots.
A second local object store bucket for additional tiering and snapshots. Note that adding a second local bucket renders the first local bucket read-only.
A remote object store bucket exclusively for snapshots.
Multiple object store buckets offer flexibility for various use cases, including:
Migrating to different local object stores when detaching a read-only bucket from a filesystem tiered to two local object store buckets.
Scaling object store capacity.
Increasing total tiering capacity for filesystems.
Backing up data in a remote site.
In cloud environments, users can employ cloud lifecycle policies to transition storage tiers or classes. For example, in AWS, users can move objects from the S3 standard storage class to the S3 intelligent tiering storage class for long-term retention using the AWS lifecycle policy.
Related topics
Ensure that WEKA has access to instance metadata. The system uses Instance Metadata Service Version 2 (IMDSv2) by default for enhanced security.
If you deploy in AWS without using the CloudFormation template, or if you add capabilities such as tiering after deployment, provide permissions to several AWS APIs. For details, refer to the .
Ensure the selected subnet has enough available IP addresses. Each core allocated to the WEKA system requires an Elastic Network Interface (ENI).
The selection and configuration of instance types determine the two deployment types:
A client backend deployment uses two different instance types:
Backend instances: Instances that contribute their drives and all possible CPU and network resources to the cluster.
Client instances: Instances that connect to the cluster created by the backend instances and run an application using one or more shared filesystems.
In client backend deployments, you can add or remove clients according to the application's resource requirements.
You can also add backend instances to increase cluster capacity or performance. To remove backend instances, you must first deactivate them to allow for safe data migration.
Stopping or terminating backend instances causes a loss of all data of the instance store. For more information, refer to Amazon EC2 Instance Store.
In a converged deployment, every instance contributes its resources, such as drives, CPUs, and network interfaces, to the cluster.
A converged deployment is suitable for the following scenarios:
Small applications: For applications with low resource requirements that need a high-performance filesystem. The application can run on the same instances that store the data.
Cloud-bursting: For cloud-bursting workloads where you need to maximize resource allocation to both the application and the WEKA cluster to achieve peak performance.
WEKA supports client instances with at least two NICs, one for management and one for the frontend data. It is possible to add more NICs for redundancy and higher performance.
A client with the same VPC networks and subnets as the cluster can connect without additional configuration. If a client is on another VPC network, peering is required between the VPC networks.
The client instance must be in the same region as the WEKA cluster on GCP.
Create a mount point (only once):
Install the WEKA agent (only once):
Example:
Mount a stateless client on the filesystem. In the mount command, specify all the NICs of the client.
DPDK mount with four NICs:
Example:
UDP mount:
Example:
Related topics
The Terraform-AWS-WEKA module is an open-source repository. It contains modules to customize the WEKA cluster installation on AWS. The default protocol deployed using the module is POSIX.
The Terraform-AWS-WEKA module supports public and private cloud deployments. All deployment types require passing the get.weka.io token to Terraform to download the WEKA release from the public get.weka.io service.
The Terraform-AWS-WEKA module consists of the following components:
Required module:
WEKA Root Module is located in the main Terraform module.
Optional sub-modules:
The following is a basic example in which you provide the minimum detail of your cluster, and the Terraform module completes the remaining required resources, such as VPC, subnets, security group, placement group, DNS zone, and IAM roles.
You can use this example as a reference to create the main.tf file.
This page describes how to attach or detach object stores buckets to or from filesystems.
Two local object store buckets can be attached to a filesystem, but only one of the buckets is writable. A local object store bucket is used for both tiering and snapshots. When attaching a new local object store bucket to an already tiered filesystem, the existing local object store bucket becomes read-only, and the new object store bucket is read/write. Multiple local object stores allow a range of use cases, including migration to different object stores, scaling of object store capacity, and increasing the total tiering capacity of filesystems.
When attaching a local object store bucket to a non-tiered filesystem, the filesystem becomes tiered.
Detaching a local object store bucket from a filesystem migrates the filesystem data residing in the object store bucket to the writable object store bucket (if one exists) or to the SSD.
When detaching, the background task of detaching the object store bucket begins. Detaching can be a long process, depending on the amount of data and the load on the object stores.
Detaching an object store bucket is irreversible. Attaching the same bucket again is considered as re-attaching a new bucket regardless of the data stored in the bucket.
Migration to a different object store: When detaching from a filesystem tiered to two local object store buckets, only the read-only object store bucket can be detached. In such cases, the background task copies the relevant data to the writable object store. In addition, only enough SSD capacity for the metadata needs to be allocated.
Un-tiering a filesystem: Detaching from a filesystem tiered to one object store bucket un-tiers the filesystem and copies the data back to the SSD. The allocated SSD capacity must be at least the total capacity the filesystem uses.
On completion of detaching, the object store bucket no longer appears under the filesystem when using the weka fs command. However, it still appears under the object store and can be removed if no other filesystem uses it. The data in the read-only object store bucket remains in the object store bucket for backup purposes. If this is unnecessary, or if reclamation of object store space is required, the object store bucket can be deleted.
Once the migration process is complete, the relevant data has been migrated, but old snapshots (and their locators) still reside on the old object store bucket. To recreate snapshot locators on the new object store bucket, re-upload the snapshots to the new bucket.
During migration (using the detach operation), only the necessary data is copied, which reduces migration time and capacity. You may, however, choose to keep snapshots in the old object store bucket.
Migration workflow
The order of the following steps is important.
Attach a new object store bucket (the old object store bucket becomes read-only).
Delete any snapshot that does not need to be migrated. This action keeps the snapshot on the old bucket but does not migrate its data to the new bucket.
Detach the old object store bucket.
One remote object store bucket can be attached to a filesystem. A remote object store bucket is used for backup. Only snapshots are uploaded using Snap-To-Object. The snapshot uploads are incremental to the previous one.
Detaching a remote object store bucket from a filesystem keeps the backup data within the bucket intact. It is still possible to use these snapshots for recovery.
Related topics
The WEKA project ultimately uses the internal GCP resources. A basic WEKA project includes a cluster with several virtual private clouds (VPCs), VMs (instances), a load balancer, DNS, cloud storage, a secret manager, and a few more elements that manage the resize of the cluster. The peering between all the virtual networks enables running the functions across all the networks.
A resize cloud function in vpc-0 and a workload listener are deployed for auto-scale instances in GCP. Once a user sends a request for resizing the number of instances in the cluster, the workload listener checks the cluster state file in the cloud storage and triggers the resize cloud function if a resize is required. The cluster state file is an essential part of the resizing decision. It indicates states such as:
Readiness of the cluster.
The number of existing instances.
The number of requested instances.
The secret manager retains the user name (usually admin) and the Terraform-generated password. The resize cloud function uses the user name and password to operate on the cluster instances.
Depending on the required security level, you can deploy the WEKA project on one of the following subnet types:
Public subnet: Use a public subnet within your VPC with an internet gateway, and allow public IP addresses for your instances.
Private subnet shared with a Bastion project: Create a private subnet with a shared project with a Bastion project, a risk-based security solution used for authenticating communication with a public network, such as downloading the WEKA software from get.weka.io. The Bastion project includes a Bastion VM (host) acting as a network gateway. The relevant ports are open (by the Terraform files).
Private subnet shared with a yum project: If a connection to get.weka.io for downloading the WEKA software is impossible, create a private subnet with a yum repository containing the WEKA software. The relevant ports are open (by the Terraform files).
Auto-scaling is useful for easily scaling the number of EC2 instances up or down as needed.
After deploying the WEKA cluster via CloudFormation, it is possible to create an auto-scaling group to ease the WEKA cluster size management.
You can create an auto-scaling group for your cluster by running the wekactl utility.
You can control the number of instances by either changing the desired capacity of instances from the AWS auto-scaling group console or defining your custom metrics and scaling policy in AWS. Once the desired capacity has changed, WEKA takes care of safely scaling the instances.
For more information and documentation on the utility, refer to the .
Learn how to manage authentication across multiple clusters in the WEKA CLI using connection profiles, enabling seamless switching between clusters without re-authentication.
Managing authentication across multiple clusters in the WEKA CLI is streamlined with connection profiles. By default, when you run the weka user login command, it creates a profile stored as .weka/auth-token.json. This is sufficient for single-cluster environments. However, in a multi-cluster environment, use the --profile parameter to create and manage separate profiles for each cluster. This allows you to switch between clusters without needing to re-authenticate each time, enhancing efficiency and usability.
Profile naming conventions
When creating a connection profile, follow these guidelines:
Maximum length: 50 characters
Allowed characters:
Alphanumeric (A-Z, a-z, 0-9)
Underscores (_)
Profile names dictate where authentication details are stored in the .weka directory:
Default profile: .weka/auth-token.json
Named profiles: .weka/auth-token-<profile-name>.json
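The naming convention above can be sketched as follows (assuming the .weka directory lives under the user's home directory):

```python
from pathlib import Path

def auth_token_path(profile=None):
    """Return where authentication details are stored for a connection
    profile, following the convention described above."""
    base = Path.home() / ".weka"
    if profile is None:
        return base / "auth-token.json"
    return base / "auth-token-{}.json".format(profile)
```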
The weka user login command supports profiles, enabling you to specify which profile to use or create a new one.
Command syntax:
Default profile: If no profile is specified, the system uses the default profile.
Profile-specific file: Authentication information is saved in a file named after the profile.
Success message: After a successful login, the following message appears:
Failure message: If the profile is not found or the login fails, an error message displays the profile name and file path.
The weka user logout command supports profiles, enabling you to remove the authentication details for a specific profile.
Command syntax:
The specified profile’s authentication file is deleted.
If no profile is specified, the default profile is logged out.
You can specify a profile when executing most WEKA CLI commands using the --profile option. If no profile is provided, the default profile is used.
Command syntax:
Related topic
Scale-out is the process of increasing the number of EC2 instances in the system to handle higher workloads or enhance redundancy.
Scale-out is essential to ensure a system can meet growing demands, maintain performance, and distribute workloads effectively. This proactive approach helps prevent overloads, reduce response times, and maintain high availability.
Action
Increase the desired size of the Auto-Scaling Group (ASG) associated with your WEKA cluster using the AWS Console or AWS CLI.
Result
AWS automatically launches the new EC2 instance.
AWS triggers the Lambda Function to create a join script that runs once as part of the instance user data and, subsequently, integrates the new EC2 instance into the existing WEKA cluster.
You can monitor the process in the AWS Step Function GUI.
Scale-in is the process of reducing the number of EC2 instances of a system to align with decreased workloads or to optimize resource utilization.
Scale-in is essential for efficient resource management, cost reduction, and ensuring the appropriate allocation of resources. It helps prevent over-provisioning, lowers operational expenses, and safeguards against unintentional removal of EC2 instances from the existing WEKA cluster in AWS.
The cluster is configured with scale-in protection and instance termination protection to enhance the safety of this process.
Action
Decrease the desired size of the Auto-Scaling Group (ASG) associated with your WEKA cluster. You can do this through the AWS Console, AWS CLI, or other compatible methods.
Result
Modifying the desired size does not immediately impact the Auto Scaling Group (ASG). Instead, a Step Function continuously monitors the configuration.
This Step Function runs every minute and identifies that the desired size is less than the current WEKA system's size.
When this condition is met, it initiates a scale-in process, but only if certain conditions are met, such as having enough capacity on the filesystem.
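The decision the Step Function makes each minute can be sketched as follows (illustrative pseudologic, not the actual Lambda code; the capacity flag stands in for the "certain conditions" mentioned above):

```python
def should_scale_in(desired_size, current_size, filesystem_capacity_ok):
    """Scale in only when the ASG desired size has dropped below the
    current WEKA cluster size and the filesystems still fit after
    removing an instance."""
    return desired_size < current_size and filesystem_capacity_ok
```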
This requirement only applies when manually preparing and installing the WEKA cluster on bare metal servers.
If you are not using the WMS or WSA automated tools for installing a WEKA cluster, manually install a supported OS and the WEKA software on the bare metal server.
Procedure
Follow the relevant Linux documentation to install the operating system, including the required packages.
Required packages
Before a WEKA system can use a Broadcom BCM57508-P2100G, the server must have the necessary drivers and firmware from Broadcom's download center.
Procedure:
Download software bundle: Access Broadcom's download center and download the software bundle onto the target server. Carefully review the instructions included in the bundle.
Snapshots enable the saving of a filesystem state to a directory and can be used for backup, archiving and testing purposes.
Snapshots allow the saving of a filesystem state to a .snapshots directory under the root filesystem. They can be used for:
Physical backup: The snapshots directory can be copied into a different storage system, possibly on another site, using either the WEKA system Snap-To-Object feature or third-party software.
Logical backup: Periodic snapshots enable filesystem restoration to a previous state if logical data corruption occurs.
This page describes how to manage quotas to alert or restrict usage of the WEKA filesystem.
Stripe Size: 4+2
8 backend server instances of i3en.12xlarge
Amazon Linux AMI 2017.09.0.20170930 x86_64 HVM
Backend servers are placed in the same placement group
7 dedicated cores for WEKA
4 compute
2 drives
1 frontend
c5n.18xlarge instances
For the aggregated results, 8 clients were used
Amazon Linux AMI 2017.09.0.20170930 x86_64 HVM
4 frontend cores
DPDK networking
Mount options: using system defaults
Stripe Size: 4+2
8 backend servers (SYS-2029BT-HNR / X11DPT-B), each:
OS: CentOS Linux release 7.8.2003 (3.10.0-1127.el7.x86_64)
Memory: 384 GB
Drives: 6 Micron 9300 drives (MTFDHAL3T8TDP)
Network: Dual 100 Gbps Ethernet
CPU/Threads: 24/48 (Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz)
19 dedicated cores for WEKA
12 compute
6 drives
1 frontend
SYS-2029BT-HNR / X11DPT-B servers
For the aggregated results, 8 clients were used
OS: CentOS Linux release 7.8.2003 (3.10.0-1127.el7.x86_64)
Memory: 192 GB
Network: Dual 100 Gbps Ethernet
CPU/Threads: 24/48 (Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz)
6 frontend cores
DPDK networking
Mount options: using system defaults
ETCD replacement for S3 protocol improvement: The ETCD component, which stores the IAM format, policies, service accounts, users, STS, and policy mappings, is replaced by a more robust mechanism. When upgrading a cluster running with ETCD from V4.1, the cluster continues to run with ETCD in V4.2, and an alert is raised to migrate to the new mechanism. Contact the WEKA Customer Success team to perform this update.
On a filesystem level: Set a different filesystem per department/project.
On a directory level: Set a different quota per project directory (useful when users are part of several projects) or per-user home directory.
The organization admin can set a quota on a directory. Setting a quota starts the process of counting the current directory usage. Until this process is done, the quota is not considered (for empty directories, the process completes instantly).
The organization admin sets quotas to inform/restrict users from using too much of the filesystem capacity. For that, only data in the user's control is considered. Hence, the quota doesn't count the overhead of the protection bits and snapshots. It does take into account the data and metadata of files in the directory, regardless of whether they are tiered or not.
When working with quotas, consider the following:
To set a quota, the relevant filesystem must be mounted on the server where the set quota command is to be run.
When setting a quota, go through a new mount-point. If you use a server with mounts from WEKA versions before 3.10, first unmount all relevant mount points and then mount them again.
Quotas can be set within nested directories (up to 4 levels of nested quotas are supported) and can be over-provisioned under the same directory quota tree. For example, the /home directory can have a quota of 1 TiB while 200 users each have a home directory under it with a quota of 10 GiB. In this over-provisioned setup, parent quotas are enforced on all subdirectories, regardless of any remaining capacity in the child quotas.
Moving files (or directories) between two directories with quotas, into a directory with a quota, or out of a directory with a quota is not supported. The WEKA filesystem returns EXDEV in such a case, which the operating system usually converts to a copy-and-delete operation; the exact behavior is OS-dependent.
Once a directory has a quota, only newly created hardlinks within the quota limits are part of quota calculations.
Restoring a filesystem from a snapshot turns the quotas back to the configuration at the time of the snapshot.
Creating a new filesystem from a snap-2-obj does not preserve the original quotas.
When working with enforcing quotas along with a writecache mount-mode, similarly to other POSIX solutions, getting above the quota might not sync all the cache writes to the backend servers. Use sync, syncfs, or fsync to commit the cached changes to the system (or fail due to exceeding the quota).
When a hard quota is set on a directory, running the df utility considers the hard quota as the total capacity of the directory and provides the use% relative to the quota. This can help users understand their usage and proximity to the hard quota.
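As an illustration of the quota accounting described above, the following Python sketch (with hypothetical numbers) shows how df reports usage relative to a hard quota, and how child quotas can over-provision a parent quota:

```python
GiB = 1024**3
TiB = 1024**4

def df_usage_percent(used_bytes: int, hard_quota_bytes: int) -> float:
    """With a hard quota set, df reports the quota as the directory's
    total capacity, so use% is computed relative to the quota."""
    return 100.0 * used_bytes / hard_quota_bytes

# Hypothetical: a user directory with a 10 GiB hard quota and 7.5 GiB used.
print(round(df_usage_percent(int(7.5 * GiB), 10 * GiB)))  # 75

# Over-provisioning, as in the /home example: child quotas may sum to more
# than the parent quota; the parent quota is still enforced on all children.
parent_quota = 1 * TiB
child_quotas = 200 * [10 * GiB]  # 200 user directories, 10 GiB each
print(sum(child_quotas) > parent_quota)  # True: over-provisioned
```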







mkdir /mnt/weka
curl <backend server http address>:14000/dist/v1/install | sh

clients: Enables creating stateless WEKA clients that automatically join the WEKA cluster during cluster creation. The WEKA clients host applications or workloads.
endpoints: Creates private network VPC endpoints, including EC2 VPC endpoints, S3 gateway, Lambda VPC endpoint, WEKA proxy VPC endpoint, and a security group to open port 1080 for the WEKA proxy VPC endpoint.
IAM: Creates IAM roles for EC2 instances, CloudWatch events, WEKA Lambda functions, and Step Function. IAM roles can be created in advance, or if module variables are unspecified, WEKA automatically creates them.
network: Creates VPC, Internet Gateway/NAT, public/private subnets, and so on if pre-existing network variables are not supplied in advance.
security_group: Automatically creates the required security group if not provided in advance.
Compile and install: Follow the provided instructions to compile and install the following components:
bnxt_en driver.
sliff driver.
niccli command line utility.
Post-installation steps: After installation, run one of the following commands based on the Linux distribution:
dracut -f
update-initramfs -u
Reboot the server: Reboot the server to apply the changes.
After installing Broadcom drivers and software, install the firmware included in the download bundle. Firmware files are typically named after the adapter they are intended for, such as BCM957508-P2100G.pkg.
Procedure:
Identify the target adapter: Run niccli --list to list the Broadcom adapters, and identify the target adapter by its decimal device number (for example, 1 for the first BCM57508).
Install the firmware: Run niccli --dev <device> install <firmware package>, for example, niccli --dev 1 install BCM957508-P2100G.pkg.
Confirm and complete the installation: Follow the prompts to confirm and complete the firmware update.
Reboot the server: Reboot the server to apply the firmware update.
To enable WEKA system compatibility, configure certain NVM options to increase the number of Virtual Functions (VFs) and enable TruFlow.
Procedure:
Increase the number of VFs to 64: Run the following commands (the value 0x40 is hexadecimal for 64):
Enable TruFlow: Run the following commands:
Additional configuration for BCM57508-P2100G: Run the following command:
Reboot the server: Reboot the server to apply the changes.
The adapter is ready for use by the WEKA system.
curl http://10.20.0.2:14000/dist/v1/install | sh

mount -t wekafs -o net=eth1/IP/NETMASK/GATEWAY -o net=eth2/IP/NETMASK/GATEWAY -o net=eth3/IP/NETMASK/GATEWAY -o mgmt_ip=<management IP (eth0)> -o num_cores=4 -o dpdk_base_memory_mb=32 <backend server IP address>/<filesystem name> /mnt/weka

mount -t wekafs -o net=eth1/10.20.30.101/24/10.20.30.1 -o net=eth2/10.20.31.102/24/10.20.31.1 -o net=eth3/10.20.32.103/24/10.20.32.1 -o mgmt_ip=10.20.33.100 -o num_cores=4 -o dpdk_base_memory_mb=32 10.20.30.40/fs1 /mnt/weka

mount -t wekafs -o net=udp -o num_cores=0 -o mgmt_ip=<management IP (eth0)> <backend server IP address>/<filesystem name> /mnt/weka

mount -t wekafs -o net=udp -o num_cores=0 -o mgmt_ip=10.20.30.100 10.20.30.40/fs1 /mnt/weka

terraform {
required_version = ">= 1.4.6"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.5.0"
}
}
}
provider "aws" {
}
module "weka_deployment" {
source = "weka/weka/aws"
version = "1.0.1"
prefix = "weka-tf"
cluster_name = "poc"
availability_zones = ["eu-west-1c"]
allow_ssh_cidrs = ["0.0.0.0/0"]
get_weka_io_token = "Your get.weka.io token"
}
output "weka_deployment_output" {
value = module.weka_deployment
}

weka user login --profile <profile-name>

Login completed successfully.
<Default/profileN> profile updated.

weka user logout --profile <profile-name>

weka <command> --profile <profile-name>

niccli --dev 1 nvm --setoption enable_sriov --value 1
niccli --dev 1 nvm --setoption number_of_vfs_per_pf --scope 0 --value 0x40
niccli --dev 1 nvm --setoption number_of_vfs_per_pf --scope 1 --value 0x40

niccli --dev 1 nvm --setoption enable_truflow --scope 0 --value 1
niccli --dev 1 nvm --setoption enable_truflow --scope 1 --value 1

niccli --dev 1 nvm --setoption afm_rm_resc_strategy --value 1

# niccli --list
----------------------------------------------------------------------------
Scrutiny NIC CLI v227.0.130.0 - Broadcom Inc. (c) 2023 (Bld-61.52.25.90.16.0)
----------------------------------------------------------------------------
BoardId MAC Address FwVersion PCIAddr Type Mode
1) BCM57508 84:16:0A:3E:0E:20 224.1.102.0 00:0d:00:00 NIC PCI
2) BCM57508 84:16:0A:3E:0E:21 224.1.102.0 00:0d:00:01 NIC PCI

# niccli --dev 1 install BCM957508-P2100G.pkg

Broadcom NetXtreme-C/E/S firmware update and configuration utility version v227.0.120.0
NetXtreme-E Controller #1 at PCI Domain:0000 Bus:3b Dev:00 Firmware on NVM - v224.1.102.0
NetXtreme-E Controller #1 will be updated to firmware version v227.1.111.0
Do you want to continue (Y/N)?y
NetXtreme-C/E/S Controller #1 is being updated....................................................
Firmware update is completed.
A system reboot is needed for the firmware update to take effect.

Install the WEKA software.
Once the WEKA software tarball is downloaded from get.weka.io, extract it by running tar.
Run the install.sh command on each server, according to the instructions in the Install tab.
Once completed, the WEKA software is installed on all the allocated servers and runs in stem mode (no cluster is attached).
Related topic
(on the Prerequisites and compatibility topic)
Before you begin
Verify that an object store bucket is available.
Procedure
From the menu, select Manage > Filesystems.
On the Filesystem page, select the three dots on the right of the filesystem that you want to attach to the object store bucket. Then, from the menu, select Attach Object Store Bucket.
On the Attach Object Store Bucket dialog, select the relevant object store bucket.
Detaching a local object store bucket from a filesystem migrates the filesystem data residing in the object store bucket either to the writable object store bucket (if one exists) or to the SSD.
Procedure
From the menu, select Manage > Filesystems.
On the Filesystem page, select the filesystem from which you want to detach the object store bucket.
From the Detach Object Store Bucket dialog, select Detach. If the filesystem is attached to two object store buckets (one is read-only, and the other is writable), you can detach only the read-only one. The data of the detached object store bucket is migrated to the writable object store bucket.
In the message that appears, to confirm the detachment, select Yes.
If the filesystem is tiered and only one object store is attached, detaching the object store bucket opens the following message:
Object store buckets usually expand the filesystem capacity. Un-tiering of a filesystem requires adjustment of its total capacity. Select one of the following options:
Increase the SSD capacity to match the current total capacity.
Reduce the total filesystem capacity to match the SSD or used capacity (the decrease option depends on the used capacity).
Configure a different capacity.
Select the option that best meets your needs, and select Continue.
In the message that appears, select Detach to confirm the action.
Archive: Periodic snapshots enable accessing a previous filesystem state for compliance or other needs.
DevOps environments: Writable snapshots enable the execution of software tests on copies of the data.
Snapshots do not impact system performance and can be taken for each filesystem while applications are running. They consume minimal space: only the differences between the filesystem and its snapshots, or between snapshots, are stored, at 4K granularity.
By default, snapshots are read-only, and any attempt to change the content of a read-only snapshot returns an error message.
It is possible to create a writable snapshot. A writable snapshot cannot be changed to a read-only snapshot.
The WEKA system supports the following snapshot operations:
View snapshots.
Create a snapshot of an existing filesystem.
Delete a snapshot.
Access a snapshot under a dedicated directory name.
Restore a filesystem from a snapshot.
Create a snapshot of a snapshot (relevant for writable snapshots or read-only snapshots before being made writable).
List the snapshots and obtain their metadata.
Schedule automatic snapshots. For details, see the related topic.
Do not move a file within a snapshot directory or between snapshots: The kernel implements such a move as a copy operation, as it does when moving between different filesystems. For directories, such operations fail.
Working with symlinks (symbolic links):
When accessing symlinks through the .snapshots directory, symlinks with absolute paths can lead to the current filesystem. Depending on your needs, consider either not following symlinks or using relative paths.
The maximum number of snapshots in a system depends on whether they are read-only or writable.
If all snapshots are read-only, the maximum is 24K (24,576).
If all snapshots are writable, the maximum is 14K (14,336).
A system can have a mix of read-only and writable snapshots, given that a writable snapshot consumes about twice the internal resources of a read-only snapshot.
Some examples of maximum mixes of read-only and writable snapshots a system can have:
20K read-only and 4K writable snapshots.
12K read-only and 8K writable snapshots.
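The mixing examples above are consistent with a simple resource model: a read-only snapshot costs one unit, a writable snapshot costs two, and the total budget is 28,672 units (2 x 14,336), with read-only snapshots additionally capped at 24,576. This model is inferred from the numbers above, not a documented WEKA formula; the sketch below only illustrates the arithmetic:

```python
RO_CAP = 24_576       # maximum if all snapshots are read-only (24K)
BUDGET = 2 * 14_336   # all-writable maximum is 14K, each costing ~2x a read-only

def mix_fits(read_only: int, writable: int) -> bool:
    """Check a snapshot mix against the inferred resource model."""
    return read_only <= RO_CAP and read_only + 2 * writable <= BUDGET

print(mix_fits(20 * 1024, 4 * 1024))  # True: 20K read-only + 4K writable
print(mix_fits(12 * 1024, 8 * 1024))  # True: 12K read-only + 8K writable
print(mix_fits(20 * 1024, 8 * 1024))  # False: exceeds the budget
```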
Related topics
This guide outlines the customization process for Terraform configurations to deploy the WEKA cluster on AWS. It is designed for system engineers with expertise in AWS and Terraform. Start by creating a main.tf file and adapting it to your AWS deployment requirements. Once configured to your preferences, proceed to apply the changes.
Terraform must be installed on the workstation used for the deployment. Check the minimum required Terraform version specified in the Terraform-AWS-WEKA module.
Review the and use it as a reference for creating the main.tf according to your deployment specifics on AWS.
Tailor the main.tf file to create SMB-W or NFS protocol clusters by adding the relevant code snippet. Adjust parameters like the number of gateways, instance types, domain name, and share naming:
SMB-W
NFS
Add WEKA POSIX clients (optional): If needed, add WEKA POSIX clients to support your workload by incorporating the specified variables into the main.tf file:
Once you complete the main.tf settings, apply it: Run terraform apply
When deploying a WEKA cluster on the cloud using Terraform, a default username (admin) is automatically generated, and Terraform creates the password. Both the username and password are stored in the AWS Secrets Manager. This user facilitates communication between the cloud and the WEKA cluster, particularly during scale-up and scale-down operations.
As a best practice, it’s recommended to create a dedicated local user in the WEKA cluster with the Cluster Admin role. This user will serve as a service account for cloud-cluster communications.
Procedure
Create a local user with the Cluster Admin role in the WEKA cluster.
In the AWS Secrets Manager, navigate to Secrets.
Update the weka_username and weka_password services with the username and password of the newly created local user.
Related topic
This page describes registering to get.weka.io and obtaining the WEKA installation packages: WMS, WSA, and WEKA software release.
To sign in to get.weka.io, you first need to create an account and fill in your details. If you already have a registered account for get.weka.io, skip this procedure.
Procedure
Go to the download site, and select Create an account.
The Send Registration Email page opens.
Fill in your organization's email address (private email is prohibited). Select I’m not a robot, and then select Send Registration Email.
Check your inbox for a registration email from Weka.io. To confirm your registration, select the link. The Create Your Account page opens.
Fill in your email address, full name, and password. Then, select Create Account.
Your request for access to get.weka.io is sent to WEKA for review. Wait for a validation email. Once your registration is approved, you can sign in to get.weka.io.
Download the required WEKA installation packages according to the workflow path.
Path A (automated with WMS and WSA): Download the WMS and WSA ISOs from get.weka.io. The WMS is downloaded from a dedicated dropdown. The WSA is found on the relevant release page.
Path B (automated with WSA): Download the WSA package from get.weka.io. The WSA is found on the relevant release page.
Path C (manual installation and configuration): Download the WEKA software tarball from get.weka.io. The tarball is found on the relevant release page.
You can only sign in and download the packages if you are a registered user.
Procedure: Download from get.weka.io
Go to the download site, and sign in with your registered account.
The get.weka.io page opens.
Do one of the following:
Select the required package from the dashboard.
Select the Releases tab, select the required release, and follow the download instructions. (The token in the download link is purposely blurred.)
Depending on the workflow path you follow, go to one of the following:
(path A)
(path B)
(path C)
This page describes how to view and manage filesystem groups using the GUI.
Using the GUI, you can perform the following actions:
The filesystem groups are displayed on the Filesystems page. Each filesystem group indicates the number of filesystems that use it.
Procedure
From the menu, select Manage > Filesystems.
A filesystem group is required when adding a filesystem. You can create more filesystem groups if you want to apply a different tiering policy on specific filesystems.
Procedure
From the menu, select Manage > Filesystems.
Select the + sign to the right of the Filesystem Groups title.
In the Create Filesystem Group dialog, set the following:
Select Create.
Related topics
To learn more about the drive retention period and tiering cue, see:
You can edit the filesystem group policy according to your system requirements.
Procedure
From the menu, select Manage > Filesystems.
Select the filesystem group you want to edit.
Select the pencil icon to the right of the filesystem group name.
In the Edit Filesystem Group dialog, update the settings as needed. (See the parameter descriptions in the create filesystem group procedure.)
Select Update.
You can delete a filesystem group no longer used by any filesystem.
Procedure
From the menu, select Manage > Filesystems.
Select the filesystem group you want to delete.
Verify that the filesystem group is not used by any filesystems (indicates 0 filesystems).
Select the Remove icon. In the pop-up message, select Yes to delete the filesystem group.
Mount a single WEKA client to multiple clusters simultaneously, optimizing data access and workload distribution.
Mounting filesystems from a single WEKA client to multiple clusters provides several advantages:
Expanded cluster connectivity: A single client can connect to up to seven clusters simultaneously, increasing storage capacity and computational capabilities.
Unified data access: Provides a consolidated view of data across multiple clusters, simplifying access and management while improving data availability, flexibility, and resource efficiency.
Optimized workload distribution: Enables efficient workload distribution across clusters, supporting scalable applications and enhancing overall performance.
Seamless integration: WEKA’s SCMC feature ensures smooth and efficient integration for clients accessing multiple clusters.
The bandwidth division in SCMC is a universal consideration based on the specific NIC's bandwidth. It applies across various NIC types, including those using DPDK or specific models like the X500-T1.
During SCMC mounts, each active connection can use the bandwidth available on its associated NIC port. This is true during peak usage and idle cases. In scenarios where NICs are dual-ported, each connection operates independently, leveraging its dedicated port.
When working with low-bandwidth NICs such as the X500-T1, a 10Gb/s NIC, consider bandwidth calculations. In the context of SCMC, each container (representing connectivity to a different cluster) uses half of the available bandwidth (5Gb/s) for a shared port. Note that a dual-port NIC has a dedicated port for each container, optimizing bandwidth distribution. Keep these factors in mind for an optimal SCMC setup.
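The bandwidth arithmetic above can be sketched as follows (an illustrative model only; actual throughput depends on the workload and NIC):

```python
def per_container_gbps(port_gbps: float, containers_on_port: int) -> float:
    """Containers sharing a single NIC port split its bandwidth evenly
    (illustrative model; actual throughput depends on the workload)."""
    return port_gbps / containers_on_port

# X500-T1 example from the text: a 10 Gb/s port shared by two containers
# (one per cluster) leaves 5 Gb/s for each connection.
print(per_container_gbps(10.0, 2))  # 5.0

# With a dual-port NIC, each container gets a dedicated port.
print(per_container_gbps(10.0, 1))  # 10.0
```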
When a stateless client mounts a filesystem in a cluster, it creates a client container with the same version as provided by the cluster. Because there may be situations where some of the clusters run a different WEKA version than the others, such as during an upgrade, it is required to set the same client target version on all clusters. The client target version is retained regardless of the cluster upgrade.
The client target version must be consistent across all clusters. It can match the cluster version or be one major version earlier (regardless of the minor version), provided that version is available in the cluster for client download.
To upgrade the cluster to a version higher than the first major release above the client version, see .
Connect to each cluster and run the following command to set the client target version.
Where: <version> is the designated client target version, which will be installed on the client container upon the mount command. Ensure this version is installed on the backend servers.
To display the existing client target version in the cluster, run the following command:
To reset the client target version to the cluster version, run the following command:
Use the same commands as with a single client.
To mount a stateless client using UDP mode, add -o net=udp -o core=<core-id> to the command line. For example:
For persistent client containers, the client-target-version parameter is not relevant. The version of the client container is determined when creating the container in the WEKA client using the weka local setup container command. Therefore, ensure that all client containers in the WEKA client have the same minor version as in the clusters.
To mount a persistent client container to a cluster, specify the container name for that mount.
When running WEKA CLI commands from a server hosting multiple client containers, each connected to a different WEKA cluster, it’s required to specify the client container port or the backend IP address/name of the cluster (linked to that client) in the command.
Consider a server with two client containers:
To run a WEKA CLI command on the second cluster (associated with client2), use either of the following methods:
By specifying the backend IP address or name linked to that client container (assuming the backend name is DataSphere2-1):
By specifying the client container port:
This approach ensures that your WEKA CLI command targets the correct WEKA cluster associated with the specified client container.
This page describes how to view and manage filesystem groups using the CLI.
Using the CLI, you can perform the following actions:
Command: weka fs group
Use this command to view information on the filesystem groups in the WEKA system.
Command: weka fs group create
Use the following command to add a filesystem group:
weka fs group create <name> [--target-ssd-retention=<target-ssd-retention>] [--start-demote=<start-demote>]
Parameters
Command: weka fs group update
Use the following command to edit a filesystem group:
weka fs group update <name> [--new-name=<new-name>] [--target-ssd-retention=<target-ssd-retention>] [--start-demote=<start-demote>]
Parameters
Command: weka fs group delete
Use the following command line to delete a filesystem group:
weka fs group delete <name>
Parameters
Related topics
To learn about the tiering policy, see:
Overview of WEKA's container-based architecture, where interconnected processes within server-hosted containers provide scalable and resilient storage services in a cluster.
In the WEKA system, servers operate as members of a cluster, with each server hosting multiple containers. These containers run software instances, referred to as processes, that collaborate and communicate within the cluster to deliver robust and efficient storage services. This architecture ensures scalability and fault tolerance by distributing storage functionality across interconnected containers.
The WEKA system uses different types of processes, each dedicated to specific functions:
Drive processes: Manage SSD drives and handle IO operations to drives. These processes are fundamental to storage operations and each requires a dedicated core to ensure optimal performance.
Compute processes: Handle filesystems, cluster-level functions, and IO from clients. The dedicated core requirement for each compute process ensures consistent processing power for these critical operations.
Frontend processes: Also known as client processes, manage POSIX client access and coordinate IO operations with compute and drive processes. Each frontend process needs a dedicated core to maintain responsive client interactions.
In the WEKA cluster, each server implements a multi-container backend architecture where containers are specialized by process type (drive, compute, or frontend).
Non-Disruptive Upgrade (NDU) capabilities:
Enables true non-disruptive upgrades where containers can run different software versions independently without system interruption
Supports individual container rollback without impacting cluster operations
Maintains continuous network control plane access throughout the upgrade process, ensuring uninterrupted client service
Total processes per cluster: 65,534 (includes all process types: management, drive, compute, and frontend)
Maximum backend processes: 25,000 (excludes frontend processes)
Maximum management processes: 32,767
Maximum drive processes: 62,244
Maximum WEKA cores per server: 64
Maximum cores per container: 19
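A minimal sketch of how the per-server and per-container core limits above constrain container layout (the helper function is hypothetical, not a WEKA sizing tool):

```python
import math

MAX_CORES_PER_SERVER = 64
MAX_CORES_PER_CONTAINER = 19

def min_containers(weka_cores: int) -> int:
    """Minimum number of containers needed to host the requested WEKA cores
    on one server, given the per-container core limit (hypothetical helper,
    not a WEKA sizing tool)."""
    if not 0 < weka_cores <= MAX_CORES_PER_SERVER:
        raise ValueError("weka_cores must be between 1 and 64")
    return math.ceil(weka_cores / MAX_CORES_PER_CONTAINER)

print(min_containers(19))  # 1: fits in a single container
print(min_containers(40))  # 3: 40 cores need at least three containers
print(min_containers(64))  # 4: a fully populated server
```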
This section provides detailed instructions on installing a WEKA system on AWS.
The WEKA® Data Platform on AWS provides a fast and scalable platform for running performance-intensive applications and hybrid cloud workflows.
WEKA provides a ready-to-deploy Terraform package that you can customize for installing the WEKA cluster on AWS. Optionally, you can install the WEKA cluster using the AWS CloudFormation.
Ensure you are familiar with the following concepts and services that are used for the WEKA installation on AWS:
WEKA provides a ready-to-deploy Terraform package that you can customize to install the WEKA cluster on GCP.
The Terraform package contains the following modules:
setup_network: Includes VPCs, subnets, peering, firewall, and health check.
service_account: Includes the service account used for deployment with all necessary permissions.
The WEKA system enables file access through the NFS protocol instead of the WEKA client.
NFS (Network File System) is a protocol that enables clients to access the WEKA filesystem without requiring WEKA's client software. This leverages the standard NFS implementation of the client's operating system.
WEKA supports an advanced NFS implementation, NFS-W, designed to overcome inherent limitations in the NFS protocol. NFS-W is compatible with NFSv3 and NFSv4 protocols, offering enhanced capabilities, including support for more than 16 user security groups.
The legacy NFS stack is also available for backward compatibility, supporting only the NFSv3 protocol and a maximum of 16 user security groups.
This page describes how to manage quotas using the GUI.
Directory quotas monitor the filesystem capacity usage by a directory and allow restricting the amount of space used by the directory.
Using the GUI, you can:
This page details common errors that can occur when deploying WEKA in AWS using CloudFormation and what can be done to resolve them.
Using CloudFormation deployment prevents many of the errors that may occur during installation, especially in the configuration of security groups and other connectivity-related issues. However, errors related to the following subjects may still occur during installation:
Installation logs
AWS account limits
Launch in placement group error
Instance type not supported in AZ
ClusterBootCondition timeout
Clients failed to join cluster
As explained in Self-Service Installation, each instance launched in a WEKA CloudFormation template starts by installing WEKA on itself. This is performed using a script named wekaio-instance-boot.sh and launched by cloud-init. All logs generated by this script are written to the instance’s Syslog.
Additionally, the CloudWatch Logs Agent is installed on each instance, dumping Syslog to CloudWatch under a log-group named /wekaio/<stack-name>. For example, if the stack is named cluster1, a log-group named /wekaio/cluster1 should appear in CloudWatch a few moments after the template shows the instances have reached the CREATE_COMPLETE state.
Under the log-group, there should be a log-stream for each instance Syslog matching the instance name in the CloudFormation template. For example, in a cluster with 6 backend instances, log-streams named Backend0-syslog through Backend5-syslog should be observed.
When deploying the stack, this error may be received in the description of a CREATE_FAILED event for one or more instances, indicating that more instances (N) have been requested than permitted by the current instance limit of L for the specified instance type. To request an adjustment to this limit, go to aws.amazon.com and open a support case with AWS.
If the error Instance i-0a41ba7327062338e failed to stabilize. Current state: shutting-down. Reason: Server.InternalError: Internal error on launch is received, one of the instances was unable to start. This is an internal AWS error; try deploying the stack again.
If the error We currently do not have sufficient capacity to launch all of the additional requested instances into Placement Group 'PG' is received, it was not possible to place all the requested instances in one placement-group.
The CloudFormation template creates all instances in one placement-group to guarantee best performance. Consequently, if the deployment fails with this error, try to deploy in another AZ.
If the error The requested configuration is currently not supported. Please check the documentation for supported configurations or Your requested instance type (T) is not supported in your requested Availability Zone (AZ). Please retry your request by not specifying an Availability Zone or choosing az1, az2, az3 is received, the instance type that you tried to provision is not supported in the specified AZ. Try selecting another subnet to deploy the cluster in, which will implicitly select another AZ.
When a ClusterBootCondition timeout occurs, there was a problem creating the initial WEKA system cluster. To debug this error, look in the Backend0-syslog log-stream (as described above). The first backend instance is responsible for creating the cluster and therefore, its log should provide the information necessary to debug this error.
When the message Clients failed to join for uniqueId: ClientN is received while in the WaitCondition, one of the clients was unable to join the cluster. Look at the Syslog of the client specified in uniqueId as described above.
Example: If the error message specifies that client 3 failed to join, a message ending with uniqueId: Client3 should be displayed. Look at the log-stream named Client3-syslog.
You can monitor the cluster instances by checking the cluster EC2 instances in the AWS EC2 service. You can set up CloudWatch as external monitoring for the cluster.
Connecting to the WEKA cluster GUI provides a system dashboard where you can see if any component is not properly functioning and view system alerts, events, and statistics.
name*: Set a meaningful name for the filesystem group.
target-ssd-retention: The time for keeping data on the SSD after it is copied to the object store. After this period, the copy of the data is deleted from the SSD. Format: 3s, 2h, 4m, 1d, 1d5h, 1w. Default: 1d.
start-demote: The time to wait after the last update before the data is copied from the SSD and sent to the object store. Format: 3s, 2h, 4m, 1d, 1d5h, 1w. Default: 10s.
name*: Name of the filesystem group to edit. It must be a valid name.
new-name: New name for the filesystem group.
target-ssd-retention: The time for keeping data on the SSD after it is copied to the object store. After this period, the copy of the data is deleted from the SSD. Format: 3s, 2h, 4m, 1d, 1d5h, 1w.
start-demote: The time to wait after the last update before the data is copied from the SSD and sent to the object store. Format: 3s, 2h, 4m, 1d, 1d5h, 1w.
name*: Name of the filesystem group to delete.
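The duration format shared by target-ssd-retention and start-demote (3s, 2h, 4m, 1d, 1d5h, 1w) can be illustrated with a small parser sketch. This is an assumption-based illustration, not WEKA's implementation:

```python
import re

UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_retention(value: str) -> int:
    """Parse a duration such as 3s, 2h, 4m, 1d, 1d5h, or 1w into seconds.
    Illustrative parser only, not WEKA's implementation."""
    parts = re.findall(r"(\d+)([smhdw])", value)
    if not parts or "".join(n + u for n, u in parts) != value:
        raise ValueError(f"invalid duration: {value!r}")
    return sum(int(n) * UNIT_SECONDS[u] for n, u in parts)

print(parse_retention("1d"))    # 86400  (the default target-ssd-retention)
print(parse_retention("10s"))   # 10     (the default start-demote)
print(parse_retention("1d5h"))  # 104400
```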
Command: weka fs tier s3 attach
To attach an object store to a filesystem, use the following command:
weka fs tier s3 attach <fs-name> <obs-name> [--mode mode]
Parameters
fs-name*: Name of the filesystem to attach to the object store.
obs-name*: Name of the object store to attach.
mode: The operational mode for the object store bucket. Possible values: writable (local access for read/write operations) or remote (read-only access for remote object stores). Default: writable.
Command: weka fs tier s3 detach
To detach an object store from a filesystem, use the following command:
weka fs tier s3 detach <fs-name> <obs-name>
Parameters
fs-name*: Name of the filesystem to be detached from the object store.
obs-name*: Name of the object store to be detached.





Optimized hardware utilization:
Supports up to 64 WEKA cores per server
Multiple containers per process type
Flexible core allocation across containers
Up to 19 cores per container
Improved maintenance operations:
Selective process management
Ability to maintain drive processes while stopping compute and frontend processes




ap-northeast-1: Asia Pacific (Tokyo)
ap-northeast-2: Asia Pacific (Seoul)
ap-southeast-1: Asia Pacific (Singapore)
ap-southeast-2: Asia Pacific (Sydney)
ca-central-1: Canada (Central)
eu-central-1: Europe (Frankfurt)
eu-north-1: Europe (Stockholm)
eu-west-1: Europe (Ireland)
eu-west-2: Europe (London)
sa-east-1: South America (São Paulo)
us-east-1: US East (N. Virginia)
us-east-2: US East (Ohio)
us-west-1: US West (N. California)
us-west-2: US West (Oregon)
ap-south-1: Asia Pacific (Mumbai)
Drive Retention Period: Set the period to keep the data on the SSD after it is copied to the object store. After this period, the copy of the data is deleted from the SSD.
Tiering Cue: Set the time to wait after the last update before the data is copied from the SSD and sent to the object store.





smb_protocol_gateways_number = 3
smb_protocol_gateway_instance_type = "c5.2xlarge"
smbw_enabled = true
smb_domain_name = "CUSTOMER_DOMAIN"
smb_share_name = "SPECIFY_SMB_SHARE_NAMING"
smb_setup_protocol = true

nfs_protocol_gateways_number = 1
nfs_protocol_gateway_instance_type = "c5.2xlarge"
nfs_setup_protocol = true

clients_number = 2
client_instance_type = "c5.2xlarge"

weka cluster client-target-version set <version>
weka cluster client-target-version show
weka cluster client-target-version reset

mount -t wekafs <backend-name>/<fs-name> <mount-point> -o container_name=<container-name>
mount -t wekafs backend-server-0/my_fs /mnt/weka -o net=udp -o core=2 -o container_name=frontend0
mount -t wekafs <fs-name> <mount-point> -o container_name=<container-name>

weka local ps
CONTAINER STATE DISABLED UPTIME MONITORING PERSISTENT PORT PID STATUS VERSION LAST FAILURE
client1 Running False 3:15:57h True False 14000 58318 Ready 4.2.18
client2 Running False 3:14:35h True False 14101 59529 Ready 4.2.18

weka status -H DataSphere2-1
weka status -P 14101

elfutils
fio
git
hwloc
iperf
ipmitool
kexec-tools
jq
ldap-client
libaio-dev
lldpd
nfs-client
nload
nmap
numactl
nvme-cli
pdsh
python3
sshpass
sysstat
tmate

Related information
To install WEKA on AWS, an AWS account is required. Visit the AWS site to create an AWS account.
Related topics
Explore the two key technologies in network virtualization: VirtIO in DPDK mode and gVNIC in UDP mode. VirtIO in DPDK mode offers high-performance network interfaces in virtual machines, while gVNIC in UDP mode provides reliable, high-speed network connectivity.
A2
a2-highgpu-1g, a2-highgpu-2g, a2-highgpu-4g, a2-highgpu-8g, a2-megagpu-16g, a2-ultragpu-1g
C2
c2-standard-8, c2-standard-16
C2D
c2d-standard-4, c2d-standard-8, c2d-standard-16, c2d-standard-32, c2d-standard-56, c2d-standard-112, c2d-highmem-56
E2
e2-standard-4, e2-standard-8, e2-standard-16, e2-highmem-4, e2-highcpu-8
N2
n2-standard-4, n2-standard-8, n2-standard-16, n2-standard-32, n2-standard-48, n2-standard-96, n2-standard-128, n2-highmem-32
A2
a2-highgpu-1g, a2-highgpu-2g, a2-highgpu-4g, a2-highgpu-8g,
a2-megagpu-16g, a2-ultragpu-1g
A3
a3-highgpu-8g
C2
c2-standard-8, c2-standard-16, c2-standard-30, c2-standard-60
C2D
c2d-standard-4, c2d-standard-8, c2d-standard-16, c2d-standard-32, c2d-standard-56, c2d-standard-112, c2d-highmem-56
C3
c3-standard-4, c3-standard-8, c3-standard-22, c3-standard-44, c3-standard-88, c3-standard-176, c3-highcpu-4, c3-highcpu-8, c3-highcpu-22, c3-highcpu-44, c3-highcpu-88, c3-highcpu-176, c3-highmem-4, c3-highmem-8, c3-highmem-22, c3-highmem-44, c3-highmem-88, c3-highmem-176, c3-standard-4-lssd, c3-standard-8-lssd, c3-standard-22-lssd, c3-standard-44-lssd, c3-standard-88-lssd, c3-standard-176-lssd
C3D
c3d-standard-4, c3d-standard-8, c3d-standard-16, c3d-standard-30, c3d-standard-60, c3d-standard-90, c3d-standard-180, c3d-standard-360, c3d-highcpu-4, c3d-highcpu-8, c3d-highcpu-16, c3d-highcpu-30, c3d-highcpu-60, c3d-highcpu-90, c3d-highcpu-180, c3d-highcpu-360, c3d-highmem-4, c3d-highmem-8, c3d-highmem-16, c3d-highmem-30, c3d-highmem-60, c3d-highmem-90, c3d-highmem-180, c3d-highmem-360, c3d-standard-8-lssd, c3d-standard-16-lssd, c3d-standard-30-lssd, c3d-standard-60-lssd, c3d-standard-90-lssd, c3d-standard-180-lssd, c3d-standard-360-lssd, c3d-highmem-8-lssd, c3d-highmem-16-lssd, c3d-highmem-30-lssd, c3d-highmem-60-lssd, c3d-highmem-90-lssd, c3d-highmem-180-lssd, c3d-highmem-360-lssd
Related information
deploy_weka: includes the actual WEKA deployment, instance template, cloud functions, workflows, job schedulers, secret manager, buckets, and health check.
shared_vpcs (optional): includes VPC sharing the WEKA deployment network with another hosting project. For example, when deploying a private network.
The Terraform package supports the following deployment types:
Public cloud deployments: Require passing the get.weka.io token to Terraform for downloading the WEKA release from the public get.weka.io service. The following examples are provided:
Public VPC
Public VPC with creating a worker pool
Public VPC with an existing public network
Public VPC with multiple clusters
Public VPC with a shared VPC
Public VPC with an existing worker pool and VPC
Private cloud deployments: Require uploading the WEKA release tar file into the yum repository (instances can download the WEKA release from this yum repository). The following examples are provided:
Private VPC with creating a worker pool
Private VPC with an existing network
The following is a basic example in which you provide the minimum details of your cluster, and the Terraform module completes the remaining required resources, such as cluster size, machine type, and networking parameters.
You can use this example as a reference to create the main.tf file.
To deploy a private network, set the parameter private_network = true at the setup_network and deploy_weka module level.
Depending on the required network topology, the following parameters are optional for private networking:
To download the WEKA release from a local bucket, set the local bucket location in the install_url parameter on the deploy_weka module level.
For CentOS 7 only, a distribution repository is required to download kernel headers and additional build software. To auto-configure yum to use a distribution repository, run yum_repo_server.
If a custom image is required, use weka_image_id.
The Terraform package can automate the addition of a Google Cloud Storage bucket for use as object storage.
Procedure
In the main.tf file, add the following fields:
tiering_enable_obs_integration: Set the value to true.
tiering_obs_name: Match the value to an existing bucket in Google Cloud Storage.
tiering_ssd_percent: Set the percentage to your desired value.
Example:
Using port 14000 and the URL /api/v2.
By browsing to: https://<cluster name>:14000/api/v2/docs
Select the three dots on the upper right menu and select REST API.
Browse to api.docs.weka.io and select the REST API version from the definition selector.
In addition, you can generate client code using the OpenAPI client generator and the .json file.
To use the WEKA REST API, provide an access or refresh token.
You can generate an access or refresh token for REST API usage through the CLI or the GUI. See Obtain authentication tokens.
You can also call the login API to obtain access or refresh tokens through the API, providing it with a username and password.
If you already obtained a refresh token, you can use the login/refresh API to refresh the access token.
The response includes the access token (valid for 5 minutes) to use in the other APIs requiring token authentication, along with the refresh token (valid for 1 year), for getting additional access tokens without using the username/password.
Once you obtain an access token, you can call WEKA REST API commands with it. For example, you can query the cluster status:
Adhere to the following guidelines and requirements when deploying the NFS service.
NFSv4 requires a persistent cluster-wide configuration filesystem for the protocol's internal operations. See Additional protocol containers.
An interface group is a configuration framework designed to optimize resiliency among NFS servers. It enables the seamless migration of IP addresses, known as floating IPs, from an unhealthy server to a healthy one, ensuring continuous and uninterrupted service availability.
An interface group consists of the following:
A collection of WEKA servers with a network port for each server, where all the ports must be associated with the same subnets. For resiliency, a minimum of two NFS servers are required.
A collection of floating IPs to support the NFS protocol on specified servers and NICs. All IP addresses are required to be within the same subnet, and the servers must already have static IP addresses on those NICs within that subnet.
A routing configuration for the IPs. The IP addresses must comply with the IP network configuration.
An interface group can have only a single port. Therefore, two interface groups are required to support High Availability (HA) in NFS. Consider the network topology when assigning the other server ports to these interface groups to ensure no single point of failure exists in the switch.
You can define up to 10 different Interface groups. Use multiple interface groups if the cluster connects to multiple subnets. You can set up to 50 servers in each interface group.
The WEKA system automatically distributes the IP addresses evenly on each server and port. If a server fails, the WEKA system redistributes the IP addresses associated with the failed server to other servers.
The WEKA system automatically configures the floating IP addresses used by the NFS service on the appropriate server. Refrain from manually configuring or using the floating IP.
To ensure load balancing between the NFS clients on the different WEKA servers serving NFS, it is recommended to configure a round-robin DNS entry that resolves to the list of floating IPs.
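The even distribution of floating IPs and their reassignment on server failure can be sketched in Python. The helpers distribute_ips and redistribute_on_failure are illustrative only, not WEKA APIs, and this model omits details such as the GARP messages the system actually sends:

```python
from collections import defaultdict

def distribute_ips(ips, servers):
    """Assign floating IPs round-robin so each healthy server gets an even share."""
    assignment = defaultdict(list)
    for i, ip in enumerate(ips):
        assignment[servers[i % len(servers)]].append(ip)
    return dict(assignment)

def redistribute_on_failure(assignment, failed):
    """Reassign a failed server's floating IPs across the remaining servers."""
    orphaned = assignment.pop(failed, [])
    survivors = sorted(assignment)
    for i, ip in enumerate(orphaned):
        assignment[survivors[i % len(survivors)]].append(ip)
    return assignment
```

With six floating IPs across three servers, each server holds two IPs; if one server fails, the two survivors end up with three IPs each.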
Related information
The NFS client mount is configured using the standard NFS stack operating system. The NFS server IP address must point to the round-robin DNS name.
The NFS client permission groups are defined to control the access mapping between the servers and the filesystems. Each NFS client permission group contains the following:
A list of filters for IP addresses or DNS names of clients that can be connected to the WEKA system by NFS.
A collection of rules that control access to specific filesystems.
To allow for performance scalability, add as many servers as possible to the interface group.
Floating IPs facilitate load balancing: they are distributed evenly across all interface group servers and ports, provided the system has 50 or fewer NFS interfaces. However, because a cluster is limited to 50 floating IPs, systems with more than 50 NFS interfaces may not have a floating IP for each interface.
When different clients resolve the DNS name into an IP address, each receives a different address, ensuring that different clients access different servers. This allows the WEKA system to scale and service thousands of clients.
To ensure the resilience of the service if a server fails, all IP addresses associated with the failed server are reassigned to other servers (using the GARP network messages), and the clients reconnect to the new servers without any reconfiguration or service interruption.
For detailed procedures, see the related topics.
Related topics
You can view existing directory quotas and the default quota that are already set.
Procedure
From the menu, select Manage > Directory Quotas.
Select the relevant tab: Directory Quotas or Default Directories Quota.
Select the filesystem in which the directory quotas are already set (through the CLI).
To view all quotas or only the exceeding quotas, select the Exceeding quotas/All quotas switch.
You can update an existing directory quota or the default quota for directories. Updating the default quota only applies to new directories.
Procedure
From the menu, select Manage > Directory Quotas.
Select the relevant tab: Directory Quotas or Default Directories Quota.
Select the filesystem in which the directory quotas are set (through the CLI).
Select the three dots on the right of the required directory. From the menu, select Update.
In the Quota Settings Update dialog, modify the following settings according to your needs:
Hard Quota Limit: The hard quota limit defines the maximum used capacity above the soft quota limit, which prevents writing to the directory.
Soft Quota Limit: The soft quota limit defines the maximum used capacity that triggers a grace period timer. Data can be written to the directory until the grace period ends or the hard quota limit is reached.
Owner: The directory’s owner, such as a user name, email, or Slack ID (up to 48 characters).
Grace Period: A grace period starts when the soft quota limit is reached. After this period, data cannot be written to the directory.
Click Save.
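The interplay of soft limit, hard limit, and grace period described in the dialog above can be sketched as a small state function. This is illustrative logic under stated assumptions, not WEKA's implementation; quota_state and its parameters are hypothetical names:

```python
import time

def quota_state(used, soft, hard, grace_started, grace_period, now=None):
    """Classify directory usage against soft/hard quota limits.
    grace_started is the timestamp when the soft limit was first exceeded."""
    now = time.time() if now is None else now
    if used >= hard:
        return "blocked"            # hard limit reached: writes are rejected
    if used >= soft:
        if grace_started is not None and now - grace_started > grace_period:
            return "blocked"        # grace period expired after exceeding soft limit
        return "grace"              # over soft limit, still within the grace period
    return "ok"
```

Writes succeed while the state is "ok" or "grace" and fail once the hard limit is hit or the grace period expires.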
You can remove the default quota settings for new directories created in a specific filesystem. The quota of existing directories is not affected.
Procedure
From the menu, select Manage > Directory Quotas.
Select the Default Directories Quota tab.
Select the filesystem in which the default quotas are already set (through the CLI).
Select the three dots on the right of the required default quota. From the menu, select Remove.
In the Default Quota Deletion message, select Yes.

This topic provides an overview of the automated tools and workflow paths for installing and configuring the WEKA software on a group of bare metal servers (on-premises environment).
WEKA provides a variety of tools for automating the WEKA software installation process. These include:
WEKA Management Station (WMS)
WEKA Software Appliance (WSA)
WEKA Configurator
WMS can be used to speed up the WEKA Software Appliance (WSA) deployment on the supported bare metal servers: Dell, HPE, Lenovo, and Supermicro.
This is the preferred installation method: the simplest and fastest way to get from bare metal to a working WEKA cluster. If you cannot meet the prerequisites for deploying WMS, use the WSA package.
WSA is a WEKA server image deployed with a preconfigured operating system. This method significantly speeds up the OS and WEKA cluster installation and provides a WEKA-supported operating environment.
After installation, the server is in STEM mode, which is the initial state before the configuration.
If you cannot use the WSA for WEKA cluster installation, review the requirements and follow the instructions for deploying WEKA using the WEKA Configurator.
The WEKA Configurator automatically generates the WEKA Cluster configurations (config.sh) to apply on the cluster servers.
The following illustrates a high-level deployment workflow on a group of bare metal servers.
The following summarizes the three workflow paths to install and configure the WEKA cluster.
Path A: Automated with WMS and WSA
Path B: Automated with WSA only
Path C: Manual installation and configuration
Select the path applicable to your needs.
This method is the most preferable option to install the WEKA cluster assuming the prerequisites are met. For example, the bare metal servers are Dell, HPE, Lenovo or Supermicro, the OS (Rocky 8.6) meets your needs, and a physical server is available for installing the WMS.
If the OS (Rocky 8.6) meets your needs but the bare metal servers are not Dell, HPE, Lenovo, or Supermicro, this is the second preferred option to install and configure the WEKA cluster.
Manually install and configure the WEKA cluster if:
(all paths)
During the deployment of the WEKA system, the EC2 instances require access to the internet to download the WEKA software. For this reason, you need to deploy the WEKA system in one of the following deployment types in AWS:
Public subnet: Use a public subnet within your VPC with an internet gateway, and allow public IP addresses for your instances.
Private subnet with NAT Gateway: Create a private subnet with a route to a NAT gateway with an elastic IP in the public subnet.
Private subnet using WEKA VPC endpoint: Requires the creation of a (once per VPC) that creates the necessary resources.
Private subnet using custom proxy: Requires the creation of a (once per VPC) that creates the necessary resources.
The following diagrams illustrate the components of the public subnet and private subnet with NAT gateway deployment types in AWS.
By default, AWS does not provide enough vCPUs to install a WEKA system. Use the Limits Calculator for your region from the AWS EC2 dashboard.
Procedure
On the AWS EC2 dashboard, select the Limits option from the left menu.
In the Limits Calculator, do the following:
In the Current Limit, set the number of vCPUs you currently have for a region.
In the vCPUs needed, set the required number of vCPUs for your specific deployment.
Select the Request on-demand limit increase link to get more vCPUs.
The following example shows the required vCPUs for a six-server cluster with two clients of type i3en.2xlarge. This is the smallest instance type for a WEKA system deployment.
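The arithmetic behind this example can be sketched as follows. An i3en.2xlarge instance provides 8 vCPUs, so six backend servers plus two clients require 64 vCPUs; required_vcpus is an illustrative helper, not an AWS or WEKA API:

```python
# vCPUs per instance type (i3en.2xlarge provides 8 vCPUs).
VCPUS = {"i3en.2xlarge": 8}

def required_vcpus(instances):
    """Sum the vCPUs needed for a planned deployment.
    `instances` maps instance type -> instance count."""
    return sum(VCPUS[t] * n for t, n in instances.items())
```

Compare this total against your current regional limit in the Limits Calculator to decide whether to request an increase.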
The WEKA system is a distributed cluster protected against two or four concurrent failure-domain failures, providing fast rebuild times. For details, see the section.
If an instance failure occurs, the WEKA system rebuilds the data. Add a new instance to the cluster to regain the compute and storage capacity lost to the instance failure.
It is advisable to use periodic (incremental) snapshots to back up the data and protect it from multiple EC2 instance failures.
The recovery point objective (RPO) is determined by the cadence at which snapshots are taken and uploaded to S3. The RPO varies with the type of data, regulations, and company policies, but it is advisable to upload snapshots of the critical filesystems at least daily. For details, see the section.
If a failure occurs and it is required to recover from a backup, spin up a cluster using the or , and create filesystems from those snapshots. You do not need to wait for the data to reach the EC2 volumes. It is instantly accessible through S3. The recovery time objective (RTO) for this operation mainly depends on the time it takes to deploy the and is typically below 30 min.
See the section.
Using WEKA snapshots uploaded to S3 combined with S3 cross-region replication enables protection from an AWS region failure.
For security reasons, it is advisable to rotate the SSH keys used for the EC2 instances.
To rotate the SSH keys, follow these steps:
, and
.
Related topic
In a WEKA cluster, the frontend container provides the default POSIX protocol, serving as the primary access point for the distributed filesystem. You can also define protocol containers for NFS, SMB, and S3 clients.
To configure protocol containers, you have two options for creating a cluster for the specified protocol:
Set up protocol services on existing backend servers.
Prepare additional dedicated servers for the protocol containers.
It is required to have a dedicated filesystem that stores persistent protocol configurations. This filesystem is essential for coordinating coherent simultaneous access to files from multiple servers. It is advisable to assign a meaningful name to this configuration filesystem, such as .config_fs. Set the total capacity to 100 GB and avoid additional options like tiering and thin-provisioning.
With this option, you configure the existing cluster to provide the required protocol containers. The following topics guide you through the configuration for each protocol:
Using dedicated protocol servers enhances the cluster's capabilities and addresses diverse use cases. Each dedicated protocol server in the cluster can host one of these additional protocol containers alongside the existing frontend container.
These dedicated protocol servers function as complete and permanent members of the WEKA cluster. They run essential processes to access WEKA filesystems and incorporate switches supporting the protocols.
Dedicated protocol servers offer the following advantages:
Optimized performance: Leverage dedicated CPU resources for tailored and efficient performance, optimizing overall resource usage.
Independent protocol scaling: Scale specific protocols independently, mitigating resource contention and ensuring consistent performance across the cluster.
Procedure
Install the WEKA software on the dedicated protocol servers: Do one of the following:
Follow the default method as specified in .
Use the WEKA agent to install from a working backend. The following commands demonstrate this method:
Check the dedicated protocol servers: The servers join the cluster and can be verified using the command:
With dedicated protocol servers in place, the next step is to manage individual protocols.
Related topics
This page describes the system behavior when tiering, accessing or deleting data in tiered filesystems.
In tiered filesystems, the WEKA system optimizes storage efficiency and manages storage resources effectively by:
Tiering only infrequently accessed portions of files (warm data), keeping hot data on SSDs.
Efficiently bundling subsets of different files (into 64 MB objects) and tiering them to object stores, resulting in significant performance enhancements.
Retrieving only the necessary data from the object store when accessing it, regardless of the entire object it was originally tiered with.
Reclaiming logically freed data occurs when data is modified or deleted and is not used by any snapshots. Reclamation is a process of freeing up storage space that was previously allocated to data that is no longer needed.
For logically freed data that resides on the SSD, the WEKA system immediately deletes the data from the SSD, leaving the physical space reclamation for the SSD erasure technique.
Object store space reclamation is an important process that efficiently manages data stored on object storage.
WEKA organizes files into 64 MB objects for tiering. Each object can contain data from multiple files. Files smaller than 1 MB are consolidated into a single 64 MB object. For larger files, their parts are distributed across multiple objects. As a result, when a file is deleted (or updated and is not used by any snapshots), the space within one or more objects is marked as available for reclamation. However, the deletion of these objects only occurs under specific conditions.
Deleting related objects happens when all associated files are deleted, allowing for complete space reclamation within the object or during the reclamation process. Reclamation entails reading an eligible object from object storage and packing the active portions (representing data from undeleted files) with sections from other files that must be written to the object store. The resulting object is then written back to the object store, freeing up reclaimed space.
WEKA automates the reclamation process by monitoring the filesystems. When the reclaimable space within a filesystem exceeds 13%, the reclamation process begins. It continues until the total reclaimable space drops below 7%. This mechanism prevents write amplifications, allows time for higher portions of eligible 64 MB objects to become logically free, and prevents unnecessary object storage workload for small space reclamation. It's important to note that reclamation is only executed for objects with reclaimable space exceeding 5% within that object.
To calculate the amount of space that can be reclaimed, consider the following examples:
If we write 1 TB of data, and 15% of that space can be reclaimed, we have 150 GB of reclaimable space.
If we write 10 TB of data, and 5% of that space can be reclaimed, we have 500 GB of reclaimable space.
For regular filesystems where files are frequently deleted or updated, this behavior can result in the consumption of 7% to 13% more object store space than initially expected based on the total size of all files written to that filesystem. When planning object storage capacity or configuring usage alerts, it's essential to account for this additional space. Remember that this percentage may increase during periods of high object store usage or when data/snapshots are frequently deleted. Over time, it will return to the normal threshold as the load/burst is reduced.
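The reclaimable-space examples and the 13%/7% start/stop thresholds above can be modeled in a few lines of Python. These helpers are illustrative sketches of the documented behavior, not WEKA internals:

```python
# Documented thresholds: reclamation starts above 13% reclaimable space
# and continues until it drops below 7% (hysteresis).
START_THRESHOLD = 0.13
STOP_THRESHOLD = 0.07

def reclaimable_gb(written_tb, reclaimable_fraction):
    """Reclaimable space in GB for a given amount of written data (1 TB = 1000 GB)."""
    return written_tb * 1000 * reclaimable_fraction

def reclamation_active(reclaimable_fraction, currently_running):
    """Hysteresis: start above 13%, keep running until below 7%."""
    if currently_running:
        return reclaimable_fraction >= STOP_THRESHOLD
    return reclaimable_fraction > START_THRESHOLD
```

So 1 TB written with 15% reclaimable yields 150 GB of reclaimable space (triggering reclamation), while 10 TB with 5% reclaimable yields 500 GB yet stays below the start threshold.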
If the filesystem was created from a snapshot, only the data uploaded to the object store after the new filesystem was created can be reclaimed. Pre-existing data from the original snapshot is unreclaimable. To ensure all data is reclaimable, migrate the restored filesystem to a new bucket. For details, see .
Run the weka fs tier capacity command to retrieve a comprehensive listing of data capacities associated with object store buckets per filesystem.
If the filesystem was created from an uploaded snapshot, data from the original filesystem is not accounted for in the displayed capacity.
Example:
To list the data capacities of a specific filesystem, add the option --filesystem <filesystem name>.
Example:
When WEKA uploads objects to the object store, it assigns tags to categorize them. These tags are crucial because they enable the customer to implement specific lifecycle management rules in the object store based on the assigned tags.
For example, you can transfer objects of a specific filesystem when interacting with .
To enable upload tags, set it when adding or updating the object store bucket. For details, see the following:
Using the GUI:
, or
by selecting Enable Upload Tags in the Advanced section.
Using the CLI:
The following table indicates the additional tags WEKA adds to the object when using object tagging:
The object store must support S3 object-tagging and might require additional permissions to use object tagging.
For example, the following extra permissions are required in AWS S3:
s3:PutObjectTagging
s3:DeleteObjectTagging
This page describes the Snap-To-Object feature, which enables the committing of all the data of a specific snapshot to an object store.
Using the GUI, you can:
Related topics
To learn about how to view, create, update, delete, and restore snapshots, see .
You can upload a snapshot to a local, remote, or both object store buckets.
Procedure
From the menu, select Manage > Snapshots.
Select the three dots on the right of the required snapshot. From the menu, select Upload To Object Store.
A relevant message appears if a local or remote object store bucket is not attached to the filesystem. It enables opening a dialog to select an object store bucket and attach it to the filesystem. To add an object store, select Yes.
In the Attach Object Store to Filesystem dialog, select the object store bucket to attach the snapshot.
Select Save. The snapshot is uploaded to the target object store bucket.
Copy the snapshot locator:
Select the three dots on the right of the required snapshot, and select Copy Locator to Clipboard.
Related topics
You can create (or recreate) a filesystem from an uploaded snapshot, for example, when you need to migrate the filesystem data from one cluster to another.
When recreating a filesystem from a snapshot, adhere to the following guidelines:
Pay attention to upload and download costs: Due to the bandwidth characteristics and potential costs when interacting with remote object stores, it is not allowed to download a filesystem from a remote object store bucket. If a snapshot on a local object store bucket exists, it is advisable to use that one. Otherwise, follow the procedure in the topic using the CLI.
Use the same KMS master key: For an encrypted filesystem, to decrypt the snapshot data, use the same KMS master key as used in the encrypted filesystem. See the topic.
Before you begin
Verify that the locator of the required snapshot (from the source cluster) is available (see the last step in the procedure).
Ensure the object store is attached to the destination cluster.
Procedure
Connect to the destination cluster where you want to create the filesystem.
From the menu, select Manage > Filesystems, and select +Create.
In the Create Filesystem, do the following:
Related topics
The Synchronous Snap feature, which allows incremental snapshots to be downloaded from an object store, was temporarily disabled in version 4.2.3. It has been re-enabled in version 4.3.0.
Explore the principles for data lifecycle management and how data storage is managed in SSD-only and tiered WEKA system configurations.
elfutils-libelf-devel
gcc
glibc-headers
glibc-devel
make
perl
rpcbind
xfsprogs
kernel-devel
sssd

libelf-dev
linux-headers-$(uname -r)
gcc
make
perl
python2-minimal
rpcbind
xfsprogs
sssd
provider "google" {
  region  = "europe-west1"
  project = "PROJECT_ID"
}

module "weka_deployment" {
  source               = "weka/weka/gcp"
  version              = "4.0.0"
  cluster_name         = "my_cluster_name"
  project_id           = "PROJECT_ID"
  prefix               = "my_prefix"
  region               = "europe-west1"
  zone                 = "europe-west1-b"
  cluster_size         = 7
  nvmes_number         = 2
  get_weka_io_token    = "getwekatoken"
  machine_type         = "c2-standard-8"
  subnets_range        = ["10.222.0.0/24", "10.223.0.0/24", "10.224.0.0/24", "10.225.0.0/24"]
  allow_ssh_cidrs      = ["0.0.0.0/0"]
  allow_weka_api_cidrs = ["0.0.0.0/0"]
}

output "weka_cluster" {
  value = module.weka_deployment
}

tiering_enable_obs_integration = true
tiering_obs_name               = "myBucketName"
tiering_ssd_percent            = 20

import requests
url = "https://weka01:14000/api/v2/login"
payload="{\n \"username\": \"admin\",\n \"password\": \"admin\"\n}"
headers = {
    'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
import requests
url = "https://weka01:14000/api/v2/login/refresh"
payload="{\n \"refresh_token\": \"REPLACE-WITH-REFRESH-TOKEN\"\n}"
headers = {
    'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
{
  "data": [
    {
      "access_token": "ACCESS-TOKEN",
      "token_type": "Bearer",
      "expires_in": 300,
      "refresh_token": "REFRESH-TOKEN"
    }
  ]
}

import requests
url = "https://weka01:14000/api/v2/cluster"
payload={}
headers = {
    'Authorization': 'Bearer REPLACE-WITH-ACCESS-TOKEN'
}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
Private VPC with multiple clusters
Private VPC with a shared VPC
N2D
n2d-standard-32, n2d-standard-64, n2d-highmem-32, n2d-highmem-64
G2
g2-standard-4, g2-standard-8, g2-standard-12, g2-standard-16, g2-standard-24, g2-standard-32, g2-standard-48, g2-standard-96
M3
m3-ultramem-32, m3-ultramem-64, m3-ultramem-128, m3-megamem-64, m3-megamem-128
N2
n2-standard-4, n2-standard-8, n2-standard-16, n2-standard-32, n2-standard-48, n2-standard-64, n2-standard-80, n2-standard-96, n2-standard-128, n2-highmem-4, n2-highmem-8, n2-highmem-16, n2-highmem-32, n2-highmem-48, n2-highmem-64, n2-highmem-80, n2-highmem-96, n2-highmem-128, n2-highcpu-8, n2-highcpu-16, n2-highcpu-32, n2-highcpu-48, n2-highcpu-64, n2-highcpu-80, n2-highcpu-96
N2D
n2d-standard-4, n2d-standard-8, n2d-standard-16, n2d-standard-32, n2d-standard-48, n2d-standard-64, n2d-standard-80, n2d-standard-96, n2d-standard-224, n2d-highmem-4, n2d-highmem-8, n2d-highmem-16, n2d-highmem-32, n2d-highmem-48, n2d-highmem-64, n2d-highmem-80, n2d-highmem-96, n2d-highcpu-8, n2d-highcpu-16, n2d-highcpu-32, n2d-highcpu-48, n2d-highcpu-64, n2d-highcpu-80, n2d-highcpu-96, n2d-highcpu-128, n2d-highcpu-224
N4
n4-standard-4, n4-standard-8, n4-standard-16, n4-standard-32, n4-standard-48, n4-standard-64, n4-standard-80, n4-highcpu-4, n4-highcpu-8, n4-highcpu-16, n4-highcpu-32, n4-highcpu-48, n4-highcpu-64, n4-highcpu-80, n4-highmem-4, n4-highmem-8, n4-highmem-16, n4-highmem-32, n4-highmem-48, n4-highmem-64, n4-highmem-80













Sometimes, it's necessary to access previously-tiered files quickly. In such situations, you can request the WEKA system to fetch the files back to the SSD without accessing them directly.
Command: weka fs tier fetch
Use the following command to fetch files:
weka fs tier fetch <path> [-v]
Parameters
path*
A comma-separated list of file paths.
-v, --verbose
Show fetch requests as they are submitted.
Off
To fetch a directory that contains a large number of files, it is recommended to use the xargs command in a similar manner as follows:
To ensure effective fetch, adhere to the following:
Free SSD capacity: The SSD has sufficient free capacity to retain the fetched filesystems.
Tiering policy: The tiering policy may release some of the files back to the object store after they have been fetched, or during the fetch if it takes longer than expected. The tiering policy must be long enough to allow for the fetch to complete and the data to be accessed before it is released again.
Using the manual release command, you can clear SSD space in advance (for example, to shrink one filesystem's SSD capacity in favor of another without releasing important data, or to free SSD space held by other files for a job that needs it). The metadata remains on the SSD for fast traversal over files and directories, but the data is marked for release and is released to the object store as soon as possible, ahead of any files scheduled for release by other lifecycle policies.
Command: weka fs tier release
Use the following command to release files:
weka fs tier release <path>
Parameters
path*
A comma-separated list of file paths.
-v, --verbose
Show release requests as they are submitted.
Off
To release a directory that contains a large number of files, it is recommended to use the xargs command in a similar manner, as follows:
Depending on the retention period in the tiering policy, files can be found on the object store, the SSD, or both, as follows:
Before the file is tiered to the object store, it is found in the SSD.
During data tiering, the tiered data is on the SSD (read cache) and the object store.
Once the entire file data is tiered and the retention period has passed, the complete file is found in the object store only.
Use this command to find the file location during the data lifecycle operations.
Command: weka fs tier location
Use the following command to find files:
weka fs tier location <path>
For multiple paths, use the following command:
weka fs tier location <paths>
To find all files in a single directory, use the following command:
weka fs tier location *
Parameters
path*
A path to get information about.
paths
Space-separated list of paths to get information about.
Before the file named image is tiered to the object store, it is found in the SSD (WRITE-CACHE).
The file is tiered and the retention period has not passed yet, so the file is found in the SSD (READ-CACHE) and the object store.
The file is tiered and the retention period has passed, so the file is found in the object store only.
# fetch all files in a directory
find -L <directory path> -type f | xargs -r -n512 -P64 weka fs tier fetch -v
# release all files in a directory
find -L <directory path> -type f | xargs -r -n512 -P64 weka fs tier release
# similarly, a file containing a list of paths can be used
cat file-list | xargs -P32 -n200 weka fs tier release
[root@kenny-0 weka] 2023-07-13 14:57:11 $ weka fs tier location image
PATH FILE TYPE FILE SIZE CAPACITY IN SSD (WRITE-CACHE) CAPACITY IN SSD (READ-CACHE) CAPACITY IN OBJECT STORAGE CAPACITY IN REMOTE STORAGE
image regular 102.39 MB 102.39 MB 0 B 0 B 0 B
[root@kenny-0 weka] 2023-07-13 14:58:14 $ weka fs tier location image
PATH FILE TYPE FILE SIZE CAPACITY IN SSD (WRITE-CACHE) CAPACITY IN SSD (READ-CACHE) CAPACITY IN OBJECT STORAGE CAPACITY IN REMOTE STORAGE
image regular 102.39 MB 0 B 102.39 MB 102.39 MB 0 B
[root@kenny-0 weka] 2023-07-13 14:59:14 $ weka fs tier location image
PATH FILE TYPE FILE SIZE CAPACITY IN SSD (WRITE-CACHE) CAPACITY IN SSD (READ-CACHE) CAPACITY IN OBJECT STORAGE CAPACITY IN REMOTE STORAGE
image regular 102.39 MB 0 B 0 B 102.39 MB 0 B
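When checking many files at once, the location output can be summarized by where each file's data resides. The following is a sketch that classifies the three example lines above (the sample lines are reproduced from the example output; the column positions are assumed from that format):

```shell
# Classify sample 'weka fs tier location' data lines by data location.
# Fields: $1 path, $2 type, $3-$4 size, $5-$6 SSD write-cache,
# $7-$8 SSD read-cache, $9-$10 object store, $11-$12 remote storage.
summary=$(printf '%s\n' \
  'image regular 102.39 MB 102.39 MB 0 B 0 B 0 B' \
  'image regular 102.39 MB 0 B 102.39 MB 102.39 MB 0 B' \
  'image regular 102.39 MB 0 B 0 B 102.39 MB 0 B' |
awk '{
  ssd = ($5 != "0" || $7 != "0")   # any SSD write- or read-cache capacity
  obs = ($9 != "0")                # any object-store capacity
  if (ssd && obs)      print $1 ": SSD + object store"
  else if (ssd)        print $1 ": SSD only"
  else if (obs)        print $1 ": object store only"
}')
echo "$summary"
```

The three lines print as "SSD only", "SSD + object store", and "object store only", matching the three lifecycle stages described above.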
The bare metal servers are not HPE, Dell, or Supermicro, or
You want to use a different OS than Rocky 8.6, or
You need special customization, where you cannot use the WEKA Configurator.
Can we choose the number of cores and containers to use?
Yes. During post-install configuration. See Configure a WEKA cluster with the WEKA Configurator.
Will the ISO setup mirror RAID on the dual-boot SSDs?
Yes, automatically.
Can I set up WEKA with 8 SSDs per node even though I have 12 installed?
Not automatically. Pull the drives or manually adjust the configuration before running it (edit the config.sh output from wekaconfig).
What must be done to direct the ISO to set up for High Availability (HA)? How about no HA?
That’s determined in wekaconfig.
If there are multiple NIC cards (for WEKA and Ceph), how to choose the NICs to use for the WEKA backend server?
The WSA is not intended for that configuration directly. However, if you place them on different subnets or networks, you can select which subnet to use: one, the other, or both.
With the ISO, are there different licensing processes? Or is it the standard to get cluster GUID and storage size and input it into the Weka webpage to get a license key and then input that key on the command prompt?
Licensing has not changed.
Does the ISO set up the IP address for Admin or the high-speed WEKA backend network?
The WMS will do that when it deploys the WSA.
What needs to be passed in to configure Ethernet or Infiniband?
Select the network type from the list in WMS.
Can all the parameters the ISO needs be in the script?
No. We use Ansible after installation to make the settings.
How do you use the kickstart file in the ISO?
Use the WMS. The kickstart file was written to work with WMS.
What additional settings must be configured on WEKA after the ISO installation?
There are no required settings that need to be manually set if you use the WMS.



--net=eth1/192.168.114.XXX/24

Ensure adequate network interfaces are available on your dedicated protocol servers, particularly if you intend to dedicate NICs to WEKA. This precaution ensures a smooth, optimized configuration that aligns with WEKA's performance recommendations.



The starting point for the reclamation process differs in each example. In example 1, reclamation begins at 130 GB (13%), while in example 2, it doesn't start. This is important to note because even though there is more total reclaimable space in example 2, the process starts later.
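The threshold check above can be sketched with hypothetical figures matching example 1 (a 1,000 GB total with 130 GB reclaimable, and an assumed 10% reclaimable threshold):

```shell
# Hypothetical figures: reclamation starts once reclaimable% exceeds the threshold
total_gb=1000        # total consumed capacity on the object store
reclaimable_gb=130   # space that reclamation could free
threshold_pct=10     # assumed reclaimable threshold

pct=$(( reclaimable_gb * 100 / total_gb ))   # integer percentage: 13
if [ "$pct" -gt "$threshold_pct" ]; then
  echo "reclamation starts (${pct}% > ${threshold_pct}%)"
else
  echo "reclamation idle (${pct}% <= ${threshold_pct}%)"
fi
```

In example 2, the same check would report idle because the reclaimable percentage stays at or below the threshold even though the absolute reclaimable space is larger.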
, or
by setting the enable-upload-tags parameter in weka fs tier s3 add/update commands.
wekaBlobType
The WEKA-internal type representation of the object.
Possible values:
DATA, METADATA, METAMETADATA, LOCATOR, RELOCATIONS
wekaFsId
A unique filesystem ID (a combination of the filesystem ID and the cluster GUID).
wekaGuid
The cluster GUID.
wekaFsName
The filesystem name that uploaded this object.

Select Create From Uploaded Snapshot (it only appears when you select Tiering).
In the Object Store Bucket Locator, paste the snapshot locator copied from the source cluster.
In the Snapshot Name, set a meaningful snapshot name to override the default (the uploaded snapshot name).
In the Access Point, set a meaningful access point name to override the default (the uploaded access point name) for the directory that serves as the snapshot's access point.
Select Save.





Object-store systems can be external to the WEKA system, such as third-party solutions or cloud services, or part of the WEKA system itself.
The WEKA system can be configured as an SSD-only system or as a tiered data management system consisting of SSDs and object stores. By nature, SSDs provide high-performance, low-latency storage, while object stores trade performance and latency for the most cost-effective storage available.
Consequently, users focused purely on high performance should consider an SSD-only WEKA system configuration, while users seeking to balance performance and cost should consider a tiered data management system, with the assurance that the WEKA system controls the placement of hot data on SSDs and warm data on object stores, optimizing both the overall user experience and the budget.
In tiered WEKA system configurations, there are various locations for data storage as follows:
Metadata is stored only on SSDs.
Writing new files, adding data to existing files, or modifying the content of files is always performed on the SSD, irrespective of whether the file is stored on the SSD or tiered to an object store.
When reading the content of a file, data can be accessed from either the SSD (if it is available on the SSD) or promoted from the object store (if it is not available on the SSD).
This approach of storing data on one of two possible media requires system planning to ensure that the most commonly used data (hot data) resides on the SSD for high performance, while less-used data (warm data) is stored on the object store.
In the WEKA system, this determination of the data storage media is an entirely seamless, automatic, and transparent process, with users and applications unaware of the transfer of data from SSDs to object stores or from object stores to SSDs.
The data is always accessible through the same strongly-consistent POSIX filesystem API, irrespective of where it is stored. The actual storage media affects only latency, throughput, and IOPS.
Furthermore, the WEKA system tiers data into chunks rather than complete files. This enables the intelligent tiering of subsets of a file (and not only complete files) between SSDs and object stores.
The network resources allocated to the object store connections can be controlled. This enables cost control when using cloud-based object storage services since the cost of data stored in the cloud depends on the quantity stored and the number of requests for access made.
Data management represents the media being used for the storage of data. In tiered WEKA system configurations, data can exist in one of three possible states:
SSD-only: When data is created, it exists only on the SSDs.
SSD-cached: A tiered copy of the data exists on both the SSD and the object store.
Object store only: Data resides only on the object store.
The data lifecycle flow diagram describes the progression of data through various stages:
Tiering: This process involves data migration from the SSD to the object store, creating a duplicate copy. The criteria for this transition are governed by a user-specified, temporal policy known as the Tiering Cue.
Releasing: This stage entails removing data from the SSD and retaining only the copy in the object store. The need for additional SSD storage space typically triggers this action. The guidelines for this data release are dictated by a user-defined time-based policy referred to as the Retention Period.
Promoting: This final stage involves transferring data from the object store to the SSD to facilitate data access.
Data that resides solely on the object store must first be promoted back to the SSD before it can be accessed. This ensures that the data is readily available for reading.
Within the WEKA system, file modifications are not executed as in-place writes. Instead, they are written to a new area on the SSD, and the corresponding metadata is updated accordingly. As a result, write operations are never linked with operations on the object store. This approach ensures data integrity and efficient use of storage resources.
All writing in the WEKA system is performed on SSDs. The data residing on SSDs is hot (meaning it is currently in use). In tiered WEKA configurations, SSDs have three primary roles in accelerating performance: metadata processing, a staging area for writing, and a cache for reading performance.
Since filesystem metadata is, by nature, a large number of update operations, each with a small number of bytes, the embedding of metadata on SSDs accelerates file operations in the WEKA system.
Writing directly to an object store incurs high latency while waiting for acknowledgment that the data has been written, so the WEKA system never writes directly to object stores. Instead, writes go to the SSDs, with very low latency and much better performance. Consequently, in the WEKA system, the SSDs serve as a staging area, providing a buffer large enough to hold written data until it is later tiered to the object store. Upon completion of writing, the WEKA system tiers the data to the object store and releases it from the SSD.
Recently accessed or modified data is stored on SSDs, and most read operations are of such data and served from SSDs. This is based on a single, significant LRU clearing policy for the cache that ensures optimal read performance.
The WEKA system includes user-defined policies that serve as guidelines to control data storage management. They are derived from several factors:
The rate at which data is written to the system and the quantity of data.
The capacity of the SSDs configured to the WEKA system.
The network speed between the WEKA system and the object store, and the performance capabilities of the object store, such as how much data it can contain.
Filesystem groups are used to define these policies; a tiered filesystem is placed in a filesystem group according to the desired policy.
For tiered filesystems, define the following parameters per filesystem:
The size of the filesystem.
The amount of filesystem data to be stored on the SSD.
Define the following parameters per filesystem group:
The Drive Retention Period Policy is a time-based policy that sets the target time for data to remain on the SSD after creation, modification, or access before it is released from the SSD, even if it is already tiered to the object store (for metadata processing and SSD caching purposes). This is only a target; the actual release schedule depends on the amount of available space.
The Tiering Cue Policy is a time-based policy that determines the minimum time data remains on an SSD before it is considered for tiering to the object store. As a rule of thumb, configure it to a third of the Retention Period; in most cases, this works well. The Tiering Cue matters because it is pointless to tier a file that is about to be modified or deleted.
Example
When writing log files that are processed every month but retained forever, it is recommended to define a Retention Period of one month, a Tiering Cue of one day, and ensure sufficient SSD capacity to hold one month of log files.
When storing genomic data, which is frequently accessed during the first three months after creation, requires a scratch space for six hours of processing, and requires output to be retained forever, it is recommended to define a Retention Period of three months and to allocate an SSD capacity that is sufficient for three months of output data and the scratch space. The Tiering Cue must be defined as one day to avoid a situation where the scratch space data is tiered to an object store and released from the SSD immediately afterward.
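The log-file example above implies a simple capacity calculation. A sketch with a hypothetical daily log volume (the 50 GB/day figure is an assumption for illustration):

```shell
# Hypothetical daily volume: a one-month Retention Period means the SSD must
# hold roughly a month of log files before they are released to the object store
daily_log_gb=50
retention_days=30
ssd_needed_gb=$(( daily_log_gb * retention_days ))
echo "Provision at least ${ssd_needed_gb} GB of SSD for the log filesystem"
```

The same arithmetic applies to the genomic example, with the scratch space added on top of three months of output data.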
Even when time-based policies are in place, you can override them using a unique mount option called obs_direct. When this option is used, any files created or written from the associated mount point are prioritized for release immediately without first considering other file retention policies.
For a more in-depth explanation, refer to Advanced Data Lifecycle Management.
The WSA (WEKA Software Appliance) is an alternative method to install WEKA software on bare-metal servers. The WSA simplifies and accelerates the installation.
WSA is a package consisting of a base version of Linux (based on Rocky 8.6), network drivers and other required packages, WEKA software, and various diagnostic and configuration tools. Using the WSA facilitates the post-installation administration, security, and other KB updates controlled and distributed by WEKA, following a Long Term Support (LTS) plan.
The WSA generally works like any OS install disk (Linux/Windows).
Do not attempt to install the WSA using PXE boot. The WSA has a specific kickstart methodology only compatible with WMS or manual boot from ISO.
A physical server that meets the following requirements:
Boot drives: One or two identical boot drives as an installation target.
A system with two identical boot drives has the OS installed on mirrored partitions (LVM).
A system with one drive has a simple partition.
Before deploying the WSA, adhere to the following:
Download the latest release of the WSA package from the dashboard.
The root password is WekaService
The WEKA user password is weka.io123
Boot the server from the WSA image. The following are some options to do that:
Copy the WSA image to an appropriate location so that the server’s BMC can mount it to a virtual CDROM/DVD.
Depending on the server manufacturer, consult the documentation for the server’s BMC (for example, iLO, iDRAC, and IPMI) for detailed instructions on mounting and booting from a bootable WSA image, such as:
A workstation or laptop, serving the image to the BMC through the web browser.
An SMB share in a Windows server or a Samba server.
Once you boot the server, the WSA installs the WEKA OS, drivers, WEKA software, and other packages automatically and unattended (no human interaction required).
Depending on network speed, this can take about 10-60 mins (or more) per server.
Once the WSA installation is complete and the server is rebooted, configure the WSA.
Log in to the server using one of the following methods:
BMC's Console
Cockpit web interface on port 9090
Username/password: root/WekaService.
Run the OS through the BMC’s Console. See the specific manufacturer’s BMC documentation.
Run the OS through the Cockpit Web Interface on port 9090 of the OS management network.
If you don’t know the WSA hostname or IP address, go to the console and press the Return key a couple of times until it displays the URL of the WSA OS Web Console (Cockpit) on port 9090.
When the server boots for the first time, the WSA automatically installs the WEKA software on the bare metal servers unattended.
When the server reboots, it runs WEKA in STEM mode.
Set the following networking details:
Hostname
IP addresses for network interfaces, including:
Each server has the WEKA Tools pre-installed in /opt/tools, including:
wekanetperf: This tool runs iperf between the servers to ensure line rate can be achieved.
wekachecker: This tool checks a variety of network settings and more. For details, see .
bios_tool
Verify that the WEKA software is installed and running on the server.
Log in to the server and run the command weka status.
The server provides a status report indicating the system is in STEM mode, and is ready for the cluster configuration.
The Terraform package includes a main.tf file you create according to your deployment needs.
Applying the created main.tf file performs the following:
Creates VPC networks and subnets on the GCP project.
Deploys GCP instances.
Installs the WEKA software.
Configures the WEKA cluster.
Creates additional GCP objects.
Before installing the WEKA software on GCP, the following prerequisites must be met:
: It is pre-installed if you use the Cloud Shell.
: It is pre-installed if you use the Cloud Shell. Ensure the Terraform version meets the minimum required version specified in the section of the GCP-WEKA deployment Terraform package.
Initialize the Terraform module using terraform init from the local directory. This command initializes a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, and more.
Review the and use it as a reference for creating the main.tf according to your deployment specifics on GCP.
Tailor the main.tf file to create SMB-W or NFS protocol clusters by adding the relevant code snippet. Adjust parameters like the number of gateways, instance types, domain name, and share naming:
SMB-W
NFS
Add WEKA POSIX clients (optional): If needed, add to support your workload by incorporating the specified variables into the main.tf file:
Once you complete the main.tf settings, apply it: Run terraform apply.
After applying the main.tf, the Terraform module updates the configuration as follows:
Service account creation:
Format of the service account name: <prefix>-deployment@<project name>.iam.gserviceaccount.com
Assigned roles:
Additional roles can be assigned to the created service account (if working with relevant resources):
To create a worker pool:
To create a new bucket (for Terraform state and WEKA OBS):
To use an existing bucket (for Terraform state and WEKA OBS):
Upgrading the WEKA version on the cloud is similar to the standard WEKA upgrade process. However, in a cloud configured with auto-scaling, the new instances created by the scale-up must be configured with the new WEKA version.
Before you begin
Ensure the cluster does not undergo a scale-up or scale-down process before and during the WEKA version upgrade.
Procedure
Perform the upgrade process. See .
Update the weka_version parameter in the main.tf file.
Run terraform apply.
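The version bump can be scripted. The following is a sketch against a scratch copy standing in for the real main.tf (the file path and version numbers are hypothetical):

```shell
# Create a scratch copy standing in for the real main.tf
printf 'weka_version = "4.2.7.64"\n' > /tmp/main.tf.example
# Bump the version in place; with the real main.tf you would then run `terraform apply`
sed -i 's/weka_version = ".*"/weka_version = "4.3.0"/' /tmp/main.tf.example
cat /tmp/main.tf.example
```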
If a rollback is required or the WEKA cluster is no longer required on GCP, first terminate the WEKA cluster and then use the terraform destroy action.
The termination of the WEKA cluster can also be used if you need to retain the GCP resources (such as VPCs and cloud functions to save time on the next deployment) and then deploy a new WEKA cluster when you are ready.
To terminate the WEKA cluster, run the following command (replace the trigger_url with the actual trigger URL and Cluster_Name with the actual cluster name):
If you do not know the trigger URL or cluster name, run the terraform output command to display them.
Once the WEKA cluster is terminated, you can deploy a new WEKA cluster or run the terraform destroy action.
The region must support the services used in WEKA on GCP. The following sections list these services and the regions that support them.
Cloud Build API
Cloud Deployment Manager V2 API
Cloud DNS API
Cloud Functions API
Cloud Logging API
Cloud Resource Manager API
Cloud Scheduler API
Compute Engine API
Secret Manager API
Serverless VPC Access API
Service Usage API
Workflow Executions API
Workflows API
Other services used or enabled:
App Engine
IAM
Google Cloud Storage
To ensure support for a specific region, it must meet the requirements listed .
Related information
The Snap-To-Object feature enables the committing of all the data of a specific snapshot to an object store.
Using the CLI, you can:
Command: weka fs snapshot upload
Use the following command line to upload an existing snapshot:
weka fs snapshot upload <file-system> <snapshot> [--site site]
Parameters
Command: weka fs download
Use the following command line to create (or recreate) a filesystem from an existing snapshot:
weka fs download <name> <group-name> <total-capacity> <ssd-capacity> <obs-bucket> <locator> [--additional-obs additional-obs] [--snapshot-name snapshot-name] [--access-point access-point]
When creating a filesystem from a snapshot, a background cluster task automatically prefetches its metadata, providing better latency for metadata queries.
Parameters
The locator can be a previously saved locator for disaster scenarios, or you can obtain the locator using the weka fs snapshot command on a system with a live filesystem with snapshots.
If you need to pause and resume the download process, use the command: weka cluster task pause / resume. To abort the download process, delete the downloaded filesystem directly. For details, see .
The Synchronous Snap feature, which allows incremental snapshots to be downloaded from an object store, was temporarily disabled in version 4.2.3. It has been re-enabled in version 4.3.0.
When recovering a snapshot residing on a remote object store, it is required to define the object store bucket containing the snapshot as a local bucket.
A remote object store has restrictions on downloads, and a different local object store is preferable for the QoS reasons explained in .
To recover a snapshot residing on a remote object store, create a new filesystem from this snapshot as follows:
Add a new local object store, using weka fs tier obs add CLI command.
Add a local object store bucket, referring to the bucket containing the snapshot to recover, using weka fs tier s3 add.
Download the filesystem, using weka fs download.
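The three steps above can be sketched as a CLI sequence. The object store name, bucket name, filesystem name, capacities, and locator below are hypothetical placeholders, and the commands require a live cluster, so this sketch prints them for review rather than executing them:

```shell
# Recovery sequence sketch; replace the placeholder names before running for real
steps='weka fs tier obs add recovery-obs
weka fs tier s3 add recovery-bucket
weka fs download fs01 default 10TB 1TB recovery-bucket LOCATOR'
printf '%s\n' "$steps"
```

The weka fs download arguments follow the syntax shown earlier: filesystem name, group name, total capacity, SSD capacity, object store bucket, and the snapshot locator.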
This page describes how to manage snapshots using the GUI.
Using the GUI, you can:
Procedure
To display all snapshots, select Manage > Snapshots from the menu. The Snapshots page opens.
To display a snapshot of a selected filesystem, do one of the following:
Select the Filesystem filter. Then, select the filesystem from the list.
From the menu, select Manage > Filesystems. From the filesystem, select the three dots, and from the menu, select Go To Snapshot.
You can create a snapshot from the Snapshots page or directly from the Filesystems page.
Before you begin
Create a directory for filesystem-level snapshots that serves as the access point for snapshots.
Procedure:
Do one of the following:
From the menu, select Manage > Snapshots. From the Snapshots page, select +Create. The Create Snapshot dialog opens.
From the menu, select Manage > Filesystems. From the Filesystems page, select the three dots, and from the menu, select Create Snapshot (the source filesystem is automatically set).
On the Create Snapshot dialog set the following properties:
Name: A unique name for the filesystem snapshot.
Access Point: A name of the newly-created directory for filesystem-level snapshots that serves as the snapshot's access point. If you do not specify the access point, the system sets it automatically (in GMT format).
You can duplicate a snapshot (clone), which enables creating a writable snapshot from a read-only snapshot.
Procedure
From the menu, select Manage > Snapshots.
From the Snapshots page, select the three dots of the snapshot you want to duplicate, and from the menu, select Duplicate Snapshot.
In the Duplicate Snapshot dialog, set the properties as you would when creating a snapshot. The source filesystem and source snapshot are already set.
Select Duplicate.
When deleting a snapshot, consider the following guidelines:
Deleting a snapshot in parallel with a snapshot upload to the same filesystem is not possible. Uploading a snapshot to a remote object store can take time, so it is advisable to delete the desired snapshot before starting the upload to the remote object store.
When uploading snapshots to both local and remote object stores, the local and remote uploads can progress in parallel. However, consider the case where a remote upload is in progress, a snapshot is deleted, and later a snapshot is uploaded to the local object store. In this scenario, the local snapshot upload waits for the pending deletion of the snapshot, which happens only once the remote snapshot upload is done.
Procedure
From the menu, select Manage > Snapshots.
From the Snapshots page, select the three dots of the snapshot you want to delete, and from the menu, select Remove.
In the Deletion Of Snapshot message, select Yes to delete the snapshot.
Restoring a snapshot to a filesystem or another snapshot (target) modifies the data and metadata of the target.
Before you begin
If you restore the snapshot to a filesystem, make sure to stop the IO services of the filesystem during the restore operation.
Procedure
From the menu, select Manage > Snapshots.
From the Snapshots page, select the three dots of the snapshot you want to restore, and from the menu, select Restore To.
In the Restore To dialog, select the destination: Filesystem or Snapshot.
You can update the snapshot name and access point properties.
Procedure
From the menu, select Manage > Snapshots.
From the Snapshots page, select the three dots of the snapshot you want to update, and from the menu, select Edit.
Modify the Name and Access Point properties as required.
This page describes how to view and manage filesystems using the CLI.
Using the CLI, you can perform the following actions:
This page shows how to create CloudFormation templates using an API call. The same API calls are used by the Self-Service Portal to generate the CloudFormation template before redirecting to AWS.
This page describes the three entity types relevant to data storage in the WEKA system.
A WEKA filesystem operates much like a conventional on-disk filesystem but distributes the data across all servers in the cluster. Unlike traditional filesystems, it is not tied to any specific physical object within the WEKA system and serves as a root directory with space limitations.
The WEKA system supports up to 1024 filesystems, distributing them equally across all SSDs and CPU cores assigned to the cluster. This ensures that tasks like allocating new filesystems or resizing existing ones are immediate, without operational constraints.
Each filesystem is linked to a predefined filesystem group, each with a specified capacity limit. For those belonging to tiered filesystem groups, additional constraints, including a total capacity limit and an SSD capacity cap, apply.
This page describes how to manage quotas using the CLI.
Using the CLI, you can:
curl http://<EXISTING-BACKEND-IP>:14000/dist/v1/install | sudo sh # Install the agent
sudo weka version get 4.2.7.64 # Get the full software
sudo weka version set 4.2.7.64 # Set a default version

weka cluster containers
# Expected response example
CONTAINER ID  HOSTNAME        CONTAINER  IPS              STATUS  RELEASE   FAILURE DOMAIN  CORES  MEMORY   LAST FAILURE  UPTIME
42            protocol-node1  frontend0  192.168.114.31   UP      4.2.7.64  AUTO            1      1.47 GB                0:09:54h
43            protocol-node2  frontend0  192.168.114.115  UP      4.2.7.64  AUTO            1      1.47 GB                0:09:08h
44            protocol-node3  frontend0  192.168.114.13   UP      4.2.7.64  AUTO            1      1.47 GB                0:04:46h

sudo weka local setup container --name frontend0 --only-frontend-cores --cores 1 --join-ips <EXISTING-BACKEND-IP> --allow-protocols true

$ weka fs tier capacity
FILESYSTEM   BUCKET             TOTAL CONSUMED CAPACITY  USED CAPACITY  RECLAIMABLE%  RECLAIMABLE THRESHOLD%
bmrb         wekalow-bmrb       0 B                      0 B            0.00          10.00
cam_archive  wekalow-archive    20.39 TB                 18.80 TB       7.79          10.00
nmr_backup   wekalow-nmrbackup  519.07 GB                518.05 GB      0.19          10.00
$ weka fs tier capacity --filesystem cam_archive
FILESYSTEM   BUCKET           TOTAL CONSUMED CAPACITY  USED CAPACITY  RECLAIMABLE%  RECLAIMABLE THRESHOLD%
cam_archive  wekalow-archive  20.39 TB                 18.80 TB       7.79          10.00
us-east4
Virginia, United States
us-east5
Columbus, United States
us-west1
Oregon, United States
us-west2
Los Angeles, United States
us-west3
Salt Lake City, United States
us-west4
Las Vegas, United States
asia-south1
Mumbai, India
asia-south2
Delhi, India
asia-southeast1
Jurong West, Singapore
asia-southeast2
Jakarta, Indonesia
australia-southeast1
Sydney, Australia
europe-west6
Zurich, Switzerland
europe-central2
Warsaw, Poland
northamerica-northeast1
Montréal, Canada
southamerica-east1
São Paulo, Brazil
southamerica-west1
Santiago, Chile
us-central1
Iowa, United States
us-east1
South Carolina, United States
asia-east1
Changhua County, Taiwan
asia-east2
Hong Kong
asia-northeast1
Tokyo, Japan
asia-northeast2
Osaka, Japan
asia-northeast3
Seoul, South Korea
europe-north1
Hamina, Finland
europe-west1
St. Ghislain, Belgium
europe-west2
London, England
europe-west3
Frankfurt, Germany
europe-west4
Eemshaven, Netherlands
Writable: Determines whether to set the snapshot to be writable.
Source Filesystem: The source filesystem from which to create the snapshot.
Upload to local object store: Determines whether to upload the snapshot to a local object store. You can also upload the snapshot later (see Snap-To-Object).
Upload to remote object store: Determines whether to upload the snapshot to a remote object store. You can also upload the snapshot later.
Select Create.











SSD capacity of the downloaded filesystem.
obs-bucket*
Object store name for tiering.
locator*
Object store locator obtained from a previously successful snapshot upload.
additional-obs
An additional object store name.
If the data to recover resides in two object stores (a second object store attached to the filesystem, and the filesystem has not undergone full migration), this object store is attached in read-only mode.
The snapshot locator must be in the primary object store specified in the obs-bucket parameter.
snapshot-name
The downloaded snapshot name.
The uploaded snapshot name.
access-point
The downloaded snapshot access point.
The uploaded access point.
If the recovered filesystem should also be tiered, add a local object store bucket for tiering using weka fs tier s3 add.
Detach the initial object store bucket from the filesystem.
Assuming you want a remote backup to this filesystem, attach a remote bucket to the filesystem.
Remove the local object store bucket and local object store created for this procedure.
file-system*
Name of the filesystem
snapshot*
Name of the snapshot of the <file-system> filesystem to upload.
site*
Location for the snapshot upload.
Mandatory only if both local and remote buckets are attached.
Possible values: local or remote
Auto-selected if only one bucket for upload is attached.
name*
Name of the filesystem to create.
group-name*
Name of the filesystem group in which the new filesystem is placed.
total-capacity*
The total capacity of the downloaded filesystem.
ssd-capacity*
The following API services must be enabled in the project:
artifactregistry.googleapis.com
cloudbuild.googleapis.com
cloudfunctions.googleapis.com
cloudresourcemanager.googleapis.com
cloudscheduler.googleapis.com
compute.googleapis.com
dns.googleapis.com
eventarc.googleapis.com
iam.googleapis.com
secretmanager.googleapis.com
servicenetworking.googleapis.com
serviceusage.googleapis.com
vpcaccess.googleapis.com
workflows.googleapis.com
The user running the Terraform module requires the following roles to run terraform apply:
roles/cloudfunctions.admin
roles/cloudscheduler.admin
roles/compute.admin
roles/compute.networkAdmin
roles/compute.serviceAgent
roles/dns.admin
roles/iam.serviceAccountAdmin
roles/iam.serviceAccountUser
roles/pubsub.editor
roles/resourcemanager.projectIamAdmin
roles/secretmanager.admin
roles/servicenetworking.networksAdmin
roles/storage.admin
roles/vpcaccess.admin
roles/workflows.admin
smb_protocol_gateways_number = 3
smb_protocol_gateway_instance_type = "c2-standard-8"
smbw_enabled = true
smb_domain_name = "CUSTOMER_DOMAIN"
smb_share_name = "SPECIFY_SMB_SHARE_NAMING"
smb_setup_protocol = true
nfs_protocol_gateways_number = 2
nfs_protocol_gateway_instance_type = "c2-standard-8"
nfs_setup_protocol = true
clients_number = 2
client_instance_type = "c2-standard-8"
roles/cloudfunctions.developer
roles/compute.serviceAgent
roles/compute.loadBalancerServiceUser
roles/pubsub.subscriber
roles/secretmanager.secretAccessor
roles/vpcaccess.serviceAgent
roles/workflows.invoker
roles/compute.networkAdmin
roles/servicenetworking.networksAdmin
roles/cloudbuild.workerPoolOwner
roles/storage.admin
roles/storage.objectAdmin
curl -m 70 -X POST ${google_cloudfunctions_function.terminate_cluster_function.https_trigger_url} \
-H "Authorization:bearer $(gcloud auth print-identity-token)" \
-H "Content-Type:application/json" \
-d '{"name":"Cluster_Name"}'
Boot type: UEFI.
If errors occur during installation and the installation halts (no error messages appear), use the system console to review the logs in /tmp. The primary log is /tmp/ks-pre.log.
To get a command prompt from the Installation GUI, do one of the following:
On macOS, press Ctrl+Option+F2.
On Windows, press Ctrl+Alt+F2.
Burn the WSA image to a DVD or USB stick and boot the server from this physical media.
Dataplane network interfaces (typically 1 or 2; up to 8 are supported).
DNS settings and/or an /etc/hosts file.
Network gateways and routing table adjustments as necessary.
Timeserver configuration.



I3en
i3en.2xlarge, i3en.3xlarge, i3en.6xlarge, i3en.12xlarge, i3en.24xlarge
The following EC2 instance types can operate as client instances. The default EC2 instance type for clients is c5.2xlarge.
M5
m5.xlarge, m5.2xlarge, m5.4xlarge, m5.8xlarge, m5.12xlarge, m5.16xlarge, m5.24xlarge
M5n
m5n.xlarge, m5n.2xlarge, m5n.4xlarge, m5n.8xlarge, m5n.12xlarge, m5n.16xlarge, m5n.24xlarge, m5dn.xlarge, m5dn.2xlarge, m5dn.4xlarge, m5dn.8xlarge, m5dn.12xlarge, m5dn.16xlarge, m5dn.24xlarge
M6a
m6a.xlarge, m6a.2xlarge, m6a.4xlarge, m6a.8xlarge, m6a.12xlarge, m6a.16xlarge, m6a.24xlarge, m6a.32xlarge, m6a.48xlarge
M6i
m6i.xlarge, m6i.2xlarge, m6i.4xlarge, m6i.8xlarge, m6i.12xlarge, m6i.16xlarge, m6i.24xlarge, m6i.32xlarge
M6id
m6id.xlarge, m6id.2xlarge, m6id.4xlarge, m6id.8xlarge, m6id.12xlarge, m6id.16xlarge, m6id.24xlarge, m6id.32xlarge
C3
c3.2xlarge, c3.4xlarge, c3.8xlarge
C5
c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.12xlarge, c5.18xlarge, c5.24xlarge
C5a
c5a.2xlarge, c5a.4xlarge, c5a.8xlarge, c5a.12xlarge, c5a.16xlarge, c5a.24xlarge
C5ad
c5ad.2xlarge, c5ad.4xlarge, c5ad.8xlarge, c5ad.12xlarge, c5ad.16xlarge, c5ad.24xlarge
C5n
c5n.2xlarge, c5n.4xlarge, c5n.9xlarge, c5n.18xlarge
R5
r5.xlarge, r5.2xlarge, r5.4xlarge, r5.8xlarge, r5.12xlarge, r5.16xlarge, r5.24xlarge
R5n
r5n.xlarge, r5n.2xlarge, r5n.4xlarge, r5n.8xlarge, r5n.12xlarge, r5n.16xlarge, r5n.24xlarge
R6a
r6a.xlarge, r6a.2xlarge, r6a.4xlarge, r6a.8xlarge, r6a.12xlarge, r6a.16xlarge, r6a.32xlarge, r6a.48xlarge
R6i
r6i.xlarge, r6i.2xlarge, r6i.4xlarge, r6i.8xlarge, r6i.12xlarge, r6i.16xlarge, r6i.24xlarge, r6i.32xlarge
R6id
r6id.xlarge, r6id.2xlarge, r6id.4xlarge, r6id.8xlarge, r6id.12xlarge, r6id.16xlarge, r6id.24xlarge, r6id.32xlarge
G3
g3.4xlarge, g3.8xlarge, g3.16xlarge
G4dn
g4dn.2xlarge, g4dn.4xlarge, g4dn.8xlarge, g4dn.12xlarge, g4dn.16xlarge
G5
g5.xlarge, g5.2xlarge, g5.4xlarge, g5.8xlarge, g5.12xlarge, g5.16xlarge
Inf1
inf1.2xlarge, inf1.6xlarge, inf1.24xlarge
Inf2
inf2.xlarge, inf2.8xlarge, inf2.24xlarge, inf2.48xlarge
I3en
i3en.xlarge, i3en.2xlarge, i3en.3xlarge, i3en.6xlarge, i3en.12xlarge, i3en.24xlarge
Hpc7a
hpc7a.2xlarge, hpc7a.48xlarge, hpc7a.96xlarge
Related information
Command: weka fs
Use this command to view information on the filesystems in the WEKA system.
Command: weka fs create
Use the following command line to create a filesystem:
weka fs create <name> <group-name> <total-capacity> [--ssd-capacity <ssd-capacity>] [--thin-provision-min-ssd <thin-provision-min-ssd>] [--thin-provision-max-ssd <thin-provision-max-ssd>] [--max-files <max-files>] [--encrypted] [--obs-name <obs-name>] [--auth-required <auth-required>] [--data-reduction]
Parameters
name*
Descriptive label for the filesystem, limited to 32 characters and excluding slashes (/) or backslashes (\).
group-name*
Name of the filesystem group to which the new filesystem is to be connected.
total-capacity*
Total capacity of the new filesystem. Minimum value: 1GiB.
To create a new filesystem, the SSD space for it must be free and unprovisioned. With thin-provisioned filesystems, that might not be the case: SSD space can be occupied by the thin-provisioned portion of other filesystems. Even if those filesystems are tiered, so their data can be released to the object store or deleted, the SSD space can fill up again as data keeps being written or promoted from the object store.
In this case, use the weka fs reserve CLI command to reserve space for the new filesystem. Once enough SSD space has been cleared (either by releasing data to the object store or explicitly deleting it), you can create the new filesystem using the reserved space.
Command: weka fs update
Use the following command line to edit an existing filesystem:
weka fs update <name> [--new-name=<new-name>] [--total-capacity=<total-capacity>] [--ssd-capacity=<ssd-capacity>] [--thin-provision-min-ssd <thin-provision-min-ssd>] [--thin-provision-max-ssd <thin-provision-max-ssd>] [--max-files=<max-files>] [--auth-required=<auth-required>]
Parameters
name*
Name of the filesystem to edit.
new-name
New name for the filesystem.
total-capacity
Total capacity of the edited filesystem.
ssd-capacity
SSD capacity of the edited filesystem. Minimum value: 1GiB.
thin-provision-min-ssd
For thin-provisioned filesystems, this is the minimum SSD capacity guaranteed to always be available to this filesystem. Minimum value: 1GiB.
Command: weka fs delete
Use the following command line to delete a filesystem:
weka fs delete <name> [--purge-from-obs]
Parameters
name*
Name of the filesystem to delete.
purge-from-obs
For a tiered filesystem, if set, all filesystem data is deleted from the object store bucket.
False
Using purge-from-obs removes all data from the object store, including any backup data or snapshots created from this filesystem. (If this filesystem was downloaded from a snapshot of a different filesystem, the original snapshot data is left intact.)
If any of the removed snapshots have been (or are being) downloaded and used by a different filesystem, that filesystem stops functioning correctly: data might become unavailable, and errors might occur when accessing it.
It is possible to either un-tier or migrate such a filesystem to a different object store bucket before deleting the snapshots it has downloaded.
To generate a CloudFormation template, it is first necessary to decide which WEKA system version is to be installed. This is performed using the https://<token>@get.weka.io/dist/v1/release API which provides a list of all available versions:
The list of releases available for installation is sorted in reverse chronological order, starting with the most recent release. By default, 50 results are provided per page. To receive more results, use the page=N query parameter to retrieve the Nth page.
Each release contains an ID field that identifies the release. In the examples below, version 3.6.1 has been used.
To generate a CloudFormation template, make a POST request to the https://<token>@get.weka.io/dist/v1/aws/cfn/<version> API:
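A request of this shape can be sketched as follows. The token and version ID are placeholders, the file name cfn-request.json is illustrative, and the body follows the cluster-list format described below (10 i3en.2xlarge backends and 2 r3.xlarge clients):

```shell
# Hypothetical request body for the template-generation API.
cat > cfn-request.json <<'EOF'
{
  "cluster": [
    {"role": "backend", "instance_type": "i3en.2xlarge", "count": 10},
    {"role": "client", "instance_type": "r3.xlarge", "count": 2}
  ]
}
EOF

# Sanity-check that the body is valid JSON before sending it.
python3 -m json.tool cfn-request.json > /dev/null && echo "request body OK"

# POST it to the API (substitute a real token and release ID):
# curl -X POST "https://<token>@get.weka.io/dist/v1/aws/cfn/3.6.1" \
#      -H "Content-Type: application/json" -d @cfn-request.json
```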
In the example above, a template is generated for a cluster with 10 i3en.2xlarge backend instances and 2 r3.xlarge client instances. For details, see the Deployment Types and Supported EC2 instance types sections.
The https://<token>@get.weka.io/dist/v1/aws/cfn/<version> API provides a JSON object with a cluster property. cluster is a list of instance types, roles, and counts:
Property
Description
role
Either backend or client.
See the section.
instance_type
One of the supported instance types, according to the role and supported instances.
See the section.
count
The number of instances of this type to include in the template.
ami_id
When role is client, it is possible to specify a custom AMI-ID.
For details, see the section.
net
Either dedicated or shared, in client role only.
For details, see the section.
It is possible to specify multiple groups of instances by adding more role/instance_type/count objects to the cluster array, as long as there are at least 6 backend instances (the minimum number of backend instances required to deploy a cluster).
When specifying an ami_id in client groups, the specified AMI will be used when launching the client instances. The Weka system will be installed on top of this AMI in a separate EBS volume.
When ami_id is not specified, the client instances are launched with the latest Amazon Linux supported by the Weka system version selected to be installed.
Note the following when using a custom AMI-ID:
AMIs are stored per region. Make sure to specify an AMI-ID that matches the region in which the CloudFormation template is deployed.
The AMI operating system must be one of the supported operating systems listed in the Prerequisites and compatibility section of the version installed. If the AMI defined is not supported or has an unsupported operating system, the installation may fail, and the CloudFormation stack will not be created successfully.
By default, both client and backend instances are launched in the dedicated networking mode. Although this cannot be changed for backends, it can be controlled for client instances.
Dedicated networking means an ENI is created for internal cluster traffic in the client instances. This allows the WEKA system to bypass the kernel and provide throughput only limited by the instance network.
In shared networking, the client shares the instance’s network interface with all traffic passing through the kernel. Although slower, this mode is sometimes desirable when an ENI cannot be allocated or if the operating system does not allow more than one NIC.
The returned result is a JSON object with two properties: url and quick_create_stack.
The url property is a URL to an S3 object containing the generated template.
To deploy the CloudFormation template through the AWS console, a quick_create_stack property contains links to the console for each public AWS region. These links are pre-filled with your API token as a parameter to the template.
It is also possible to receive the template directly from the API call, without saving it in a bucket. To do this, use the ?type=template query parameter:
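For example, a sketch of the same request with the query parameter appended (token, release ID, and the cluster body are placeholders):

```shell
# Ask the API to return the template body directly instead of an S3 URL.
url="https://<token>@get.weka.io/dist/v1/aws/cfn/3.6.1?type=template"
echo "POST ${url}"
# curl -X POST "${url}" -H "Content-Type: application/json" \
#      -d '{"cluster":[{"role":"backend","instance_type":"i3en.2xlarge","count":10}]}'
```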
The CloudFormation stack parameters are described in the Cluster CloudFormation Stack section.
The CloudFormation template contains an instance role that allows the WEKA cluster instances to call the following AWS APIs:
ec2:DescribeInstances
ec2:DescribeNetworkInterfaces
ec2:AttachNetworkInterface
ec2:CreateNetworkInterface
ec2:ModifyNetworkInterfaceAttribute
ec2:DeleteNetworkInterface
If tiering is configured, the following additional AWS API permissions are granted:
s3:DeleteObject
s3:GetObject
s3:PutObject
s3:ListBucket
Once a CloudFormation template has been generated, it is possible to create a stack using the AWS console or the AWS CLI.
When the deployment is complete, the stack status updates to CREATE_COMPLETE, and it is possible to access the WEKA cluster GUI by going to the Outputs tab of the CloudFormation stack and clicking the GUI link.
If the deployment is unsuccessful, see Troubleshooting for the resolution of common deployment issues.
The available SSD capacity of individual filesystems cannot exceed the total SSD net capacity allocated to all filesystems. This structured approach ensures effective resource management and optimal performance within the WEKA system.
Thin provisioning, a dynamic SSD capacity allocation method, addresses user needs on demand. In this approach, the filesystem's capacity is defined by a minimum guaranteed capacity and a maximum capacity, which can virtually exceed the available SSD capacity.
For users who consume their guaranteed minimum capacity, the system allocates additional capacity on demand, up to the total available SSD capacity. Conversely, as users free up space by deleting files or transferring data, the idle space is reclaimed and repurposed for other workloads that require SSD capacity.
Thin provisioning proves beneficial in diverse scenarios:
Tiered filesystems: On tiered filesystems, available SSD capacity is used for enhanced performance and can be released to the object store when needed by other filesystems.
Auto-scaling groups: Thin provisioning facilitates automatic expansion and reduction (shrinking) of the filesystem's SSD capacity when using auto-scaling groups, ensuring optimal performance.
Filesystems separation per project: Creating separate filesystems for each project becomes efficient with thin provisioning, especially when administrators don't anticipate full simultaneous usage of all filesystems. Each filesystem is allocated a minimum capacity but can consume more based on the actual available SSD capacity, offering flexibility and resource optimization.
Number of files or directories: Up to 6.4 trillion (6.4 * 10^12)
Number of files in a single directory: Up to 6.4 billion (6.4 * 10^9)
Total capacity with object store: Up to 14 EB
Total SSD capacity: Up to 512 PB
File size: Up to 4 PB
WEKA introduces a cluster-wide data reduction feature that can be activated for individual filesystems. This capability incorporates block-variable differential compression and advanced de-duplication techniques across all filesystems, significantly reducing the required storage capacity for user data and delivering substantial cost savings.
The effectiveness of the compression ratio hinges on the specific workload, proving particularly efficient for text-based data, large-scale unstructured datasets, log analysis, databases, code repositories, and sensor data.
The data reduction applies exclusively to user data (not metadata) per filesystem. The data reduction can be enabled only on thin-provision, non-tiered, and unencrypted filesystems within a cluster holding a valid Data Efficiency Option (DEO) license.
Data reduction is a post-process activity. New data written to the cluster is written uncompressed. The data reduction process runs as a background task with lower priority than tasks serving user IO requests. The data reduction starts when enough data is written to the filesystems.
Data reduction tasks:
Ingestion:
Clusterization: Applied on data blocks at the 4K block level. The system identifies similarity across uncompressed data in all filesystems enabled for data reduction.
Compression: The system reads similar and unique blocks, compressing each type separately. Compressed data is then written to the filesystem.
Defragmentation:
Uncompressed data that has been successfully compressed is marked for deletion.
The defrag process waits for sufficient blocks to be invalidated and then permanently deletes them.
WEKA ensures security by offering encryption for data at rest (residing on SSD and object store) and data in transit. This security feature is activated by enabling the filesystem encryption option. The decision on whether a filesystem should be encrypted is crucial during the filesystem creation process.
To create encrypted filesystems, deploying a Key Management System (KMS) is imperative, reinforcing the protection of sensitive data.
Related topics
In addition to the capacity constraints, each filesystem in WEKA has specific limitations on metadata. The overall system-wide metadata cap depends on the SSD capacity allocated to the WEKA system and the RAM resources allocated to the WEKA system processes.
WEKA carefully tracks metadata units in RAM. If the metadata units approach the RAM limit, they are intelligently paged to the SSD, triggering alerts. This proactive measure allows administrators sufficient time to increase system resources while sustaining IO operations with minimal performance impact.
By default, the metadata limit linked to a filesystem correlates with the filesystem's SSD size. However, users have the flexibility to override this default by defining a filesystem-specific max-files parameter. This logical limit empowers administrators to regulate filesystem usage, providing the flexibility to update it as needed.
The cumulative metadata limits across all filesystems can surpass the system's entire metadata information that fits in RAM. In potential impact scenarios, the system optimizes by paging the least recently used units to disk, ensuring operational continuity with minimal disruption.
Every metadata unit within the WEKA system demands 4 KB of SSD space (excluding tiered storage) and occupies 20 bytes of RAM.
Throughout this documentation, the restriction on metadata per filesystem is denoted as the max-files parameter. This parameter includes the files' count and respective sizes.
The following table outlines the requisite metadata units based on file size. These specifications apply to files stored on SSDs or tiered to object stores.
< 0.5 MB
1
A filesystem containing 1 billion files, each sized at 64 KB, requires 1 billion metadata units.
0.5 MB - 1 MB
2
A filesystem containing 1 billion files, each sized at 750 KB, requires 2 billion metadata units.
> 1 MB
2 for the first 1 MB, plus 1 per additional MB
A filesystem containing 1 million files, each sized at 129 MB, requires 130 million metadata units. This calculation includes 2 units for the first 1 MB and an additional unit per MB for the subsequent 128 MB.
A filesystem containing 10 million files, each sized at 1.5 MB, requires 30 million metadata units.
A filesystem containing 10 million files, each sized at 3 MB, requires 40 million metadata units.
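The rules in the table above can be sketched as a small helper (illustrative only, not a WEKA tool). `units` takes a file size in KB and returns the metadata units for one file:

```shell
# Illustrative helper: metadata units for a single file, given its size in KB.
# Rules from the table: < 0.5 MB -> 1; 0.5-1 MB -> 2; > 1 MB -> 2 for the
# first MB plus 1 per additional MB (rounded up).
units() {
  awk -v kb="$1" 'BEGIN {
    if (kb < 512)        print 1
    else if (kb <= 1024) print 2
    else                 print 2 + int((kb - 1024 + 1023) / 1024)
  }'
}
units 64       # 64 KB file   -> 1
units 750      # 750 KB file  -> 2
units 1536     # 1.5 MB file  -> 3
units 132096   # 129 MB file  -> 130
```

Multiplying by the file count reproduces the examples above; for instance, 1 billion 64 KB files require 1 billion metadata units.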
Related topics
The maximum size for extended attributes (xattr) of a file or directory is 1024 bytes. This attribute space is used by Access Control Lists (ACLs) and Alternate Data Streams (ADS) within an SMB cluster and when configuring SELinux. When using Windows clients, named streams in smb-w are saved in the file’s xattr.
Given its finite capacity, exercise caution when using lengthy or complex ACLs and ADS on a WEKA filesystem.
When encountering a message indicating the file size exceeds the limit allowed and cannot be saved, carefully decide which data to retain. Strategic planning and selective use of ACLs and ADS contribute to optimizing performance and stability.
Within the WEKA system, object stores are an optional external storage medium strategically designed to store warm data. These object stores, employed in tiered WEKA system configurations, can be cloud-based, located in the same location as the WEKA cluster, or at a remote location.
WEKA extends support for object stores, leveraging their capabilities for tiering (both tiering and local snapshots) and backup (snapshots only). Both tiering and backup functionalities can be concurrently used for the same filesystem, enhancing flexibility.
The optimal usage of object store buckets comes into play when a cost-effective data storage tier is imperative and traditional server-based SSDs prove insufficient in meeting the required price point.
An object store bucket definition comprises crucial components: the object store DNS name, bucket identifier, and access credentials. The bucket must remain dedicated to the WEKA system, ensuring exclusivity and security by prohibiting access from other applications.
Moreover, the connectivity between filesystems and object store buckets extends beyond essential storage. This connection proves invaluable in data lifecycle management and facilitates the innovative Snap-to-Object features, offering a holistic approach to efficient data handling within the WEKA system.
Related topics
Within the WEKA system, the organization of filesystems takes place through the creation of filesystem groups, with a maximum limit set at eight groups.
Each of these filesystem groups comes equipped with tiering control parameters. When filesystems are tiered and have associated object stores, the tiering policy remains consistent for all tiered filesystems residing within the same filesystem group. This unification ensures streamlined management and unified control over tiering strategies within the WEKA system.
Related topics
i3en.2xlarge, i3en.3xlarge, i3en.6xlarge, i3en.12xlarge, i3en.24xlarge
The following EC2 instance types can operate as client instances.
C5
c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.12xlarge, c5.18xlarge, c5.24xlarge
C5n
c5n.2xlarge, c5n.4xlarge, c5n.9xlarge, c5n.18xlarge
C6a
c6a.2xlarge, c6a.4xlarge, c6a.8xlarge, c6a.12xlarge, c6a.16xlarge, c6a.32xlarge, c6a.48xlarge
C6in
c6in.2xlarge, c6in.4xlarge, c6in.8xlarge, c6in.12xlarge, c6in.16xlarge, c6in.24xlarge, c6in.32xlarge
C7i
c7i.2xlarge, c7i.4xlarge, c7i.8xlarge, c7i.12xlarge, c7i.16xlarge, c7i.24xlarge, c7i.48xlarge
Related topics
Related information
I3en
Command: weka fs quota set / weka fs quota set-default
Before using the commands, verify that a mount point to the relevant filesystem is set.
Use the following commands to set a directory quota:
weka fs quota set <path> [--soft soft] [--hard hard] [--grace grace] [--owner owner]
It is also possible to set a default quota on a directory. The default quota does not apply to the directory itself (or to existing child directories); it is automatically set on new directories created directly under it.
Use the following command to set a default quota of a directory:
weka fs quota set-default <path> [--soft soft] [--hard hard] [--grace grace] [--owner owner]
path*
Path to the directory to set the quota. The relevant filesystem must be mounted when setting the quota.
soft
Soft quota limit.
Exceeding this limit is reported as an exceeded quota, but writes are not blocked until the grace period expires.
The capacity can be in decimal or binary units.
Format: 1GB, 1TB, 1GiB, 1TiB, unlimited
unlimited
hard
Hard quota limit.
When this limit is reached, no more writes are allowed until space is cleared in the directory.
The capacity can be in decimal or binary units.
Format: 1GB, 1TB, 1GiB, 1TiB, unlimited
unlimited
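Note that the decimal (GB/TB) and binary (GiB/TiB) forms accepted by these flags denote different sizes. A small illustrative converter (not part of the WEKA CLI) makes the distinction concrete:

```shell
# Illustrative only: convert a quota capacity string to bytes,
# treating GB/TB as decimal units and GiB/TiB as binary units.
to_bytes() {
  case "$1" in
    *GiB) awk -v n="${1%GiB}" 'BEGIN { printf "%d\n", n * 1024^3 }' ;;
    *TiB) awk -v n="${1%TiB}" 'BEGIN { printf "%d\n", n * 1024^4 }' ;;
    *GB)  awk -v n="${1%GB}"  'BEGIN { printf "%d\n", n * 1000^3 }' ;;
    *TB)  awk -v n="${1%TB}"  'BEGIN { printf "%d\n", n * 1000^4 }' ;;
  esac
}
to_bytes 1GB    # -> 1000000000
to_bytes 1GiB   # -> 1073741824
```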
Command: weka fs quota list / weka fs quota list-default
Use the following command to list the directory quotas (by default, only exceeded quotas are listed):
weka fs quota list [fs-name] [--snap-name snap-name] [--path path] [--under under] [--over over] [--quick] [--all]
fs-name
Shows quota report only on the specified valid filesystem.
All filesystems
snap-name
Shows the quota report from the time of the snapshot.
Must be a valid snapshot name and be given along with the corresponding fs-name.
path
Path to a directory. Shows quota report only on the specified directory. The relevant filesystem must be mounted in the server running the query.
Use the following command to list the directory default quotas:
weka fs quota list-default [fs-name] [--snap-name snap-name] [--path path]
fs-name
Shows the default quotas only on the specified valid filesystem.
All filesystems
snap-name
Shows the default quotas from the time of the snapshot.
Must be a valid snapshot name and be given along with the corresponding fs-name.
path
Path to a directory. Shows the default quotas report only on the specified directory. The relevant filesystem must be mounted in the server running the query.
Command: weka fs quota unset / weka fs quota unset-default
Use the following commands to unset a directory quota:
weka fs quota unset <path>
Use the following command to unset a default quota of a directory:
weka fs quota unset-default <path>
path*
Path to the directory whose quota to unset. The relevant filesystem must be mounted when unsetting the quota.
This page describes how to add clients to a bare-metal cluster.
Clients run applications that access the WEKA filesystem but do not contribute CPUs or drives to the cluster. They connect solely to use the filesystems.
By default, WEKA uses Cgroups to limit or isolate resources for its exclusive use, such as assigning specific CPUs.
Cgroups (Control Groups) is a Linux kernel feature that allows you to limit, prioritize, and isolate the resource usage (CPU, memory, disk I/O, network) of a collection of processes. It helps allocate resources among user-defined groups of tasks and manage their performance effectively.
Versions of Cgroups:
CgroupsV1: Uses multiple hierarchies for different resource controllers, offering fine-grained control but with increased complexity.
CgroupsV2: Combines all resource controllers into a single unified hierarchy, simplifying management and providing better resource isolation and a more consistent interface.
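One quick way to tell which version the OS is running is to inspect the filesystem type mounted at /sys/fs/cgroup (a general Linux check, not a WEKA command):

```shell
# "cgroup2fs" indicates the unified CgroupsV2 hierarchy;
# "tmpfs" indicates the CgroupsV1 multi-hierarchy layout.
stat -fc %T /sys/fs/cgroup
```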
WEKA requirements:
Backends and clients serving protocols: Must run on an OS with CgroupsV1 (legacy) support. CgroupsV2 is supported on backends and clients but is incompatible with protocol cluster deployments.
Cgroups mode compatibility: When setting up Cgroups on clients or backends, ensure that the Cgroups configuration (whether using CgroupsV1 or CgroupsV2) aligns with the operating system's capabilities and configuration.
The configuration of Cgroups depends on the installed operating system, and it is important that the cluster server settings match the OS configuration to ensure proper resource management and compatibility.
Customers using a supported OS with CgroupsV2 or wanting to modify the Cgroups usage can set the cgroups usage during the agent installation or by editing the service configuration file. The specified mode must match the existing Cgroups configuration in the OS.
The Cgroups setting includes the following modes:
auto: WEKA tries to use CgroupsV1 (default). If that fails, the Cgroups mode is automatically set to none.
force: WEKA uses CgroupsV1. If the OS does not support it, WEKA fails.
force_v2: WEKA uses CgroupsV2. If the OS does not support it, WEKA fails. This mode is not supported in protocol cluster deployments.
In the installation command line, specify the required Cgroups mode (WEKA_CGROUPS_MODE).
Example:
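A hypothetical sketch, assuming the installer reads WEKA_CGROUPS_MODE from the environment (the exact install command depends on your deployment):

```shell
# Assumption: the WEKA agent installer honors WEKA_CGROUPS_MODE when set.
export WEKA_CGROUPS_MODE=force_v2
echo "installing WEKA agent with cgroups mode: ${WEKA_CGROUPS_MODE}"
# ./install.sh    # run the installer with the variable exported
```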
You can set the Cgroups mode in the service configuration file for clients and backends.
Open the service configuration file /etc/wekaio/service.conf and add one of the following:
cgroups_mode=auto
cgroups_mode=force
cgroups_mode=force_v2
Example:
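A sketch of the edit (a temp file stands in for /etc/wekaio/service.conf, which requires root to modify; restart the agent afterward for the change to take effect):

```shell
conf=$(mktemp)                           # stand-in for /etc/wekaio/service.conf
echo "cgroups_mode=force_v2" >> "$conf"  # choose auto, force, or force_v2
grep '^cgroups_mode=' "$conf"
# prints: cgroups_mode=force_v2
```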
To use the WEKA filesystems from a client, just call the mount command. The mount command automatically installs the software version, and there is no need to join the client to the cluster.
To mount a filesystem in this method, first, install the WEKA agent from one of the backend instances and then mount the filesystem.
Example:
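A hedged sketch of the flow ("backend-1" and "default" are placeholders, and the agent-download URL and mount form shown here are assumptions to verify against your cluster):

```shell
backend="backend-1"   # placeholder: any backend server in the cluster
fs="default"          # placeholder: name of the filesystem to mount
# The following lines require a reachable WEKA cluster, so they are shown commented:
# curl "http://${backend}:14000/dist/v1/install" | sh   # install the agent from a backend
# mkdir -p /mnt/weka
# mount -t wekafs "${backend}/${fs}" /mnt/weka
echo "would mount ${backend}/${fs} at /mnt/weka"
```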
For the first mount, this installs the WEKA software and automatically configures the client. For more information on mount and configuration options, refer to .
Configuring the client OS to mount the filesystem at boot time automatically is possible. For more information, refer to or .
Install the WEKA software.
Once the WEKA software tarball is downloaded from , run the untar command.
Run the install.sh command on each server, according to the instructions in the Install tab.
Command: weka cluster container add
Once the client is in the stem mode (this is the mode defined immediately after running the install.sh command), use the following command line on the client to add it to the cluster:
Parameters in the command line
Command: weka cluster container cores
To configure the new container as a client, run the following command:
Parameters in the command line
Command: weka cluster container net add
If a high-performance client is required and the appropriate network NIC is available, use the following command to configure the networking interface used by the client to communicate with the WEKA cluster:
Parameters
Command: weka cluster container apply
After successfully configuring the container and its network device, run the following command to finalize the configuration by activating the container:
Parameters
The planning of a WEKA system is essential before the actual installation process. It involves the planning of the following:
Total SSD net capacity and performance requirements
SSD resources
Memory resources
CPU resources
Network
A WEKA system cluster runs on a group of servers with local SSDs. To plan these servers, the following information must be clarified and defined:
Capacity: Plan your net SSD capacity. Tiering to object stores can be added after installation; at the planning stage, only the SSD capacity is required.
Redundancy scheme: Define the optimal redundancy scheme required for the WEKA system, as explained in .
Failure domains: Determine whether to use failure domains (optional), and if yes, determine the number of failure domains and the potential number of servers in each failure domain, as described in , and plan accordingly.
Once all this data is clarified, you can plan the SSD net storage capacity accordingly, as defined in the . Adhere to the following information, which is required during the installation process:
Cluster size (number of servers).
SSD capacity for each server, for example, 12 servers with a capacity of 6 TB each.
Planned protection scheme, for example, 6+2.
Planned failure domains (optional).
SSD resource planning involves how the defined capacity is implemented for the SSDs. For each server, the following has to be determined:
The number of SSDs and the capacity of each SSD (their product must satisfy the required capacity per server).
The selected technology (NVMe, SAS, or SATA) and the specific SSD models have implications for SSD endurance and performance.
The total per-server memory requirement is the sum of the following requirements:
Contact the Customer Success Team to explore options for configurations requiring more than 384 GB of memory per server.
A system with 16 servers with the following details:
Number of Frontend processes: 1
Number of Compute processes: 13
Number of Drive processes: 6
Total raw capacity: 983,000 GB
Calculations:
Fixed: 2.8 GB
Frontend processes: 1 x 2.2 = 2.2 GB
Compute processes: 13 x 3.9 = 50.7 GB
Drive processes: 6 x 2 = 12 GB
Total memory requirement per server = 2.8 + 2.2 + 50.7 + 12 + 91 + 16 + 2 + 1.9 = ~179 GB
For the same system as in example 1, but with smaller files, the required memory for metadata would be larger.
For an average file size of 64 KB, the number of files is potentially up to:
~12 billion files for all servers.
~980 million files per server.
Required memory for metadata: 20 Bytes x 980 million files x 1 unit = ~19.6 GB
Total memory requirement per server = 2.8 + 2.2 + 50.7 + 12 + 91 + 16 + 2 + 19.6 = ~196 GB
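The per-server calculation above can be sketched as a small helper. This is an illustrative sketch using only the coefficients shown in the examples (2.8 GB fixed, 2.2 GB per frontend process, 3.9 GB per compute process, 2 GB per drive process, and 20 bytes of metadata memory per file unit); the 91, 16, and 2 GB terms are the examples' remaining deployment-specific requirements and are passed in as `other_gb`:

```python
def metadata_gb(files_per_server, units_per_file=1):
    """Metadata memory: 20 bytes per file unit, converted from bytes to GB."""
    return 20 * files_per_server * units_per_file / 1e9

def per_server_memory_gb(frontend, compute, drive, metadata, other_gb=0.0):
    """Sum of the fixed, per-process, metadata, and deployment-specific terms."""
    return 2.8 + frontend * 2.2 + compute * 3.9 + drive * 2.0 + metadata + other_gb

# Example 1: 1 frontend, 13 compute, 6 drive processes, ~1.9 GB metadata
print(round(per_server_memory_gb(1, 13, 6, 1.9, other_gb=91 + 16 + 2)))   # 179

# Example 2: same system, 64 KB average file size -> ~980 million files per server
print(round(per_server_memory_gb(1, 13, 6, metadata_gb(980e6), other_gb=91 + 16 + 2)))  # 196
```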
The WEKA software on a client requires 5 GB minimum additional memory.
The WEKA system implements a Non-Uniform Memory Access (NUMA) aware CPU allocation strategy to maximize overall system performance. Core allocation uses all NUMA nodes equally to balance memory usage across them.
Consider the following regarding the CPU allocation strategy:
WEKA allocates CPU resources by assigning individual cores to tasks in a cgroup.
Cores in a cgroup are not available to run any other user processes.
On systems with Intel hyper-threading enabled, the corresponding sibling cores are placed into a cgroup along with the physical ones.
Plan the number of physical cores dedicated to the WEKA software according to the following guidelines and limitations:
Dedicate at least one physical core to the operating system; the rest can be allocated to the WEKA software.
Generally, it is recommended to allocate as many cores as possible to the WEKA system.
A backend server can have any number of cores. However, a single container within a backend server can use a maximum of 19 physical cores.
On the client side, the WEKA software consumes a single physical core by default. The WEKA software consumes two logical cores if the client is configured with hyper-threading.
If the client networking is defined as UDP, dedicated CPU core resources are not allocated to WEKA. Instead, the operating system allocates CPU resources to the WEKA processes like any other.
WEKA backend servers support connections to both InfiniBand and Ethernet networks using network interface cards (NICs). When deploying backend servers, ensure that all servers in the WEKA system are connected using the same network technology for each type of network.
InfiniBand connections are prioritized over Ethernet links for data traffic. Both network types must be operational to ensure system availability, so consider adding redundant ports for each network type.
Clients can connect to the WEKA system over either InfiniBand or Ethernet.
A network port can be dedicated exclusively to the WEKA system or shared between the WEKA system and other applications.
Clients can be configured with networking as described above to achieve the highest performance and lowest latency; however, this setup requires compatible hardware and dedicated CPU core resources. If compatible hardware is not available or a dedicated CPU core cannot be allocated to the WEKA system, client networking can instead be configured to use the kernel’s UDP service. This configuration results in reduced performance and increased latency.
This page explores the Snap-To-Object feature, which enables seamless data transfer from a designated snapshot to an object store.
The Snap-To-Object feature enables the consolidation of all data from a specific snapshot, including filesystem metadata, every file, and all associated data, into an object store. The complete snapshot data can be used to restore the data on the WEKA cluster or another cluster running the same or a higher WEKA version.
The Snap-To-Object feature is helpful for a range of use cases, as follows:
On-premises and cloud use cases
Cloud-only use cases
Hybrid cloud use case
Suppose it is required to recover data stored on a WEKA filesystem due to a complete or partial loss of the data within it. You can use a data snapshot saved to an object store to recreate the same data in the snapshot on the same or another WEKA cluster.
This use case supports backup in any of the following WEKA system deployment modes:
Local object store: The WEKA cluster and object store are close to each other and will be highly performant during data recovery operations. The WEKA cluster can recover a filesystem from any snapshot on the object store for which it has a reference locator.
Remote object store: The WEKA cluster and object store are located in different geographic locations, typically with longer latencies between them. In such a deployment, you can send snapshots to local and remote object stores.
Local object store replicating to a remote object store: A local object store in one data center replicates data to another object store using the object store system's own replication features. This deployment provides both integrated tiering and Snap-To-Object local high performance between WEKA and the object store. The object store manages the data replication, enabling data survival in multiple regions.
The periodic creation and uploading of snapshots to an object store generate an archive, allowing access to past copies of data.
When any compliance or application requirement occurs, it is possible to make the relevant snapshot available on a WEKA cluster and view the content of past versions of data.
Combining a local cluster with a replicated object store in another data center allows for the following use cases:
Disaster recovery: where you can take the replicated data and make it available to applications in the destination location.
Backup: where you can take multiple snapshots and create point-in-time images of the data that can be mounted, and specific files may be restored.
In a public cloud, with a WEKA cluster running on compute instances with local SSDs, sometimes the data needs to be retained even though ongoing access to the WEKA cluster is unnecessary. In such cases, using Snap-To-Object can save the costs of the compute instances running the WEKA system.
To pause a cluster, you need to take a snapshot of the data and then use Snap-To-Object to upload the snapshot to an S3-compliant object store. When the upload process is complete, the WEKA cluster instances can be stopped, and the data is safe on the object store.
To re-enable access to the data, you need to form a new cluster or use an existing one and download the snapshot from the object store.
This use case ensures data protection against cloud availability zone failures in the various clouds: AWS Availability Zones, Google Cloud Platform (GCP) Zones, and Oracle Cloud Infrastructure (OCI) Availability Domains.
In AWS, for example, the WEKA cluster can run on a single availability zone, providing the best performance and no cross-AZ bandwidth charges. Using Snap-To-Object, you can take and upload snapshots of the cluster to S3 (which is a cross-AZ service). If an AZ failure occurs, a new WEKA cluster can be created on another AZ, and the last snapshot uploaded to S3 can be downloaded to this new cluster.
Using WEKA snapshots uploaded to S3 combined with S3 cross-region replication enables the migration of a filesystem from one region to another.
On-premises WEKA deployments can often benefit from cloud elasticity to consume large quantities of computation power for short periods.
Cloud bursting requires the following steps:
Take a snapshot of an on-premises WEKA filesystem.
Upload the data snapshot to S3 at AWS using Snap-To-Object.
Create a WEKA cluster in AWS and make the data uploaded to S3 available to the newly formed cluster at AWS.
Process the data in-cloud using cloud compute resources.
Optionally, you may also promote data back to on-premises by doing the following:
Take a snapshot of the WEKA filesystem in the cloud on completion of cloud processing.
Upload the cloud snapshot to the on-premises WEKA cluster.
When uploading a snapshot to an object store, adhere to the following requirements:
WEKA supports simultaneously uploading multiple snapshots from different filesystems to remote and local object stores.
A writable snapshot is a clone of the live filesystem or of another snapshot at a specific time, and its data keeps changing. Therefore, its data is tiered according to the tiering policies, but it cannot be uploaded to the object store the way a read-only snapshot can.
For space and bandwidth efficiency, it is highly recommended that snapshots be uploaded in chronological order to the remote object store.
You are not required to upload all snapshots, or the same snapshots, to a local object store. However, once a snapshot (for example, a monthly snapshot) is uploaded to the remote object store, it could be more efficient to also upload a previous snapshot (for example, the daily snapshot before it) to the remote object store.
You cannot delete a snapshot while another snapshot of the same filesystem is being uploaded. Because uploading a snapshot to a remote object store can take a while, it is recommended to delete any snapshots that require deletion before starting the upload to the remote object store.
This requirement is critical when uploading snapshots to the local and remote object stores in parallel. Consider the following:
A remote upload is in progress.
A snapshot is deleted.
Later, the snapshot is uploaded to the local object store.
In this scenario, the local snapshot upload waits for the pending deletion of the snapshot, which occurs only once the remote snapshot upload is done.
You can pause or abort a snapshot upload using the commands described in the background tasks section if required.
Synchronous snapshots are point-in-time backups for filesystems. When taken, they consist only of the changes since the last snapshot. When you download and restore a synchronous snapshot to a live filesystem, the system reconstructs the filesystem on the fly with the changes since the previous snapshot.
This capability for filesystem snapshots potentially makes them more cost-effective because you do not have to update the entire filesystem with each snapshot. You only update the changes since the last snapshot.
It is recommended that the synchronous snapshots be downloaded in chronological order.
The Synchronous Snap feature, which allows incremental snapshots to be downloaded from an object store, was temporarily disabled in version 4.2.3. It has been re-enabled in version 4.3.0.
Deleting a snapshot uploaded from a filesystem removes all its data from the local object store bucket. It does not remove any data from a remote object store bucket.
If the snapshot has been (or is) downloaded and used by a different filesystem, that filesystem stops functioning correctly, data can be unavailable, and errors can occur when accessing the data.
Before deleting the downloaded snapshot, it is recommended to either un-tier or migrate the filesystem to a different object store bucket.
Snap-To-Object and tiering use SSDs and object stores for data storage. The WEKA system uses the same paradigm for holding SSD and object store data for both Snap-To-Object and tiering to save storage and performance resources.
You can implement this paradigm for each filesystem using one of the following use cases:
Data resides on the SSDs only, and the object store is used only for the various Snap-To-Object use cases, such as backup, archiving, and bursting: The allocated SSD capacity must be identical to the filesystem size (total capacity) for each filesystem. The drive retention period must be defined as the longest time possible (which is 60 months). The Tiering Cue must be defined using the same considerations based on IO patterns. In this case, the applications always work with a high-performance SSD storage system and use the object store only as a backup device.
Snap-To-Object on filesystems is used with active tiering between the SSDs and the object store: Objects in the object store are used to tier all data and back up using Snap-To-Object. If possible, the WEKA system uses the same object for both purposes, eliminating the unnecessary need to acquire additional storage and copy data.
Related topics
This page provides a detailed description of how data storage is managed in tiered WEKA system configurations.
This page provides an in-depth explanation for the Data Lifecycle Management overview section.
The Drive Retention Period policy defines how long to keep a copy of data on the SSD after that data has been offloaded (copied) to the object store by the Tiering Cue policy, described further below.
Consider a scenario of a 100 TB filesystem (total capacity), with 100 TB of SSD space (as explained in The role of SSDs in tiered configurations section). If the data Drive Retention Period policy is defined as 1 month and only 10 TB of data are written per month, it will probably be possible to maintain data from the last 10 months on the SSDs. On the other hand, if 200 TB of data is written per month, it will only be possible to maintain data from half of the month on the SSDs. Additionally, there is no guarantee that the data on the SSDs is the data written in the last 2 weeks of the month, which also depends on the Tiering Cue.
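The arithmetic of this scenario is simply capacity divided by write rate. A minimal sketch, using the figures from the example above:

```python
# How many months of written data the SSD tier can hold, given its capacity
# and the monthly write rate (values from the example above).
def months_on_ssd(ssd_capacity_tb, monthly_writes_tb):
    return ssd_capacity_tb / monthly_writes_tb

print(months_on_ssd(100, 10))   # 10.0 -> roughly the last 10 months fit on the SSDs
print(months_on_ssd(100, 200))  # 0.5  -> only about half a month fits
```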
To illustrate, assume the Tiering Cue (described below) is set to 1 day and the Drive Retention Period is set to 3 days. After one day, the WEKA system offloads period 0's data to the object store. Setting the Drive Retention Period to 3 days means a copy of that data remains in the WEKA cache for three days, after which it is removed from the cache. The data is not gone; it is on the object store. If an application or a user accesses that data, it is pulled back from the object store, placed back on the WEKA SSD tier, and tagged again with a new Tiering Cue period.
Consequently, the Drive Retention Period policy determines the resolution of the WEKA system's release decisions. If it is set to 1 month and the SSD capacity is sufficient for 10 months of writing, then at least the most recent month of data is kept on the SSDs.
The Tiering Cue policy defines the period of time to wait before the data is copied from the SSD and sent to the object store. It is typically used when it is expected that some of the data being written will be rewritten/modified/deleted in the short term.
The WEKA system integrates a rolling progress control with three rotating periods of 0, 1, and 2.
Period 0: All data written is tagged as written in the current period.
Period 1: The switch from 0 to 1 is according to the Tiering Cue policy.
Period 2: Starts after the period of time defined in the Tiering Cue, triggering the transfer of data written in period 0 from the SSD to the object store.
Example:
If the Tiering Cue policy is set to 1 day, all data written during the first day is tagged for Period 0. Data written during the second day is tagged for Period 1, and data written during the third day is tagged for Period 2.
As Period 0 rolls around to be next, the data marked for Period 0 is offloaded to the object store, and new data is then tagged for Period 0. When Period 1 rolls around to be next, it is time to offload the data tagged for Period 1 to the object store and so on.
One important caveat: in the above example, if none of the data is touched or modified during the time set for the Tiering Cue policy, all of the data is offloaded to the object store as planned. However, if some data in Period 0 is updated or modified, that data is pulled out of Period 0 and re-tagged with the period currently receiving writes (say, Period 2). That newly modified data is then not offloaded to the object store until Period 2's turn arrives. This is true for data modified in any of the three period cycles: it is removed from its original period and placed into the period marking the active writes.
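The rotation and re-tagging behavior described above can be sketched with a toy simulation. This is illustrative only (the class name `TieringCueSim` is invented for this sketch; WEKA's actual implementation differs):

```python
class TieringCueSim:
    """Toy model of the three rotating Tiering Cue periods."""

    def __init__(self):
        self.current = 0                              # period tagging new writes
        self.tagged = {0: set(), 1: set(), 2: set()}  # data tagged per period
        self.offloaded = set()                        # data sent to the object store

    def write(self, name):
        # New or modified data is always tagged with the current period,
        # even if it was previously tagged with an older one.
        for period in self.tagged.values():
            period.discard(name)
        self.tagged[self.current].add(name)

    def advance(self):
        # One Tiering Cue elapses: the oldest period's data is offloaded,
        # and that period starts receiving new writes.
        self.current = (self.current + 1) % 3
        self.offloaded |= self.tagged[self.current]
        self.tagged[self.current] = set()

sim = TieringCueSim()
sim.write("a")        # day 1: "a" tagged for Period 0
sim.advance()
sim.write("b")        # day 2: "b" tagged for Period 1
sim.advance()
sim.write("a")        # day 3: "a" modified -> re-tagged to Period 2
sim.advance()         # Period 0's turn: it is now empty, nothing is offloaded
print(sim.offloaded)  # set() -- the modified "a" escaped this offload cycle
```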
Since the WEKA system is a highly scalable data storage system, data storage policies in tiered WEKA configurations cannot be based on cluster-wide FIFO methodology, because clusters can contain billions of files. Instead, drive retention is managed by time-stamping every piece of data, where the timestamp is based on a resolution of intervals that may extend from minutes to weeks. The WEKA system maintains the interval in which each piece of data was created, accessed, or last modified.
Users only specify the Drive Retention Period; based on this, each interval is one-quarter of the Drive Retention Period. Data written, modified, or accessed prior to the last seven intervals is always released, even if SSD space is available.
Example:
In a WEKA system configured with a Drive Retention Period of 20 days, data is split into 7 interval groups, each spanning 5 days in this scenario (5 is 25% of 20, the Drive Retention Period).
If the system starts operating on January 1, data written, accessed, or modified between January 1-5 are classified as belonging to interval 0, data written, accessed, or modified between January 6-10 belongs to interval 1, and so on. In such a case, the 7 intervals will be timestamped and divided as follows:
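The interval classification above reduces to integer division of the data's age by the interval length. A minimal sketch (the January 1 start date is from the example; the year 2024 is an arbitrary assumption):

```python
from datetime import date

RETENTION_DAYS = 20
INTERVAL_DAYS = RETENTION_DAYS // 4   # each interval is 25% of the retention period

def interval_of(day, start=date(2024, 1, 1)):
    """Map a timestamp to its interval number by integer division."""
    return (day - start).days // INTERVAL_DAYS

print(interval_of(date(2024, 1, 3)))   # 0  (January 1-5)
print(interval_of(date(2024, 1, 6)))   # 1  (January 6-10)
print(interval_of(date(2024, 1, 31)))  # 6  (January 31-February 4)
```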
In the above scenario, there are seven data intervals on the SSDs (the last one accumulating new and modified data), and another interval is currently being released to the object store. In effect, data can be retained on the SSD for almost twice the period the user specifies, as long as there is sufficient SSD space. This provides better performance and reduces unnecessary release and promotion of data to and from the object store when data is modified.
At any given moment, the WEKA system releases the filesystem data of a single interval, transferring it from the SSD to the object store. The release process is based on data aging characteristics (as implemented through the intervals system and revolving tags). Consequently, if there is sufficient SSD capacity, only data modified or written before the last seven intervals is released. The release process also considers the amount of available SSD capacity through the Backpressure mechanism. Backpressure works against two watermarks: 90% and 95%. It kicks in when SSD utilization per filesystem crosses above 95% and stops when it crosses below 90%. Backpressure works in parallel with, and independently of, the tiering policy. If SSD utilization crosses the 95% watermark, data is released from the SSD to the object store sooner than configured.
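The watermark behavior is classic hysteresis. A minimal sketch of the rule as stated above (not WEKA's code):

```python
class Backpressure:
    """Hysteresis between the 90% and 95% SSD-utilization watermarks."""
    HIGH, LOW = 0.95, 0.90

    def __init__(self):
        self.releasing = False

    def update(self, utilization):
        if utilization > self.HIGH:
            self.releasing = True    # crossed above 95%: start releasing
        elif utilization < self.LOW:
            self.releasing = False   # dropped below 90%: stop releasing
        return self.releasing

bp = Backpressure()
print(bp.update(0.93))  # False - between watermarks, never started
print(bp.update(0.96))  # True  - crossed 95%, releasing kicks in
print(bp.update(0.92))  # True  - still above 90%, keeps releasing
print(bp.update(0.89))  # False - below 90%, releasing stops
```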
Example:
If 3 TB of data is produced every day, i.e., 15 TB of data in each interval, the division of data will be as follows:
Now consider a situation where the total capacity of the SSD is 100 TB. The situation in the example above will be as follows:
Since the resolution in the WEKA system is the interval, in the example above the SSD capacity of 100 TB is insufficient for all data written over the 35 days spanned by the seven intervals. Consequently, the oldest data (least recently written, accessed, or modified) has to be released to the object store. In this example, this release operation has to be performed in the middle of interval 6 and involves the release of data from interval 0.
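The arithmetic behind this example can be checked directly (all values taken from the example above):

```python
DAILY_WRITES_TB = 3   # 3 TB of data produced every day
INTERVAL_DAYS = 5     # 25% of the 20-day Drive Retention Period
SSD_CAPACITY_TB = 100

per_interval_tb = DAILY_WRITES_TB * INTERVAL_DAYS    # 15 TB written per interval
full_intervals = SSD_CAPACITY_TB // per_interval_tb  # 6 intervals fit fully on SSD
print(per_interval_tb, full_intervals)               # 15 6
# Seven intervals would need 105 TB > 100 TB, so the release of interval 0
# has to begin partway through interval 6.
```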
This counting of the age of the data in 5-day resolutions is performed across 8 different categories (the seven intervals on the SSDs plus the one being released). As a constantly rolling calculation, the following occurs in the example above:
Data from days 1-30 (January 1-30) will all be on the SSD. Some of it may be tiered to the object store, depending on the defined Tiering Cue.
Data older than 35 days will be released to the object store.
Data from days 31-35 (January 31-February 4) will be partially on the SSD and partially tiered to the object store. However, there is no control over the order in which data from days 31-35 is released to the object store.
Example: If no data has been accessed or modified since creation, the data from interval 0 is released and the data from intervals 1-6 remains on the SSDs. If, on the other hand, 8 TB of data is written every day, meaning that 40 TB of data is written in each interval (as shown below), then only the last two intervals (data written, accessed, or modified in the last 10 days) are kept on the SSD, while other data is released to the object store.
Now consider the following filesystem scenario, where the whole SSD storage capacity of 100 TB is utilized in the first 3 intervals:
When much more data is written and there is insufficient SSD capacity for storage, the data from interval 0 will be released when the 100 TB capacity is reached. This represents a violation of the Retention Period. In such a situation, it is also possible to either increase the SSD capacity or reduce the Retention Period.
The tiering process (tiering data from the SSDs to the object stores) is based on when data is created or modified. It is managed similarly to the Drive Retention Period, with the data timestamped in intervals. The length of each interval is the user-defined Tiering Cue. The WEKA system maintains three such intervals at any given time and always tiers the data in the third (oldest) interval. Refer to the example in the Tiering Cue policy section above for further clarity.
Example: If the Tiering Cue is 1 day, then the data will be classified according to the following timeline for a system that starts working on January 1:
Since the tiering process applies to data in the first interval in this example, the data written or modified on January 1 will be tiered to the object store on January 3. Consequently, data will never be tiered before it is at least 1 day old (which is the user-defined Tiering Cue), with the worst case being the tiering of data written at the end of January 1 at the beginning of January 3.
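The resulting timeline can be expressed as a one-liner: data becomes the oldest of the three maintained intervals two cue-lengths after the interval in which it was written. A sketch (the year 2024 is an arbitrary assumption):

```python
from datetime import date, timedelta

TIERING_CUE_DAYS = 1

def earliest_tiering_day(write_day):
    """Data is tiered once its interval is the oldest of the three kept intervals."""
    return write_day + timedelta(days=2 * TIERING_CUE_DAYS)

print(earliest_tiering_day(date(2024, 1, 1)))  # 2024-01-03
```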
If it is impossible to maintain the defined Retention Period or Tiering Cue policies, a TieredFilesystemBreakingPolicy event will occur, and old data will be released to free space on the SSDs. Users are alerted to such a situation through an ObjectStoragePossibleBottleneck event, enabling them to consider either raising the bandwidth or upgrading the object store performance.
A special mount option can be used to bypass the time-based policies. Any file created or written from a mount point with this option is marked for release as soon as possible, before other file retention policies are considered. The data extents of such files are still written to the SSD first but take precedence when releasing to the object store.
In addition, any read done through such a mount point reads the extents from the object store and does not keep them persistently on the SSD (the data still goes through the SSD but is released immediately, before any other interval).
In AWS, this mode should only be used for importing data. It should not be used for general access to the filesystem as any data read via this mount point would be immediately released from the SSD tier again. This can lead to excessive S3 charges.
Access time, often called "atime," is a file system metadata attribute that tracks the most recent instance when a file was accessed or read. This attribute is essential for monitoring and managing file usage, as it records when a file was last opened or viewed by a user or an application.
In the WEKA filesystem, the atime is updated locally on the container where the read operation took place, and this update is subsequently propagated to the cluster after the user closes the file. This update process doesn't occur immediately and may take up to 60 minutes to reflect the actual access time.
POSIX mount options that affect atime behavior, such as relatime, are supported. However, this updated atime still takes time to propagate, even if mounted with strictatime.
Directory atimes are currently not supported, therefore, listing a directory's contents does not update its atime.
The WEKA agent is software installed on user application servers that need access to the WEKA file services. When using the Stateless Client feature, the agent ensures that the correct client software version is installed (depending on the cluster version) and that the client connects to the correct cluster.
A backend server in the context of WEKA is a server equipped with SSD drives and running the WEKA software. These servers are dedicated to the WEKA system, offering services to clients. A storage cluster is formed by a group of such backend servers, collectively providing storage and processing capabilities within the WEKA infrastructure.
The WEKA client is software installed on user application servers that need access to WEKA file services. The WEKA client implements a kernel-based filesystem driver and the logic and networking stack to connect to the WEKA backend servers and be part of a cluster. In general industry terms, "client" may also refer to an NFS, SMB, or S3 client that uses those protocols to access the WEKA filesystem. For NFS, SMB, and S3, the WEKA client is not required to be installed in conjunction with those protocols.
A collection of WEKA backend servers, together with WEKA clients installed on the application servers, forming one shareable, distributed, and scalable file storage system.
WEKA uses Linux containers (LXC) as the mechanism for holding one process or keeping multiple processes together. A container can run different processes: frontend processes with their associated DPDK libraries, compute processes, drive processes, a management process, or NFS, SMB, or S3 services. A server can have multiple containers running on it at any time.
A WEKA configuration in which WEKA backend containers run on the same server with applications.
The target period of time for tiered data to be retained on an SSD.
The number of data blocks in each logical data protection group.
A WEKA configuration that dedicates complete servers and all of their allocated resources to WEKA backends, as opposed to a converged deployment.
A collection of hardware components that can fail together due to a single root cause.
A collection of filesystems that share a common tiering policy to object-store.
The collection of WEKA software that runs on a client and accesses storage services and IO from the WEKA storage cluster. The frontend consists of a process that delivers IO to the WEKA driver, a DPDK library, and the WEKA POSIX driver.
The term "host" is deprecated. See .
Frequently used data (as opposed to warm data), usually residing on SSDs.
In distributed systems, a leader is a process that assumes a special role, often responsible for coordination, synchronization, and making decisions on behalf of the cluster. The leader plays a crucial role in maintaining consistency and order among the distributed processes or nodes in the system. If the leader fails or is replaced, a new leader is typically elected to ensure the continued operation of the distributed system.
Within the context of WEKA, at the cluster's core resides the cluster leader, serving as the singular WEKA management process within the cluster. This unique role grants the cluster leader the exclusive capability to initiate and disseminate configuration changes throughout the entire cluster.
The term "machine" is deprecated. See .
Amount of space available for user data on SSDs in a configured WEKA system.
The term "node" is deprecated. See .
Object Storage. WEKA uses object storage buckets to extend the WEKA filesystem and to store uploaded file system snapshots.
POSIX (Portable Operating System Interface) is a set of standards established by the IEEE Computer Society to ensure compatibility across diverse operating systems. The WEKA client adheres to the POSIX specifications, ensuring that it interacts with the underlying operating system following the defined POSIX standard. This compliance ensures seamless interoperability and consistent behavior, making the WEKA client often referred to as the POSIX client or POSIX driver when discussing the broader storage system architecture.
A software instance that WEKA uses to run and manage the filesystem. Processes are dedicated to managing different functions such as (1) NVMe Drives and IO to the drives, (2) compute processes for filesystems and cluster-level functions and IO from clients, (3) frontend processes for POSIX client access and sending IO to the compute process and (4) management processes for managing the overall cluster.
The total capacity that is assigned to filesystems. This includes both SSD and object store capacity.
Prefetch in WEKA involves proactively promoting data from an object store to an SSD based on predictions of future data access. This process anticipates and preloads data onto faster storage, optimizing performance by ensuring that relevant information is readily available when needed.
Promoting refers to the action of moving data from a lower-tier storage, typically an object store, to a more accessible storage medium, such as an SSD, when the data is required for active use. This process aims to enhance performance by ensuring that frequently accessed or critical data is readily available on a faster storage tier.
Total SSD capacity owned by the user.
See .
The designated time duration for data to be stored on SSDs before being released from the SSDs to an object store.
Releasing, in the context of data tiering, refers to deleting the SSD copy of data that has been migrated to the object store.
A physical or virtual server that has hardware resources allocated to it and software running on it that provides compute or storage services. WEKA uses backend servers in conjunction with clients to deliver storage services. In general industry terms, in a cluster of servers, sometimes the term node is used instead.
SR-IOV (Single Root I/O Virtualization) is a technology that enables a single physical resource to be leveraged as multiple virtual resources. In essence, SR-IOV facilitates the partitioning of a single hardware component into distinct virtual functions, each operating independently. Correspondingly, the term Virtual Function (VF) aligns with SR-IOV, referring to these individualized virtualized entities. This technology is particularly valuable in optimizing resource utilization and enhancing the efficiency of virtualized environments.
Stem Mode in WEKA refers to the installed and running software that has not yet been attached to a cluster.
Snap-To-Object is a WEKA feature facilitating the uploading of snapshots to object stores.
A tiered WEKA configuration combines SSDs and object stores for data storage.
Tiering is the dynamic process of copying data from an SSD to an object store while retaining the original copy on the SSD. This optimization strategy balances performance and cost considerations by keeping frequently accessed data on the high-performance SSD and moving less accessed data to a more economical object store.
Tiering Cue refers to the minimum duration that must elapse before considering data migration from an SSD to an object store. This time threshold is crucial in the context of data tiering strategies, where the decision to move data between different storage tiers is based on factors such as access frequency, performance requirements, and cost considerations. The Tiering Cue helps establish a timeframe for evaluating whether data should be transitioned from the faster but potentially more expensive SSD storage to the object store, which may offer more cost-effective, albeit slower, storage.
Unprovisioned capacity refers to the storage space that is currently unused and available for the creation of new filesystems or data storage allocations. This term indicates the portion of storage resources that have not been assigned or allocated to any specific purpose, making it ready and waiting to be provisioned for new file systems or data storage needs.
Virtual Function (VF) in the context of WEKA typically denotes the creation of multiple virtual instances of a physical network adapter. This involves leveraging SR-IOV (Single Root I/O Virtualization) technology, where a single physical resource can be partitioned into distinct virtual functions, each capable of independent operation. In essence, both Virtual Function and SR-IOV are terms integral to WEKA's approach to optimizing resource allocation and enhancing the efficiency of virtualized network environments by enabling the creation of multiple independent virtual instances from a single physical network adapter.
Warm data is less frequently accessed or utilized data, unlike hot data, and is typically stored in an object store. This term is used to describe information that is accessed less regularly but remains relevant for specific use cases. Storing warm data on an object store allows for efficient management of data resources, providing a balance between accessibility and storage costs.
WEKA GUI application enables you to configure, administer, and monitor the WEKA system. This page provides an overview of the primary operations, access to the GUI, and system dashboard.
This page describes how to manage snapshots using the CLI.
Using the CLI, you can:
$ curl https://<token>@get.weka.io/dist/v1/release
{
"num_results" : 8,
"page" : 1,
"page_size" : 50,
"num_pages" : 1,
"objects" : [
{
"id" : "3.6.1",
"public" : true,
"final" : true,
"trunk_id" : "",
"s3_path" : "releases/3.6.1"
.
.
.
},
...
]
}
$ spec='
{
"cluster": [
{
"role": "backend",
"instance_type": "i3en.2xlarge",
"count": 10
},
{
"role": "client",
"instance_type": "r3.xlarge",
"count": 2
}
]
}
'
$ curl -X POST -H 'Content-Type: application/json' -d "$spec" https://<token>@get.weka.io/dist/v1/aws/cfn/3.6.1
{
"url" : "https://wekaio-cfn-templates-prod.s3.amazonaws.com/cjibjp7ps000001o9pncqywv6.json",
"quick_create_stack" : {
"ap-southeast-2" : "...",
...
}
}
$ spec='...' # same as above
$ curl -X POST -H 'Content-Type: application/json' -d "$spec" https://<token>@get.weka.io/dist/v1/aws/cfn/3.6.1?type=template
{"AWSTemplateFormatVersion": "2010-09-09", ...
M6idn
m6idn.xlarge, m6idn.2xlarge, m6idn.4xlarge, m6idn.8xlarge, m6idn.12xlarge, m6idn.16xlarge, m6idn.24xlarge, m6idn.32xlarge
M6in
m6in.xlarge, m6in.2xlarge, m6in.4xlarge, m6in.8xlarge, m6in.12xlarge, m6in.16xlarge, m6in.24xlarge
M7a
m7a.xlarge, m7a.2xlarge, m7a.4xlarge, m7a.8xlarge, m7a.12xlarge, m7a.16xlarge, m7a.24xlarge, m7a.32xlarge, m7a.48xlarge
C6a
c6a.2xlarge, c6a.4xlarge, c6a.8xlarge, c6a.12xlarge, c6a.16xlarge, c6a.32xlarge, c6a.48xlarge
C6in
c6in.2xlarge, c6in.4xlarge, c6in.8xlarge, c6in.12xlarge, c6in.16xlarge, c6in.24xlarge, c6in.32xlarge
C7i
c7i.2xlarge, c7i.4xlarge, c7i.8xlarge, c7i.12xlarge, c7i.16xlarge, c7i.24xlarge, c7i.48xlarge
R6idn
r6idn.xlarge, r6idn.2xlarge, r6idn.4xlarge, r6idn.8xlarge, r6idn.12xlarge, r6idn.16xlarge, r6idn.24xlarge, r6idn.32xlarge
R6in
r6in.xlarge, r6in.2xlarge, r6in.4xlarge, r6in.8xlarge, r6in.12xlarge, r6in.16xlarge, r6in.24xlarge, r6in.32xlarge
X1
x1.16xlarge, x1.32xlarge
X1e
x1e.16xlarge, x1e.32xlarge
P2
p2.xlarge, p2.8xlarge, p2.16xlarge
P3
p3.2xlarge, p3.8xlarge, p3.16xlarge
P4
p4d.24xlarge, p4de.24xlarge
P5
p5.48xlarge
Trn1
trn1.2xlarge, trn1.32xlarge, trn1n.32xlarge
G3
g3.4xlarge, g3.8xlarge, g3.16xlarge
G4
g4dn.2xlarge, g4dn.4xlarge, g4dn.8xlarge, g4dn.12xlarge, g4dn.16xlarge
G5
g5.xlarge, g5.2xlarge, g5.4xlarge, g5.8xlarge, g5.12xlarge, g5.16xlarge
HPc7a
hpc7a.2xlarge, hpc7a.48xlarge, hpc7a.96xlarge
I3
i3.xlarge, i3.2xlarge, i3.4xlarge, i3.8xlarge, i3.16xlarge
I3en
i3en.xlarge, i3en.2xlarge, i3en.3xlarge, i3en.6xlarge, i3en.12xlarge, i3en.24xlarge
Inf1
inf1.2xlarge, inf1.6xlarge, inf1.24xlarge
Inf2
inf2.xlarge, inf2.8xlarge, inf2.24xlarge, inf2.48xlarge
M5
m5.xlarge, m5.2xlarge, m5.4xlarge, m5.8xlarge, m5.12xlarge, m5.16xlarge, m5.24xlarge
M5n
m5n.xlarge, m5n.2xlarge, m5n.4xlarge, m5n.8xlarge, m5n.12xlarge, m5n.16xlarge, m5n.24xlarge, m5dn.xlarge, m5dn.2xlarge, m5dn.4xlarge, m5dn.8xlarge, m5dn.12xlarge, m5dn.16xlarge, m5dn.24xlarge
M6a
m6a.xlarge, m6a.2xlarge, m6a.4xlarge, m6a.8xlarge, m6a.12xlarge, m6a.16xlarge, m6a.24xlarge, m6a.32xlarge, m6a.48xlarge
M6i
m6i.xlarge, m6i.2xlarge, m6i.4xlarge, m6i.8xlarge, m6i.12xlarge, m6i.16xlarge, m6i.24xlarge, m6i.32xlarge
M6id
m6id.xlarge, m6id.2xlarge, m6id.4xlarge, m6id.8xlarge, m6id.12xlarge, m6id.16xlarge, m6id.24xlarge, m6id.32xlarge
M6idn
m6idn.xlarge, m6idn.2xlarge, m6idn.4xlarge, m6idn.8xlarge, m6idn.12xlarge, m6idn.16xlarge, m6idn.24xlarge, m6idn.32xlarge
P2
p2.xlarge, p2.8xlarge, p2.16xlarge
P3
p3.2xlarge, p3.8xlarge, p3.16xlarge
P4
p4d.24xlarge, p4de.24xlarge
R5
r5.xlarge, r5.2xlarge, r5.4xlarge, r5.8xlarge, r5.12xlarge, r5.16xlarge, r5.24xlarge
R5n
r5n.xlarge, r5n.2xlarge, r5n.4xlarge, r5n.8xlarge, r5n.12xlarge, r5n.16xlarge, r5n.24xlarge
R6a
r6a.xlarge, r6a.2xlarge, r6a.4xlarge, r6a.8xlarge, r6a.12xlarge, r6a.16xlarge, r6a.32xlarge, r6a.48xlarge
R6i
r6i.xlarge, r6i.2xlarge, r6i.4xlarge, r6i.8xlarge, r6i.12xlarge, r6i.16xlarge, r6i.24xlarge, r6i.32xlarge
R6id
r6id.xlarge, r6id.2xlarge, r6id.4xlarge, r6id.8xlarge, r6id.12xlarge, r6id.16xlarge, r6id.24xlarge, r6id.32xlarge
R6idn
r6idn.xlarge, r6idn.2xlarge, r6idn.4xlarge, r6idn.8xlarge, r6idn.12xlarge, r6idn.16xlarge, r6idn.24xlarge, r6idn.32xlarge
R6in
r6in.xlarge, r6in.2xlarge, r6in.4xlarge, r6in.8xlarge, r6in.12xlarge, r6in.16xlarge, r6in.24xlarge, r6in.32xlarge
X1
x1.16xlarge, x1.32xlarge
X1e
x1e.16xlarge, x1e.32xlarge
grace
Specify the grace period before the soft limit is treated as a hard limit.
Format: 1d, 1w, unlimited
unlimited
owner
An opaque string identifying the directory owner (can be a name, email, Slack ID, and so on). This owner is shown in the quota report and can be notified upon exceeding the quota. Supports up to 48 characters.
under
A path to a directory under a wekafs mount. The relevant filesystem must be mounted in the server running the query.
over
Shows only quotas over this percentage of usage.
Possible values: 0-100
quick
Do not resolve inode to a path. Provides quicker results if the report contains many entries.
False
all
Shows all the quotas, not just the exceeding ones.
False
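The grace parameter above uses a compact duration syntax (1d, 1w, unlimited). As a rough illustration, a hypothetical helper (not part of the WEKA CLI) could convert such values to seconds:

```shell
# Hypothetical helper: convert a grace value such as "1d" or "1w"
# to seconds, passing "unlimited" through unchanged.
grace_to_seconds() {
  case $1 in
    unlimited) echo unlimited ;;
    *d) echo $(( ${1%d} * 86400 )) ;;   # days
    *w) echo $(( ${1%w} * 604800 )) ;;  # weeks
    *) echo "unsupported format: $1" >&2; return 1 ;;
  esac
}

grace_to_seconds 1d  # prints 86400
grace_to_seconds 1w  # prints 604800
```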
ssd-capacity
For tiered filesystems, this is the SSD capacity. If not specified, the filesystem is pinned to SSD.
To set a thin provisioned filesystem, the thin-provision-min-ssd attribute must be used instead.
SSD capacity is set to total capacity
thin-provision-min-ssd
For thin-provisioned filesystems, this is the minimum SSD capacity that is ensured to be always available to this filesystem. Must be set when defining a thin-provisioned filesystem. Minimum value: 1GiB.
thin-provision-max-ssd
For thin-provisioned filesystem, this is the maximum SSD capacity the filesystem can consume.
The value cannot exceed the total-capacity.
max-files
Metadata allocation for this filesystem. Automatically calculated by the system based on the SSD capacity.
encrypted
Encryption of filesystem
No
obs-name*
Object store name for tiering. Mandatory for tiered filesystems.
auth-required
Determines if mounting the filesystem requires being authenticated to WEKA (see User management).
No
data-reduction
Enable data reduction. The filesystem must be non-tiered and thin-provisioned. A license with data reduction is required.
No
thin-provision-max-ssd
For thin-provisioned filesystems, this is the maximum SSD capacity the filesystem can consume.
The value must not exceed the total-capacity.
max-files
Metadata limit for the filesystem.
auth-required
Determines if mounting the filesystem requires being authenticated to WEKA (weka user login).
Possible values: yes or no.

none: WEKA never uses Cgroups, even if it runs on an OS with CgroupsV1.
cgroups_mode=force_v2
cgroups_mode=none
Restart the WEKA agent service.
Verify the Cgroups setting by running the weka local status command.
backend-hostname*
An existing hostname (IP or FQDN) of one of the existing backend instances in the cluster.
client-hostname*
A unique hostname (IP or FQDN) of the client to add.
Name
Value
container-id*
A valid identifier of the container to add to the cluster.
cores*
The number of physical cores to allocate to the WEKA client.
frontend-dedicated-cores*
The number of physical cores to be dedicated to frontend processes.
Mandatory to configure a container as a client.
Maximum 19 cores.
For clients, the number of total cores and frontend-dedicated-cores must be equal.
container-id*
A valid identifier of the container to add to the cluster.
device*
A valid network interface device name (for example, eth1).
ips*
A valid IP address of the new interface.
gateway
The IP address of the default routing gateway.
The gateway must reside within the same IP network as ips (as described by netmask).
Not relevant for IB / L2 non-routable networks.
netmask
The number of bits that identify the network ID (the CIDR prefix length). For example, the netmask 255.255.0.0 has 16 netmask bits.
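To illustrate the netmask-to-bits relationship described above, the following shell sketch (a hypothetical helper, not part of the WEKA CLI) converts a dotted-quad netmask to the bit count expected by the --netmask option:

```shell
# Hypothetical helper: count the set bits in a dotted-quad netmask.
# For a valid contiguous mask this equals the CIDR prefix length.
netmask_bits() {
  local IFS=. bits=0 octet
  for octet in $1; do
    while [ "$octet" -gt 0 ]; do
      bits=$((bits + (octet & 1)))
      octet=$((octet >> 1))
    done
  done
  echo "$bits"
}

netmask_bits 255.255.0.0    # prints 16
netmask_bits 255.255.255.0  # prints 24
```

This only counts bits, so it assumes the mask is contiguous (as all valid netmasks are).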
container-id*
A comma-separated string of valid identifiers of the containers to add to the cluster.
force
A boolean indicating whether to skip the confirmation prompt. The default is to prompt for confirmation.
curl http://Backend-1:14000/dist/v1/install | WEKA_CGROUPS_MODE=none sh
[root@weka-cluster] # weka local status
Weka v4.2.0 (CLI build 4.2.0)
Cgroups: mode=auto, enabled=true
Containers: 1/1 running (1 weka)
Nodes: 2/2 running (2 READY)
Mounts: 1
# Agent Installation (one time)
curl http://Backend-1:14000/dist/v1/install | sh
# Creating a mount point (one time)
mkdir -p /mnt/weka
# Mounting a filesystem (DPDK mount example):
mount -t wekafs -o net=eth1 backend-1/my_fs /mnt/weka
weka -H <backend-hostname> cluster container add <client-hostname>
weka cluster container cores <container-id> <cores> --frontend-dedicated-cores=<frontend-dedicated-cores>
weka cluster container net add <container-id> <device> --ips=<ips> --netmask=<netmask> --gateway=<gateway>
weka cluster container apply <container-id> [--force]
Hot spare: Define the required hot spare count described in Hot Spare.
Planned hot spare.
Operating System
The maximum between 8 GB and 2% from the total RAM
Additional protocols (NFS/SMB/S3)
16 GB
RDMA
2 GB
Metadata (pointers)
20 Bytes x # Metadata units per server.
Total net capacity: 725,000 GB
NFS/SMB services
RDMA
Average file size: 1 MB (potentially up to 755 million files for all servers; ~47 million files per server)
Additional protocols = 16 GB
RDMA = 2 GB
Metadata: 20 Bytes x 47 million files x 2 units = ~1.9 GB
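The metadata figure in the example above can be checked with simple shell arithmetic (a back-of-the-envelope sketch; the input values are the example's assumptions, not fixed system constants):

```shell
# 20 bytes per metadata unit, ~47 million files per server,
# 2 metadata units per file (for a 1 MB average file size).
# awk is used for the floating-point conversion to GB.
awk 'BEGIN { printf "%.1f GB\n", 20 * 47e6 * 2 / 1e9 }'
# prints "1.9 GB"
```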
Allocate enough cores to support performance targets.
Generally, use 1 drive process per SSD for up to 6 SSDs and 1 drive process per 2 SSDs for more, with a ratio of 2 compute processes per SSD process.
For finer tuning, please contact the Customer Success Team.
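The sizing rule above can be sketched as a small shell function (an illustration of the stated rule only; actual allocations should be validated with the Customer Success Team):

```shell
# Given the SSD count, derive suggested drive and compute process counts:
# 1 drive process per SSD for up to 6 SSDs, 1 per 2 SSDs (rounded up)
# beyond that, and 2 compute processes per drive process.
suggest_processes() {
  local ssds=$1 drive compute
  if [ "$ssds" -le 6 ]; then
    drive=$ssds
  else
    drive=$(( (ssds + 1) / 2 ))
  fi
  compute=$(( drive * 2 ))
  echo "$drive $compute"
}

suggest_processes 6   # prints "6 12"
suggest_processes 10  # prints "5 10"
```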
Allocate enough memory to match core allocation, as discussed above.
Running other applications on the same server (converged WEKA system deployment) is supported. For details, contact the Customer Success Team.
Fixed
2.8 GB
Frontend processes
2.2 GB x # of Frontend processes
Compute processes
3.9 GB x # of Compute processes
Drive processes
2 GB x # of Drive processes
SSD capacity management
(Total SSD Raw Capacity / Number of Servers / 2,000) + (Number of Cores x 3 GB)
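Putting the per-process table and the SSD capacity management term together, a rough per-server memory estimate can be computed as below. This is a sketch of the formulas as listed (example input values are assumptions; real sizing should go through the Customer Success Team):

```shell
# Per-server memory (GB) = 2.8 fixed + 2.2 x frontend + 3.9 x compute
#   + 2 x drive + (total raw SSD GB / servers / 2000) + cores x 3.
# awk handles the floating-point terms.
weka_server_memory_gb() {
  # $1=frontend procs  $2=compute procs  $3=drive procs
  # $4=total raw SSD capacity (GB)  $5=number of servers  $6=cores
  awk -v fe="$1" -v co="$2" -v dr="$3" -v ssd="$4" -v srv="$5" -v cores="$6" \
    'BEGIN { printf "%.1f\n", 2.8 + 2.2*fe + 3.9*co + 2*dr + ssd/srv/2000 + cores*3 }'
}

# Example: 2 frontend, 12 compute, 6 drive processes,
# 960,000 GB raw SSD across 8 servers, 20 cores per server.
weka_server_memory_gb 2 12 6 960000 8 20   # prints 186.0
```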
Configure the cluster, such as data availability, license, security, and central monitoring.
Configure the backend containers and expose the data in different protocols.
Manage local users and set up the user directory.
Create and manage organizations and their quotas.
Management:
Manage the filesystems, including tiering, thin provisioning, and encryption.
Manage snapshots.
Manage the object store buckets.
Manage the filesystem protocols: SMB, S3, and NFS.
Manage directory quotas.
Investigation:
Investigate events.
Investigate overtime statistics, such as total operations, R/W throughput, CPU usage, and read or write latency.
Monitoring:
View the cluster protection and availability.
View the R/W throughput.
View the backend and client top consumers.
View alarms.
View the used, provisioned, and total capacity.
View the frontend, compute, and drive cores usage.
View the hardware components (active/total).
The WEKA GUI is a web application. To configure, administer, or view the WEKA system, sign in with an already configured account that has the appropriate rights.
You can access the WEKA GUI with any standard browser using the address:
https://<weka system or server name>:14000
For example: https://WekaProd:14000 or https://weka01:14000.
Before you begin
Make sure that port 14000 is open in the firewall of your organization.
Procedure
In your browser, go to https://<weka system or server name>:14000.
The sign-in page opens.
Sign in with the username and password of an account with cluster administration or organization administration privileges. For details about the account types, see User management in the related topics.
The system dashboard opens.
Related topics
The system dashboard contains widgets that provide an overview of the WEKA system, including an overall status, R/W throughput, top consumers, alerts, capacity, core usage, and hardware.
The system dashboard opens by default when you sign in. If you select another menu and want to display the dashboard again, select Monitor > System Dashboard, or click the WEKA logo.
This widget shows the overall status of the system's health and protection state.
The overall status widget includes the following indications:
Protection state: The possible protection states include:
OK: The system operates properly.
UNKNOWN: The protection state is unknown.
UNINITIALIZED: The system must complete the cluster configuration and run the first IOs.
REBUILDING: When a failure occurs, the data rebuild process reads all the stripes where the failure occurred, rebuilds the data, and returns the system to full protection.
PARTIALLY_PROTECTED: Some or all of the data is not fully protected. The reported number of protections indicates the cluster's failure resilience.
UNPROTECTED: The data is not protected against any failure.
UNAVAILABLE: Too many concurrent failures have occurred in the system, which can cause system unavailability.
REDISTRIBUTING: The system redistributes the data between servers and drives due to scale-up or scale-down.
Service Uptime: The time that has elapsed since the I/O services started.
Data Protection: The number of data drives and protection parity drives. The color of the protection parity drives indicates their status.
Virtual (Hot) Spares: The number of failure domains the system can lose and still complete the data rebuild while maintaining the same net capacity.
This widget shows the current performance statistics aggregated across the cluster.
The R/W Throughput widget includes the following indications:
Throughput: The total throughput.
Total Ops: The number of cluster operations.
Latency: The average latency of R/W operations.
Active clients: The number of clients connected to the cluster.
This widget shows the top 5 backend servers and clients in the system. You can sort the list of servers by total IO operations per second or total throughput.
This widget shows the alerts that are not muted.
This widget shows an overview of the managed capacity.
The top bar indicates the total capacity provisioned for all filesystems and the used capacity. For tiered filesystems, the total capacity also includes the Object Store part.
The bottom bar indicates the total SSD capacity available in the system, the provisioned capacity, and the used capacity.
This widget shows the average usage and the maximum load level of the Frontend, Compute, and Drive cores. Hovering over the maximum value displays the most active server and its NodeID.
This widget shows an overview of the hardware components (active/total).
The hardware components include:
Backends: The number of backend servers.
Cores: The number of cores configured for running processes in the backend servers.
Drives: The number of drives.
OBS Buckets: The number of the object store buckets.
Timestamps in events and statistics are logged internally in UTC. The WEKA GUI displays the timestamps in local or system time, and you can switch between the two.
Switching the display time may be required when the customer, WEKA Support, and the WEKA system are in different time zones. In this situation, the customer and WEKA Support can switch the display to system time instead of local time, so both view identical timestamps.
Procedure
On the top bar, point to the timestamp.
Depending on the displayed time, select Switch to System Time or Switch to Local Time.
You can switch the GUI between light and dark modes according to your preferences. Dark mode displays light text on a dark background. It is beneficial for viewing screens at night, and the reduced brightness can ease eye strain in low-light conditions.
Procedure
Depending on the current display mode, point to the sun or moon symbol on the top bar.
Select Switch to light mode or Switch to dark mode.
You can switch the view of the servers to 3D for the backend servers, NFS servers, S3 servers, and SMB servers.
The 3D view provides the server components' status at a glance, including the drives, cores, protocols, and load. The colors indicate, for example, if the drives or processes failed or the container is down.
When managing filesystems, snapshots, and object stores, the displayed tables have two behaviors in common.
The table title also specifies the table's number of rows and the maximum number of rows the table can display.
You can customize the columns displayed on the table using the column selector.
Command: weka fs snapshot create
Use the following command line to create a snapshot:
weka fs snapshot create <file-system> <name> [--access-point access-point] [--source-snap=<source-snap>] [--is-writable]
Parameters
Name
Type
Value
Limitations
Mandatory
Default
file-system
String
A valid filesystem identifier
Must be a valid name
Yes
name
String
Command: weka fs snapshot delete
Use the following command line to delete a snapshot:
weka fs snapshot delete <file-system> <name>
Parameters
Name
Type
Value
Limitations
Mandatory
Default
file-system
String
A valid filesystem identifier
Must be a valid name
Yes
name
String
A snapshot deletion cannot run in parallel with a snapshot upload to the same filesystem. Because uploading a snapshot to a remote object store can take a while, it is advisable to delete the desired snapshots before uploading to the remote object store.
This becomes more important when uploading snapshots to both local and remote object stores. While local and remote uploads can progress in parallel, consider a case where a remote upload is in progress, a snapshot is then deleted, and later a snapshot is uploaded to the local object store. In this scenario, the local snapshot upload waits for the pending deletion of the snapshot, which happens only once the remote snapshot upload is done.
Commands: weka fs restore or weka fs snapshot copy
Use the following command line to restore a filesystem from a snapshot:
weka fs restore <file-system> <source-name> [--preserved-overwritten-snapshot-name=preserved-overwritten-snapshot-name] [--preserved-overwritten-snapshot-access-point=preserved-overwritten-snapshot-access-point]
Use the following command line to restore a snapshot to another snapshot:
weka fs snapshot copy <file-system> <source-name> <destination-name> [--preserved-overwritten-snapshot-name=preserved-overwritten-snapshot-name] [--preserved-overwritten-snapshot-access-point=preserved-overwritten-snapshot-access-point]
Parameters
file-system*
A valid filesystem identifier
source-name*
Unique name for the source of the snapshot
destination-name*
Destination name to which the existing snapshot is copied.
When restoring a filesystem from a snapshot (or copying over an existing snapshot), the filesystem data and metadata are changed. If you do not specify the preserved-overwritten-snapshot-name parameter, ensure IOs to the filesystem are stopped during this time.
Command: weka fs snapshot update
This command changes the snapshot attributes. Use the following command line to update an existing snapshot:
weka fs snapshot update <file-system> <name> [--new-name=<new-name>] [--access-point=<access-point>]
Parameters
file-system*
A valid filesystem identifier
name*
Unique name for the updated snapshot
new-name
New name for the updated snapshot
access-point
Name of a directory for the snapshot that serves as the access point for the snapshot
The .snapshots directory is located in the root directory of each mounted filesystem. It is not displayed with the ls -la command. You can access this directory using the cd .snapshots command from the root directory.
The following example shows a filesystem named default mounted to /mnt/weka.
To confirm you are in the root directory of the mounted filesystem, change into the .snapshots directory, and then display any snapshots in that directory:
This page provides an overview for WEKA CLI, including the top-level commands, command hierarchy, how to connect to another server, auto-completion, and how to check the status of the cluster.
The WEKA CLI is installed on each WEKA server and is available through the weka command. You can connect to any of the servers using ssh and run the weka command. Running this command displays a list of all available top-level commands:
The options that are common to many commands include:
Most WEKA system top-level commands are the default list command for their own collection. Additional sub-commands may be available under them.
Example: The weka fs command displays a list of all filesystems and is also the top-level command for all filesystems, filesystem groups, and snapshot-related operations. It is possible to use the -h/--help flags or the help command to display a list of available commands at each level, as shown below:
Most WEKA system commands deliver the same result on all cluster servers. However, it is sometimes necessary to execute a command on a specific server. This is performed using the -H/--hostname option and specifying the hostname or IP address of the target server.
In bash, you can use auto-completion for CLI commands and parameters. The auto-completion script is installed automatically.
To disable the auto-completion script, run weka agent autocomplete uninstall
To (re-)install the script on a server, run weka agent autocomplete install and re-enter your shell session.
You can also use weka agent autocomplete export to get the bash completion script and write it to any desired location.
The weka status command displays the overall status of the WEKA system.
Example 1: status of a healthy system
Example 2: status of a system with one backend failure (DEGRADED)
Example 3: status of a system with partial capacity allocation (unprovisioned capacity)
Example 4: status of a system with unavailable capacity due to two failed drives
This page describes how to view and manage object stores using the GUI.
Using the GUI, you can perform the following actions:
Object store buckets can reside in different physical object stores. To achieve good QoS between the buckets, WEKA requires mapping the buckets to the physical object store.
You can edit the default local and remote object stores to meet your connection demands. When you add an object store bucket, you apply the relevant object store on the bucket.
Editing the default object store provides you with the following additional advantages:
Set restrictions on downloads from a remote object store. For example, in on-premises systems where the remote bucket is in the cloud, you can set a very low download bandwidth to reduce cost.
Ease of adding new buckets. You can set the connection parameters on the object store level and, if not specified differently, automatically use the default settings for the buckets you add.
Procedure
From the menu, select Manage > Object Stores.
On the left, select the pencil icon near the default object store you want to edit.
On the Edit Object Store dialog, select the type of object store and update the relevant parameters. Select one of the following tabs according to the object store type you choose. For details, see the parameter descriptions in the topic.
In AWS, it is not mandatory to set the Access Key and Secret Key in the Edit Object Store dialog. Access from the WEKA EC2 instances to the object store is granted by the IAM roles assigned to the instances.
If you select Enable AssumeRole API, set also the Role ARN and Role Session Name. For details, see the topic.
In GCP, it is not mandatory to set the Access Key and Secret Key in the Edit Object Store dialog. Google Cloud Storage is accessed using a service account attached to each Compute Engine instance running the WEKA software, provided the service account has the required permissions granted by the IAM role (storage.admin for creating buckets; storage.objectAdmin for using an existing bucket).
Add object store buckets to be used for tiering or snapshots.
Procedure
From the menu, select Manage > Object Stores.
Select the +Create button.
In the Create Object Store Bucket dialog, set the following:
Name: Enter a meaningful name for the bucket.
Object Store: Select the location of the object store. For tiering and snapshots, select the local object store. For snapshots only, select the remote object store.
WEKA supports the following options for creating AWS S3 buckets:
AWS S3 bucket creation for WEKA cluster on EC2.
AWS S3 bucket creation for WEKA cluster not on EC2 using STS.
Set the following:
Optional: If your deployment requires a specific upload and download configuration, select Advanced, and set the parameters:
Download Bandwidth: Object store download bandwidth limitation per core (Mbps).
Upload Bandwidth: Object store upload bandwidth limitation per core (Mbps).
To validate the connection to the object store bucket, select Validate.
Select Create.
The object store buckets are displayed on the Object Stores page. Each object store indicates the status, bucket name, protocol (HTTP/HTTPS), port, region, object store location (local or remote), authentication method, and error information (if it exists).
Procedure
From the menu, select Manage > Object Stores.
The following example shows two object store buckets.
You can modify the object store bucket parameters according to your demand changes.
Procedure
From the menu, select Manage > Object Stores.
Select the three dots on the right of the object store you want to modify and select Edit.
In the Edit Object Store Bucket dialog, modify the details, and select Update.
For active object store buckets connected to filesystems, the system tracks this activity and provides details about each operation on the Bucket Operations page.
The details include the operation type (download or upload), start time, execution time, previous attempts results, cURL errors, and more.
Procedure
From the menu, select Manage > Object Stores.
Select the three dots on the right of the object store bucket whose recent operations you want to view, and select Show Recent Operations.
The recent operations page for the selected object store bucket appears. To focus on specific operations, you can sort the columns and use the filters that appear on the top of the columns.
You can delete an object store bucket if it is no longer required. The data in the object store remains intact.
Procedure
From the menu, select Manage > Object Stores.
Select the three dots on the right of the object store bucket you want to delete, and select Remove.
To confirm the object store bucket deletion, select Yes.
This page describes how to work with the WEKA Self-Service Portal when installing the WEKA system in AWS.
The WEKA Self-Service Portal is a planning tool for WEKA clusters to meet storage requirements when installing in AWS.
You can start by simply entering the required capacity, then configure advanced parameters such as required performance, and even provision a multi-AZ cluster for added reliability.
Each configuration can be immediately deployed as a CloudFormation stack by redirecting to the AWS console.
Once the cluster is deployed:
Refer to the Getting Started with WEKA section.
Refer to the relevant topic to quickly get familiar with creating, mounting, and writing to a WEKA filesystem.
The Self-Service Portal is available at . Its main screen is divided into two panes: the left pane, which is used for input requirements, and the right pane which displays possible configurations for the defined requirements.
As shown in the screen above, configuration options include the total capacity, the desired deployment model, and additional performance requirements. For more information about deployment types, refer to .
Once the configuration to be deployed has been found, click the Deploy to AWS button next to the desired configuration. At this point, it is possible to specify additional options for the deployment, such as adding client instances or selecting the WEKA system version to be deployed.
Once everything is ready to deploy the cluster, click the Deploy to AWS button. This will display the AWS CloudFormation screen with a template containing the configured cluster.
After clicking the Deploy to AWS button, the AWS CloudFormation screen is displayed, requiring the creation of stacks.
In the Create Stack screen, define the parameters which are specific to your AWS account.
Define the parameters for WEKA cluster configuration:
Define the following optional parameters if tiering to S3 is desired:
Once all required parameters have been filled, make sure to check the "I acknowledge that AWS CloudFormation might create IAM resources" checkbox at the bottom and click the Create Stack button:
When deploying in a private network, without a NAT (using a WEKA proxy or a custom proxy), some resources should be created (once) per VPC (such as the WEKA VPC endpoint, S3 gateway, or EC2 endpoint).
Copy the link under the Network Topology parameter, and run it in a new browser tab. The AWS CloudFormation screen is displayed, requiring the creation of the prerequisites stack.
In the Create Stack screen, define the parameters specific to your AWS account.
The cluster deployment process takes about 10 minutes. During this time, the following occurs:
The AWS resources required for the cluster are provisioned.
The WEKA system is installed on each instance provisioned for the cluster.
A cluster is created using all backend instances.
All client instances are created.
Once the deployment is complete, the CloudFormation stack status is updated to CREATE_COMPLETE. At this point, you can access the WEKA system cluster GUI by going to the Outputs tab of the CloudFormation stack and clicking the GUI link (or by browsing to http://<backend server name or IP address>:14000).
Related topics
This is a quick reference guide using the CLI to perform the first IO in the WEKA filesystem.
Once the system is installed and you are familiar with the CLI and GUI, you can connect to one of the servers and try it out.
To perform a sanity check that the WEKA cluster is configured and IOs can be performed on it, do the following procedures:
.
To validate that the WEKA cluster and IT environment are best configured to benefit from the WEKA filesystem, do the following procedure:
.
A filesystem must reside in a filesystem group. Create a filesystem group:
2. Create a filesystem within that group:
For more information about filesystems and filesystem groups, see .
To mount a filesystem, create a mount point and call the mount command:
2. Check that the filesystem is mounted:
For more information about mounting filesystems and mount options, refer to .
Write data to the filesystem:
This completes the sanity check that the WEKA cluster is configured and IOs can be performed on it.
To ensure that the WEKA cluster and the IT environment are well configured, more complex IO patterns and benchmark tests should be conducted using the FIO utility.
Although results can vary with different servers and networking, they are not expected to differ greatly from what we and many other customers have achieved. A properly configured WEKA cluster and IT environment should yield results similar to those described in the WEKA performance tests section.
Related topic
This page describes how to view and manage filesystems using the GUI.
Using the GUI, you can perform the following actions:
[root@ip-172-31-23-177 weka]# pwd
/mnt/weka
[root@ip-172-31-23-177 weka]# ls -la
total 0
drwxrwxr-x 1 root root 0 Sep 19 04:56 .
drwxr-xr-x 4 root root 33 Sep 20 06:48 ..
drwx------ 1 user1 user1 0 Sep 20 09:26 user1
[root@ip-172-31-23-177 weka]# cd .snapshots
[root@ip-172-31-23-177 .snapshots]# ls -l
total 0
drwxrwxr-x 1 root root 0 Sep 21 02:44 @GMT-2023.09.21-02.44.38
[root@ip-172-31-23-177 .snapshots]#
weka agent
|--install-agent
|--uninstall
|--autocomplete
|--install
|--uninstall
|--export
weka alerts
|--types
|--mute
|--unmute
|--describe
Unique name for filesystem snapshot
Must be a valid name
Yes
access-point
String
Name of the newly-created directory for filesystem-level snapshots, which serves as the access point for the snapshots
Must be a valid name
No
Controlled by weka fs snapshot access-point-naming-convention update <date/name>. By default, it uses the <date> format @GMT_%Y.%m.%d-%H.%M.%S, which is compatible with the Windows Previous Versions format for SMB.
source-snap
String
The snapshot name of the specified filesystem.
Must be an existing snapshot
Must be a valid name
No
is-writable
Boolean
Sets the created snapshot to be writable
No
False
Unique name for filesystem snapshot
Must be a valid name
Yes
preserved-overwritten-snapshot-name
A new name for the overwritten snapshot to preserve, allowing IO operations to the filesystem to continue. If not specified, the original snapshot or active filesystem is overwritten, and IO operations to an existing filesystem might fail.
preserved-overwritten-snapshot-access-point
A directory that serves as the access point for the preserved overwritten snapshot.
If the preserved-overwritten-snapshot-name parameter is specified, but the preserved-overwritten-snapshot-access-point parameter is not, it is created automatically based on the snapshot name.
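As an aside, the Windows-compatible <date> access-point convention can be reproduced with standard tools. A hypothetical illustration follows; note that the `.snapshots` listing shown earlier uses a `-` separator after `@GMT` (as in `@GMT-2023.09.21-02.44.38`), and this sketch follows that listing:

```shell
# Render the current UTC time in the Windows Previous Versions style
# seen in the .snapshots directory listing (illustration only).
access_point=$(date -u '+@GMT-%Y.%m.%d-%H.%M.%S')
echo "$access_point"
```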
-o|--output
Specifies the columns to include in the output.
-s|--sort
Specifies the order to sort the output. May include a '+' or '-' before the column name to sort by ascending or descending order.
-F|--filter
Specifies the filter values for a member (without forcing it to be in the output).
--no-header
Indicates that the column header should not be shown when printing the output.
-C|--CONNECT-TIMEOUT
Modifies the default timeout used for connecting to the system via the JRPC protocol.
-T|--TIMEOUT
Modifies the default timeout for which the commands wait for a response before giving up.
-J|--json
Prints the raw JSON value returned by the cluster.
-H|--hostname
Directs the CLI to communicate with the cluster through the specified hostname or IP.
--raw-units
Sets the units such as capacity and bytes to be printed in their raw format, as returned by the cluster.
--UTC
Sets the timestamps to be printed in UTC timezone, instead of the local time of the server running the CLI command.
-f|--format
Specifies the format to output the result (view, csv, markdown, or JSON).
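These output-shaping flags compose. For example, listing only filesystem names and available SSD capacity, sorted by descending capacity and filtered to one group, might look like the following. This is a print-only sketch — running it requires a live cluster, and the `availableSSD` and `group` identifiers are taken from the column list shown in the `weka fs -h` output below:

```shell
# Print-only: compose a query combining -o (columns), -s (sort), and -F (filter).
# Executing it requires a live WEKA cluster, so the command is echoed instead.
cmd='weka fs -o name,availableSSD -s -availableSSD -F group=default --no-header'
echo "$cmd"
```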
$ weka -h
Usage:
weka [--help] [--build] [--version] [--legal]
Description:
The base command for all weka-related CLIs
Subcommands:
agent Commands that control the weka agent (outside the weka containers)
alerts List alerts in the Weka cluster
cloud Cloud commands. List the cluster's cloud status, if no subcommand supplied.
cluster Commands that manage the cluster
diags Diagnostics commands to help understand the status of the cluster and its environment
events List all events that conform to the filter criteria
fs List filesystems defined in this Weka cluster
local Commands that control weka and its containers on the local machine
mount Mounts a wekafs filesystem. This is the helper utility installed at /sbin/mount.wekafs.
nfs Commands that manage client-groups, permissions and interface-groups
org List organizations defined in the Weka cluster
security Security commands.
smb Commands that manage Weka's SMB container
stats List all statistics that conform to the filter criteria
status Get an overall status of the Weka cluster
umount Unmounts wekafs filesystems. This is the helper utility installed at /sbin/umount.wekafs.
user List users defined in the Weka cluster
version When run without arguments, lists the versions available on this machine. Subcommands allow for
downloading of versions, setting the current version and other actions to manage versions.
s3 Commands that manage Weka's S3 container
Options:
--agent Start the agent service
-h, --help Show help message
--build Prints the CLI build number and exits
-v, --version Prints the CLI version and exits
--legal Prints software license information and exits
$ weka fs
| FileSystem | Name | Group | SSD Bu | Total | Is re | Is creat | Is remov
| ID | | | dget | Budget | ady | ing | ing
+------------+---------+---------+--------+--------+-------+----------+----------
| FSId: 0 | default | default | 57 GiB | 57 GiB | True | False | False
$ weka fs -h
Usage:
weka fs [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--capacities]
[--force-fresh]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
Description:
List filesystems defined in this Weka cluster
Subcommands:
create Create a filesystem
download Download a filesystem from object store
update Update a filesystem
delete Delete a filesystem
restore Restore filesystem content from a snapshot
quota Commands used to control directory quotas
group List filesystem groups
snapshot List snapshots
tier Show object store connectivity for each node in the cluster
reserve Thin provisioning reserve for organizations
Options:
--name Filesystem name
-H, --HOST Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w,
infinite/unlimited)
-T, --TIMEOUT Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w,
infinite/unlimited)
-f, --format Specify in what format to output the result. Available options are:
view|csv|markdown|json|oldview (format: 'view', 'csv', 'markdown', 'json' or 'oldview')
-o, --output Specify which columns to output. May include any of the following:
uid,id,name,group,usedSSD,usedSSDD,usedSSDM,freeSSD,availableSSDM,availableSSD,usedTotal,usedTotalD,freeTotal,availableTotal,maxFiles,status,encrypted,stores,auth,thinProvisioned,thinProvisioningMinSSDBugdet,thinProvisioningMaxSSDBugdet,usedSSDWD,usedSSDRD
-s, --sort Specify which column(s) to take into account when sorting the output. May include a '+' or
'-' before the column name to sort in ascending or descending order respectively. Usage:
[+|-]column1[,[+|-]column2[,..]]
-F, --filter Specify what values to filter by in a specific column. Usage:
column1=val1[,column2=val2[,..]]
--capacities Display all capacity columns
--force-fresh Refresh the capacities to make sure they are most updated
-h, --help Show help message
-R, --raw-units Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in
human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC Print times in UTC. When not set, times are converted to the local time of this host.
--no-header Don't show column headers when printing the output
-v, --verbose Show all columns in output
$ weka status
WekaIO v4.2.9 (CLI build 4.2.9)
cluster: DataSphere (554d62b9-ab40-4f59-bee6-ccc326bae2df)
status: OK (18 backend containers UP, 12 drives UP)
protection: 3+2 (Fully protected)
hot spare: 1 failure domains (2.45 TiB)
drive storage: 12.27 TiB total
cloud: connected
license: OK, valid thru 2024-10-20T06:23:01Z
io status: STARTED 1 hour ago (18 io-nodes UP, 162 Buckets UP)
link layer: Ethernet
clients: 0 connected
reads: 0 B/s (0 IO/s)
writes: 512 B/s (60 IO/s)
operations: 9 ops/s
alerts: none
$ weka status
WekaIO v4.2.9 (CLI build 4.2.9)
cluster: WekaProd (b231e060-c5c1-421d-a68d-1dfa94ff149b)
status: DEGRADED (7 backends UP, 42 drives UP)
protection: 6+2
hot spare: 1 failure domains (1.23 TiB)
drive storage: 82.94 TiB total
cloud: connected
license: OK, valid thru 2024-5-20T06:20:01Z
io status: STARTED 2 hours (8 io-nodes UP, 80 Buckets UP)
Rebuild in progress (3%)
link layer: Ethernet
clients: 0 connected
reads: 0 B/s (0 IO/s)
writes: 0 B/s (0 IO/s)
operations: 0 ops/s
alerts: none
$ weka status
WekaIO v4.2.9 (CLI build 4.2.9)
cluster: DataSphere (554d62b9-ab40-4f59-bee6-ccc326bae2df)
status: OK (18 backend containers UP, 12 drives UP)
protection: 3+2 (Fully protected)
hot spare: 1 failure domains (2.45 TiB)
drive storage: 12.27 TiB total, 2.73 TiB unprovisioned
cloud: connected
license: OK, valid thru 2024-10-20T06:23:01Z
io status: STARTED 1 hour ago (18 io-nodes UP, 162 Buckets UP)
link layer: Ethernet
clients: 0 connected
reads: 0 B/s (0 IO/s)
writes: 0 B/s (0 IO/s)
operations: 0 ops/s
alerts: 2 active alerts, use `weka alerts` to list them
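For scripting, fields of captured `weka status` output can be extracted with standard text tools. A minimal sketch follows; the sample is hard-coded here, and on a live system you would pipe `weka status` into the same awk program instead:

```shell
# Extract the 'status:' field from (sample) `weka status` output.
parse_status() {
  awk -F': *' '{sub(/^ +/, "", $1)} $1 == "status" {print $2; exit}'
}
parse_status <<'EOF'
WekaIO v4.2.9 (CLI build 4.2.9)
       cluster: DataSphere (554d62b9-ab40-4f59-bee6-ccc326bae2df)
        status: OK (18 backend containers UP, 12 drives UP)
    protection: 3+2 (Fully protected)
EOF
# prints: OK (18 backend containers UP, 12 drives UP)
```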
$ weka status
WekaIO v4.2.9 (CLI build 4.2.9)
cluster: DataSphere (554d62b9-ab40-4f59-bee6-ccc326bae2df)
status: OK (15/18 backend containers UP, 10/12 drives UP)
protection: 3+2 (Fully protected)
hot spare: 1 failure domains (2.45 TiB)
drive storage: 12.27 TiB total, 2.45 TiB unavailable, 2.73 TiB unprovisioned
cloud: connected
license: OK, valid thru 2024-10-20T06:23:01Z
io status: STARTED 1 hour ago (15/18 io-nodes UP, 162 Buckets UP)
link layer: Ethernet
clients: 0 connected
reads: 0 B/s (0 IO/s)
writes: 0 B/s (0 IO/s)
operations: 0 ops/s
alerts: 10 active alerts, use `weka alerts` to list them
weka cloud
|--status
|--enable
|--disable
|--proxy
|--update
|--upload-rate
|--status
|--enable
|--disable
|--proxy
|--upload-rate
weka cluster
|--create
|--update
|--process
|--bucket
|--failure-domain
|--hot-spare
|--start-io
|--stop-io
|--drive
|--scan
|--activate
|--deactivate
|--add
|--remove
|--mount-defaults
|--set
|--show
|--reset
|--servers
|--list
|--show
|--container
|--info-hw
|--failure-domain
|--dedicate
|--bandwidth
|--cores
|--memory
|--auto-remove-timeout
|--management-ips
|--resources
|--restore
|--apply
|--activate
|--deactivate
|--clear-failure
|--add
|--remove
|--net
|--add
|--remove
|--default-net
|--license
|--task
|--pause
|--resume
|--abort
|--limits
|--client-target-version
weka diags
|--collect
|--list
|--rm
|--upload
weka events
|--list-local
|--list-types
|--trigger-event
weka fs
|--create
|--download
|--update
|--delete
|--restore
|--quota
|--set
|--set-default
|--unset
|--unset-default
|--list
|--list-default
|--group
|--create
|--update
|--delete
|--snapshot
|--create
|--copy
|--update
|--access-point-naming-convention
|--status
|--update
|--upload
|--download
|--delete
|--tier
|--location
|--fetch
|--release
|--capacity
|--s3
|--add
|--update
|--delete
|--attach
|--detach
|--snapshot
|--list
|--ops
|--obs
|--update
|--reserve
|--status
|--set
|--unset
weka local
|--install-agent
|--diags
|--events
|--ps
|--rm
|--start
|--stop
|--restart
|--status
|--enable
|--disable
|--monitoring
|--run
|--reset-data
|--resources
|--import
|--export
|--restore
|--apply
|--cores
|--base-port
|--memory
|--dedicate
|--bandwidth
|--management-ips
|--join-ips
|--failure-domain
|--net
|--add
|--remove
|--setup
|--weka
|--container
|--upgrade
weka nfs
|--rules
|--add
|--delete
|--client-group
|--add
|--delete
|--permission
|--add
|--update
|--delete
|--interface-group
|--add
|--update
|--delete
|--ip-range
|--port
|--debug-level
|--show
|--set
|--global-config
|--set
|--show
weka org
|--create
|--rename
|--set-quota
|--delete
weka security
|--kms
|--set
|--unset
|--rewrap
|--tls
|--status
|--download
|--set
|--unset
|--lockout-config
|--set
|--reset
|--show
|--login-banner
|--set
|--reset
|--enable
|--disable
|--show
|--ca-cert
|--set
|--status
|--download
|--unset
weka smb
|--cluster
|--containers
|--add
|--remove
|--wait
|--update
|--create
|--debug
|--destroy
|--trusted-domains
|--add
|--remove
|--status
|--host-access
|--list
|--reset
|--add
|--remove
|--share
|--update
|--lists
|--show
|--reset
|--add
|--remove
|--add
|--remove
|--host-access
|--list
|--reset
|--add
|--remove
|--domain
|--join
|--cluster
|--share
|--domain
|--leave
|--cluster
|--share
|--domain
weka stats
|--realtime
|--list-types
|--retention
|--set
|--status
|--restore-default
weka status
|--rebuild
weka user
|--login
|--logout
|--whoami
|--passwd
|--change-role
|--update
|--add
|--delete
|--revoke-tokens
|--generate-token
|--ldap
|--setup
|--setup-ad
|--update
|--enable
|--disable
|--reset
weka version
|--get
|--set
|--unset
|--current
|--rm
|--prepare
weka s3
|--cluster
|--create
|--update
|--destroy
|--status
|--audit-webhook
|--containers
|--bucket
|--create
|--list
|--destroy
|--lifecycle-rule
|--policy
|--quota
|--policy
|--list
|--show
|--add
|--remove
|--attach
|--detach
|--service-account
|--list
|--show
|--add
|--remove
|--sts
|--assume-role
|--log-level
|--get
# to create a new filesystem group
$ weka fs group create my_fs_group
FSGroupId: 0
# to view existing filesystem groups details in the WEKA system
$ weka fs group
FileSystem Group ID | Name | target-ssd-retention | start-demote
--------------------+-------------+----------------------+-------------
FSGroupId: 0 | my_fs_group | 1d 0:00:00h | 0:15:00h
# to create a new filesystem
$ weka fs create new_fs my_fs_group 1TiB
FSId: 0
# to view existing filesystems details in the WEKA system
$ weka fs
Filesystem ID | Filesystem Name | Group | Used SSD (Data) | Used SSD (Meta) | Used SSD | Free SSD | Available SSD (Meta) | Available SSD | Used Total (Data) | Used Total | Free Total | Available Total | Max Files | Status | Encrypted | Object Storages | Auth Required
--------------+-----------------+-------------+-----------------+-----------------+----------+----------+----------------------+---------------+-------------------+------------+------------+-----------------+-----------+--------+-----------+-----------------+--------------
0 | new_fs | my_fs_group | 0 B | 4.09 KB | 4.09 KB | 1.09 TB | 274.87 GB | 1.09 TB | 0 B | 4.09 KB | 1.09 TB | 1.09 TB | 22107463 | READY | False | | False
# to reduce the size of the default filesystem
$ weka fs update default --total-capacity 1GiB
# to create a new filesystem in the default group
$ weka fs create new_fs default 1GiB
# to view existing filesystems details in the WEKA system
$ weka fs
Filesystem ID | Filesystem Name | Group | Used SSD (Data) | Used SSD (Meta) | Used SSD | Free SSD | Available SSD (Meta) | Available SSD | Used Total (Data) | Used Total | Free Total | Available Total | Max Files | Status | Encrypted | Object Storages | Auth Required
--------------+-----------------+---------+-----------------+-----------------+----------+----------+----------------------+---------------+-------------------+------------+------------+-----------------+-----------+--------+-----------+-----------------+--------------
0 | default | default | 0 B | 4.09 KB | 4.09 KB | 1.07 GB | 268.43 MB | 1.07 GB | 0 B | 4.09 KB | 1.07 GB | 1.07 GB | 21589 | READY | False | | False
1 | new_fs | default | 0 B | 4.09 KB | 4.09 KB | 1.09 TB | 274.87 GB | 1.09 TB | 0 B | 4.09 KB | 1.09 TB | 1.09 TB | 22107463 | READY | False | | False
$ sudo mkdir -p /mnt/weka
$ sudo mount -t wekafs new_fs /mnt/weka
# using the mount command
$ mount | grep new_fs
new_fs on /mnt/weka type wekafs (rw,relatime,writecache,inode_bits=64,dentry_max_age_positive=1000,dentry_max_age_negative=0)
# to perform random writes
$ sudo dd if=/dev/urandom of=/mnt/weka/my_first_data bs=4096 count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 4.02885 s, 10.2 MB/s
# to see the new file created
$ ll /mnt/weka
total 40000
-rw-r--r-- 1 root root 40960000 Oct 30 11:58 my_first_data
# checking the WekaFS filesystems via the CLI shows the used SSD capacity:
$ weka fs
Filesystem ID | Filesystem Name | Group | Used SSD (Data) | Used SSD (Meta) | Used SSD | Free SSD | Available SSD (Meta) | Available SSD | Used Total (Data) | Used Total | Free Total | Available Total | Max Files | Status | Encrypted | Object Storages | Auth Required
--------------+-----------------+---------+-----------------+-----------------+----------+----------+----------------------+---------------+-------------------+------------+------------+-----------------+-----------+--------+-----------+-----------------+--------------
0 | default | default | 40.95 MB | 180.22 KB | 41.14 MB | 1.03 GB | 268.43 MB | 1.07 GB | 40.95 MB | 41.14 MB | 1.03 GB | 1.07 GB | 21589 | READY | False | | False
Type: Select the type of object store: AWS, AZURE, or OTHER (for GCP and others).
Buckets Default Parameters: Select one of the following tabs according to the object store type you choose.
Protocol and Port: Select the protocol to use when connecting to the bucket.
Bucket: Set the name of the bucket to store and access data.
Region: Set the region assigned to work with.
For AWS S3 bucket creation for a WEKA cluster on EC2: If the WEKA EC2 instances have the required permissions granted by the IAM role, it is not required to provide the Access Key and Secret Key. Otherwise, set the Access Key and Secret Key of a user granted read/write access to the bucket.
For AWS S3 bucket creation for WEKA cluster not on EC2 using STS:
Select Enable AssumeRole API.
Role ARN: Set the Amazon Resource Name (ARN) to assume. The ARN must have the equivalent permissions defined in the IAM role for S3 access. See .
When creating the object store bucket in AWS, to use the storage classes: S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier Instant Retrieval, do the following:
Create the bucket in S3 Standard.
Create an AWS lifecycle policy to transition objects to these storage classes.
Make the relevant changes and click Update to update the object store bucket.
Set the following:
Protocol and Port: Select the protocol and port to use when connecting to the bucket.
Hostname: Set the DNS name (or IP address) of the bucket entry point.
Bucket: Set the name of the bucket to store and access data.
Auth Method: Select the authentication method to connect to the bucket.
Region: Set the region assigned to work with (usually you can leave it empty).
Access Key and Secret Key: If the service account has the required permissions granted by the IAM role, it is not required to provide the Access Key and Secret Key. If the WEKA cluster is not running on GCP instances, the Access Key and Secret Key are required.
Set the following:
Protocol and Port: Select the protocol and port to use when connecting to the bucket.
Hostname: Set the DNS name (or IP address) of the bucket entry point.
Bucket: Set the name of the bucket to store and access data.
Auth Method: Select the authentication method to connect to the bucket.
Access Key and Secret Key: Set the Access Key and Secret Key of a user granted read/write access to the bucket.
Max concurrent Downloads: Maximum number of downloads concurrently performed on this object store in a single IO node.
Max concurrent Uploads: Maximum number of uploads concurrently performed on this object store in a single IO node.
Max concurrent Removals: Maximum number of removals concurrently performed on this object store in a single IO node.
Enable Upload Tags: Whether to enable object-tagging or not.
Only choose to create an S3 Gateway if none already exists for the VPC.
EC2 Endpoint
Only choose to create an EC2 Endpoint if none already exists for the VPC.
A filesystem is created using all the available capacity and is mounted on all client instances. This shared filesystem is mounted on /mnt/weka in each cluster instance.
Parameter
Description
Stack name
The name that will be given to the stack in CloudFormation. This name has to be unique in the account.
SSH Key
The SSH key for the ec2-user that will be used to connect to the instances.
VPC
The VPC in which the WEKA cluster will be deployed.
Subnet
The subnet in which the WEKA cluster will be deployed.
Parameter
Description
Network Topology
Network topology of the environment:
Public Subnet
Private subnet with NAT internet routing
Private subnet using Weka VPC endpoint - requires a CloudFormation stack (created once per VPC) that creates the required resources.
Private subnet using custom proxy - requires a CloudFormation stack (created once per VPC) that creates the required resources.
Related topic:
Custom Proxy
A custom proxy for private network internet access. Only relevant when Private subnet using custom proxy is selected as the Network Topology parameter.
WekaVolumeType
Volume type for the WEKA partition. GP3 is not yet available in all zones/regions (e.g., not available in local zones). In such a case, you must select the GP2 volume type. When available, using GP3 is preferred.
API Token
The API token for WEKA's distribution site. This can be obtained at https://get.weka.io/ui/account/api-tokens.
Admin Password
Sets the admin password after the cluster has been created. If no value is provided, the password is set to admin.
Parameter
Description
New S3 Bucket
The new S3 bucket name to create and attach to the filesystem created by the template. The bucket will not be deleted when the stack is destroyed.
Existing S3 Bucket
The existing S3 bucket name to attach to the filesystem created by the template. The bucket has to be in the same region where the cluster is deployed. If this parameter is provided, the New S3 Bucket parameter is ignored.
Tiering SSD Percent
Sets how much of the filesystem capacity (in percent) should reside on SSD. This parameter is applicable only if New S3 Bucket or Existing S3 Bucket parameters have been defined.
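As a hypothetical sizing illustration of the Tiering SSD Percent parameter (the numbers below are assumptions, not defaults): with 20 percent on a 10 TiB filesystem, 2 TiB resides on SSD and the remainder tiers to the S3 bucket.

```shell
# Hypothetical arithmetic for the Tiering SSD Percent parameter.
total_tib=10
ssd_percent=20
ssd_tib=$(( total_tib * ssd_percent / 100 ))
echo "${ssd_tib} TiB on SSD, $(( total_tib - ssd_tib )) TiB tiered to S3"
# prints: 2 TiB on SSD, 8 TiB tiered to S3
```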
Parameter
Description
Stack name
The name that will be given to the stack in CloudFormation. This name has to be unique in the account.
VPC
The VPC in which the prerequisites resources (and WEKA cluster) will be deployed.
Subnet
The subnet in which the prerequisites resources (and WEKA cluster) will be deployed.
RouteTable
Route table ID of the chosen subnet for S3 gateway creation.
Network Topology
Network topology of the environment:
Private subnet using Weka VPC endpoint
Private subnet using custom proxy
S3 Gateway
InfiniBand
Ethernet
The networking infrastructure dictates the choice between the two. If a WEKA cluster is connected to both infrastructures, it is possible to connect WEKA clients from both networks to the same cluster.
The WEKA system networking can be configured as performance-optimized or CPU-optimized. In performance-optimized networking, the CPU cores are dedicated to WEKA, and the networking uses DPDK. In CPU-optimized networking, the CPU cores are not dedicated to WEKA, and the networking uses DPDK (when supported by the NIC drivers) or in-kernel (UDP mode).
For performance-optimized networking, the WEKA system does not use standard kernel-based TCP/IP services but a proprietary infrastructure based on the following:
Use DPDK to map the network device in the user space and use the network device without any context switches and with zero-copy access. This bypassing of the kernel stack eliminates the consumption of kernel resources for networking operations. It applies to backends and clients and lets the WEKA system saturate network links (including, for example, 200 Gbps or 400 Gbps).
Implementing a proprietary WEKA protocol over UDP; the underlying network may involve routing between subnets or any other networking infrastructure that supports UDP.
The use of DPDK delivers operations with extremely low latency and high throughput. Low latency is achieved by bypassing the kernel and sending and receiving packets directly from the NIC. High throughput is achieved because multiple cores in the same server can work in parallel without a common bottleneck.
Before proceeding, it is important to understand several key terms used in this section, namely DPDK and SR-IOV.
Data Plane Development Kit (DPDK) is a set of libraries and network drivers for highly efficient, low-latency packet processing. This is achieved through several techniques, such as kernel TCP/IP bypass, NUMA locality, multi-core processing, and device access via polling to eliminate the performance overhead of interrupt processing. In addition, DPDK ensures transmission reliability, handles retransmission, and controls congestion.
DPDK implementations are available from several sources. OS vendors like Red Hat and Ubuntu provide DPDK implementations through distribution channels. Mellanox OpenFabrics Enterprise Distribution for Linux (Mellanox OFED), a suite of libraries, tools, and drivers supporting Mellanox NICs, offers its own DPDK implementation.
The WEKA system relies on the DPDK implementation provided by Mellanox OFED on servers equipped with Mellanox NICs. For servers equipped with Intel NICs, DPDK support is through the Intel driver for the card.
Single Root I/O Virtualization (SR-IOV) is an extension of the PCI Express (PCIe) specification that enables PCIe virtualization. It allows a PCIe device, such as a network adapter, to appear as multiple PCIe devices or functions.
There are two function categories:
Physical Function (PF): PF is a full-fledged PCIe function that can also be configured.
Virtual Function (VF): VF is a virtualized instance of the same PCIe device created by sending appropriate commands to the device PF.
Typically, there are many VFs, but only one PF per physical PCIe device. Once a new VF is created, it can be mapped by an object such as a virtual machine, container, or, in the WEKA system, by a 'compute' process.
To take advantage of SR-IOV technology, the software and hardware must be supported. The Linux kernel provides SR-IOV software support. The computer BIOS and the network adapter provide hardware support (by default, SR-IOV is disabled and must be enabled before installing WEKA).
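Whether a host already exposes VFs can be checked from Linux sysfs, which publishes the standard `sriov_numvfs` attribute for SR-IOV-capable NICs. A best-effort sketch (it prints nothing on hosts without SR-IOV NICs):

```shell
# List NICs exposing the standard sriov_numvfs sysfs attribute, with the
# number of VFs currently configured on each.
for dev in /sys/class/net/*/device/sriov_numvfs; do
  [ -e "$dev" ] || continue
  nic=$(basename "$(dirname "$(dirname "$dev")")")
  printf '%s: %s VFs configured\n' "$nic" "$(cat "$dev")"
done
```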
For CPU-optimized networking, WEKA can yield CPU resources to other applications. That is useful when the extra CPU cores are needed for other purposes. However, the lack of CPU resources dedicated to the WEKA system comes with the expense of reduced overall performance.
For CPU-optimized networking, when mounting filesystems using stateless clients, it is possible to use DPDK networking without dedicating cores. This mode is recommended when available and supported by the NIC drivers. The DPDK networking uses RX interrupts instead of dedicating the cores in this mode.
WEKA can also use in-kernel processing and UDP as the transport protocol. This operation mode is commonly referred to as UDP mode.
UDP mode is compatible with older platforms that lack support for kernel offloading technologies (DPDK) or virtualization (SR-IOV) due to its use of in-kernel processing. This includes legacy hardware, such as the Mellanox CX3 family of NICs.
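UDP mode is typically selected at mount time. A print-only sketch follows; the backend and filesystem names are hypothetical, and the `net=udp` mount option should be verified against the mount-options reference for the installed version:

```shell
# Print-only: mounting in UDP mode requires a WEKA client, so the command is
# echoed rather than executed. 'backend-1' and 'new_fs' are hypothetical names.
echo 'mount -t wekafs -o net=udp backend-1/new_fs /mnt/weka'
```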
In a typical WEKA system configuration, the WEKA backend servers access the network function in two different methods:
Standard TCP/UDP network for management and control operations.
High-performance network for data-path traffic.
The high-performance network used to connect all the backend servers must be DPDK-based. This internal WEKA network also requires a separate IP address space. For details, see Network planning and Configure the networking.
The WEKA system maintains a separate ARP database for its IP addresses and virtual functions and does not use the kernel or operating system ARP services.
While WEKA backend servers must include DPDK and SR-IOV, WEKA clients in application servers have the flexibility to use either DPDK or UDP modes. DPDK mode is the preferred choice for newer, high-performing platforms that support it. UDP mode is available for clients without SR-IOV or DPDK support or when there is no need for low-latency and high-throughput I/O.
DPDK backends and clients using NICs supporting shared networking (single IP):
Require one IP address per client for both management and data plane.
SR-IOV enablement is not required.
DPDK backends and clients using NICs supporting non-shared IP:
IP address for management: One per NIC (configured before WEKA installation).
IP address for data plane: One per core in each server (applied during cluster initialization).
(VFs):
UDP clients:
Use a single IP address for all purposes.
To support HA, the WEKA system must be configured with no single component representing a single point of failure. Multiple switches are required, and servers must have one leg on each.
HA for servers is achieved either by implementing two network interfaces on the same server or by LACP (Ethernet only, mode 4). A non-LACP approach sets up redundancy that enables the WEKA software to use two interfaces for both HA and bandwidth.
HA performs failover and failback for reliability and load balancing on both interfaces and is operational for Ethernet and InfiniBand. Not using LACP requires doubling the number of IPs on both the backend containers and the IO processes.
When working with HA networking, it is helpful to label the system to send data between servers through the same switch, rather than through the ISL or other paths in the fabric. This can reduce the overall traffic in the network. To label the system for identifying the switch and network port, use the label parameter in the weka cluster container net add command.
GPUDirect Storage enables a direct data path between storage and GPU memory. GPUDirect Storage avoids extra copies through a bounce buffer in the CPU’s memory. It allows a direct memory access (DMA) engine near the NIC or storage to move data directly into or out of GPU memory without burdening the CPU or GPU.
When RDMA and GPUDirect Storage are enabled, the WEKA system automatically uses the RDMA data path and GPUDirect Storage in supported environments. When the system identifies that it can use RDMA, in both UDP and DPDK modes, it employs RDMA for workloads that can benefit from it (with regard to IO size: 32K+ for reads and 256K+ for writes).
By leveraging RDMA/GPUDirect Storage, you can achieve enhanced performance. A UDP client, which doesn't necessitate dedicating a core to the WEKA system, can yield significantly higher performance. Additionally, a DPDK client can receive an extra performance boost. Alternatively, in DPDK mode, you can assign fewer cores to the WEKA system while maintaining the same level of performance.
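The stated IO-size thresholds can be sketched mechanically. The helper function below is a hypothetical illustration of that rule only (it is not a WEKA API): reads of 32 KiB and larger and writes of 256 KiB and larger are candidates for the RDMA path.

```shell
# Sketch of the documented thresholds: RDMA is considered for reads >= 32 KiB
# and writes >= 256 KiB. Hypothetical helper, not part of the WEKA CLI.
rdma_eligible() {  # $1 = read|write, $2 = IO size in KiB
  case "$1" in
    read)  [ "$2" -ge 32 ] ;;
    write) [ "$2" -ge 256 ] ;;
    *) return 1 ;;
  esac
}
rdma_eligible read 64   && echo "64 KiB read: RDMA path"
rdma_eligible write 128 || echo "128 KiB write: regular path"
# prints: 64 KiB read: RDMA path
# prints: 128 KiB write: regular path
```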
For the RDMA/GPUDirect Storage technology to take effect, the following requirements must be met:
All the cluster servers support RDMA networking.
For a client:
GPUDirect Storage: The IB interfaces added to the Nvidia GPUDirect configuration should support RDMA.
RDMA: All the Infiniband Host Channel Adapters (HCA) used by WEKA must support RDMA networking.
Encrypted filesystems: The framework is not used for encrypted filesystems; IOs to encrypted filesystems fall back to working without RDMA/GPUDirect Storage.
An HCA is considered to support RDMA networking if the following requirements are met:
For GPUDirect Storage only: InfiniBand network.
The NIC supports RDMA. See .
OFED 4.6-1.0.1.1 or higher.
RDMA/GPUDirect Storage technology is unsupported when working with a mixed IB and Ethernet networking cluster.
Running weka cluster processes indicates whether RDMA is used.
Example:
The filesystems are displayed on the Filesystems page. Each filesystem indicates the status, tiering, remote backup, encryption, SSD capacity, total capacity, filesystem group, and data reduction details.
Procedure
From the menu, select Manage > Filesystems.
When deploying a WEKA system on-premises, no filesystem is initially provided. You must create the filesystem and configure its properties, including capacity, group, tiering, thin provisioning, encryption, and required authentication during mounting.
When deploying a WEKA system on a cloud platform (AWS, Azure, or GCP) using Terraform or AWS CloudFormation, the WEKA system includes a default filesystem configured to maximum capacity. If your deployment necessitates additional filesystems with varied settings, reduce the provisioned capacity of the default filesystem and create a new filesystem with the desired properties to meet your specific requirements.
Before you begin
Verify that the system has free capacity.
Verify that a filesystem group is already set.
If tiering is required, verify that an object store bucket is set.
If encryption is required, verify that a KMS is configured.
Procedure
From the menu, select Manage > Filesystems.
Select the +Create button.
In the Create Filesystem dialog, set the following:
Name: Enter a descriptive label for the filesystem, limited to 32 characters and excluding slashes (/) or backslashes (\).
Group: Select the filesystem group that fits your filesystem.
Capacity: Enter the storage size to provision, or select Use All to provision all the free capacity.
Optional: Tiering. If tiering is required, an object store bucket is already defined, and data reduction is not enabled, select the toggle button and set the details of the object store bucket:
Object Store Bucket: Select a predefined object store bucket from the list.
Drive Capacity: Enter the capacity to provision on the SSD, or select Use All to use all free capacity.
Total Capacity: Enter the total capacity of the object store bucket, including the drive capacity.
When you set tiering, you can create the filesystem from an uploaded snapshot. See the related topics below.
Optional: Thin Provision. If Thin Provision is required, select the toggle button, and set the minimum (guaranteed) and maximum capacity for the thin-provisioned filesystem. The minimum capacity must be less than or equal to the available SSD capacity. You can set any maximum capacity, but the usable capacity depends on the actual free SSD space. Thin provisioning is mandatory when enabling data reduction.
Optional: Data Reduction. Data reduction can be enabled only on thin-provisioned, non-tiered, and unencrypted filesystems on a cluster with a valid data reduction license (you can verify the license in the cluster settings). For more details, see the related topics below. To enable Data Reduction, select the toggle button.
Optional: If Encryption is required and your WEKA system is deployed with a KMS, select the toggle button.
Optional: Required Authentication.
When ON, user authentication is required when mounting to the filesystem. This option is only relevant to a filesystem created in the root organization.
Enabling authentication is not allowed for a filesystem hosting NFS client permissions or SMB shares.
To authenticate during mount, the user must run the weka user login command or use the auth_token_path parameter.
Select Save.
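The constraints in the dialog above can be summarized in a small validation sketch. This is an illustrative helper only (the function name and signature are hypothetical, not part of any WEKA API); it mirrors the stated rules: name length and character limits, thin-provision minimum versus available SSD capacity, and the preconditions for data reduction.

```python
def validate_fs_params(name, thin=False, min_capacity=None, max_capacity=None,
                       available_ssd=None, tiered=False, encrypted=False,
                       data_reduction=False):
    """Mirror the Create Filesystem dialog rules described above.

    Illustrative only: name <= 32 chars with no slashes or backslashes;
    thin-provision minimum must not exceed available SSD capacity; data
    reduction requires a thin-provisioned, non-tiered, unencrypted filesystem.
    """
    errors = []
    if len(name) > 32 or "/" in name or "\\" in name:
        errors.append("name must be at most 32 chars with no slashes or backslashes")
    if thin and min_capacity is not None and available_ssd is not None \
            and min_capacity > available_ssd:
        errors.append("thin-provision minimum exceeds available SSD capacity")
    if data_reduction and (not thin or tiered or encrypted):
        errors.append("data reduction requires thin-provisioned, non-tiered, "
                      "unencrypted filesystem")
    return errors
```

For example, `validate_fs_params("projects", thin=True, min_capacity=100, available_ssd=500)` returns an empty list, while enabling data reduction on a tiered filesystem returns an error.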
Related topics
You can modify the filesystem parameters according to your demand changes over time. The parameters you can modify include filesystem name, capacity, tiering, thin provisioning, and required authentication (but not encryption).
Procedure
From the menu, select Manage > Filesystems.
Select the three dots on the right of the filesystem you want to modify, and select Edit.
In the Edit Filesystem dialog, modify the parameters according to your requirements. (See the parameter descriptions in the Add a filesystem topic.)
Select Save.
You can delete a filesystem if its data is no longer required. Deleting a filesystem does not delete the data in the tiered object store bucket.
Procedure
From the menu, select Manage > Filesystems.
Select the three dots on the right of the filesystem you want to delete, and select Remove.
To confirm the filesystem deletion, enter the filesystem name and select Confirm.
Once the WEKA cluster is installed and configured, perform the following:
# weka cluster processes
PROCESS ID HOSTNAME CONTAINER IPS STATUS ROLES NETWORK CPU MEMORY UPTIME
0 weka146 default 10.0.1.146 UP MANAGEMENT UDP 16d 20:07:42h
1 weka146 default 10.0.1.146 UP FRONTEND DPDK / RDMA 1 1.47 GB 16d 23:29:00h
2 weka146 default 10.0.3.146 UP COMPUTE DPDK / RDMA 12 6.45 GB 16d 23:29:00h
3 weka146 default 10.0.1.146 UP COMPUTE DPDK / RDMA 2 6.45 GB 16d 23:29:00h
4 weka146 default 10.0.3.146 UP COMPUTE DPDK / RDMA 13 6.45 GB 16d 23:29:00h
5 weka146 default 10.0.1.146 UP COMPUTE DPDK / RDMA 3 6.45 GB 16d 22:28:58h
6 weka146 default 10.0.3.146 UP COMPUTE DPDK / RDMA 14 6.45 GB 16d 23:29:00h
7 weka146 default 10.0.3.146 UP DRIVES DPDK / RDMA 18 1.49 GB 16d 23:29:00h
8 weka146 default 10.0.1.146 UP DRIVES DPDK / RDMA 8 1.49 GB 16d 23:29:00h
9 weka146 default 10.0.3.146 UP DRIVES DPDK / RDMA 19 1.49 GB 16d 23:29:00h
10 weka146 default 10.0.1.146 UP DRIVES DPDK / RDMA 9 1.49 GB 16d 23:29:00h
11 weka146 default 10.0.3.146 UP DRIVES DPDK / RDMA 20 1.49 GB 16d 23:29:07h
12 weka147 default 10.0.1.147 UP MANAGEMENT UDP 16d 22:29:02h
13 weka147 default 10.0.1.147 UP FRONTEND DPDK / RDMA 1 1.47 GB 16d 23:29:00h
14 weka147 default 10.0.3.147 UP COMPUTE DPDK / RDMA 12 6.45 GB 16d 23:29:00h
15 weka147 default 10.0.1.147 UP COMPUTE DPDK / RDMA 2 6.45 GB 16d 23:29:00h
16 weka147 default 10.0.3.147 UP COMPUTE DPDK / RDMA 13 6.45 GB 16d 23:29:00h
17 weka147 default 10.0.1.147 UP COMPUTE DPDK / RDMA 3 6.45 GB 16d 23:29:00h
18 weka147 default 10.0.3.147 UP COMPUTE DPDK / RDMA 14 6.45 GB 16d 23:29:00h
19 weka147 default 10.0.3.147 UP DRIVES DPDK / RDMA 18 1.49 GB 16d 23:29:00h
20 weka147 default 10.0.1.147 UP DRIVES DPDK / RDMA 8 1.49 GB 16d 23:29:00h
21 weka147 default 10.0.3.147 UP DRIVES DPDK / RDMA 19 1.49 GB 16d 23:29:07h
22 weka147 default 10.0.1.147 UP DRIVES DPDK / RDMA 9 1.49 GB 16d 23:29:00h
23 weka147 default 10.0.3.147 UP DRIVES DPDK / RDMA 20 1.49 GB 16d 23:29:07h
. . .
Session Duration: Set the duration of the temporary security credentials in seconds.
Possible values: 900 - 43200 (default 3600).
Access Key and Secret Key: Set the keys of the user granted with the AssumeRole permissions.
Enable event notifications to the cloud for support purposes using one of the following options:
Enable support through Weka Home
Enable support through a private instance of Weka Home
Command: weka cloud enable
This command enables cloud event notification (via Weka Home), which increases the ability of the Weka Support Team to resolve any issues that may occur.
To learn more about this and how to enable cloud event notification, see WEKA Home - The WEKA support cloud.
In closed environments, such as dark sites and private VPCs, it is possible to install Local Weka Home, which is a private instance of Weka Home.
Command: weka cloud enable --cloud-url=http://<weka-home-ip>:<weka-home-port>
This command enables the WEKA cluster to send event notifications to the Local Weka Home.
Command: weka cluster license set / payg
To run IOs against the cluster, a valid license must be set. Obtain a valid license, classic or PAYG, and apply it to the WEKA cluster. For details, see License overview.
Command: weka cluster start-io
To start the system IO and exit from the initialization state, use the following command line:
weka cluster start-io
Command: weka cluster container
Use this command to display the list of containers and their details.
Command: weka cluster container resources
Use this command to check the resources of each container in the cluster.
weka cluster container resources <container-id>
Command: weka cluster drive
Use this command to check all drives in the cluster.
Command: weka status
The weka status command displays the overall status of the Weka cluster.
For details, see Cluster status.
If the WEKA cluster is deployed in an environment with a proxy server, a WEKA client trying to mount or download the client installation from the WEKA cluster may be blocked by the proxy server. You can disable the proxy for specific URLs using the shell no_proxy environment variable.
Connect to one of the WEKA backend servers.
Open the /etc/wekaio/service.conf file.
In the [downloads_proxy] section, add to the no_proxy parameter a comma-separated list of IP addresses or qualified domain names of your WEKA clients and cluster backend servers. Do not use wildcards (*).
Restart the agent service.
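The edit described in the steps above can be scripted. The following sketch assumes /etc/wekaio/service.conf is INI-formatted (as the [downloads_proxy] excerpt elsewhere on this page suggests); the function name is hypothetical, and you should verify the file layout on your system before automating changes to it.

```python
import configparser

def add_no_proxy_entries(conf_path, hosts):
    """Append hosts to the no_proxy list in the [downloads_proxy] section.

    A sketch assuming an INI-style service.conf; avoids duplicate entries
    and preserves any existing no_proxy values. Wildcards (*) must not be
    used, per the instructions above.
    """
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    if not cfg.has_section("downloads_proxy"):
        cfg.add_section("downloads_proxy")
    current = cfg.get("downloads_proxy", "no_proxy", fallback="")
    entries = [e for e in current.split(",") if e]
    for host in hosts:
        if host not in entries:
            entries.append(host)
    cfg.set("downloads_proxy", "no_proxy", ",".join(entries))
    with open(conf_path, "w") as f:
        cfg.write(f)
```

After writing the file, restart the agent service as described above for the change to take effect.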
Command: weka cluster default-net set
Instead of individually configuring IP addresses for each network device, WEKA supports dynamic IP address allocation. Users can define a range of IP addresses to create a dynamic pool, and these addresses can be automatically allocated on demand.
Use the following command to configure default data networking:
weka cluster default-net set --range <range> [--gateway=<gateway>] [--netmask-bits=<netmask-bits>]
Parameters
range*
A range of IP addresses reserved for dynamic allocation across the entire cluster.
Format: A.B.C.D-E
Example: 10.10.0.1-100
netmask-bits*
Number of bits in the netmask that define a network ID in CIDR notation.
gateway
The IP address assigned to the default routing gateway. It is imperative that the gateway resides within the same IP network as defined by the specified range and netmask-bits. This parameter is not applicable to InfiniBand (IB) or Layer 2 (L2) non-routable networks.
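The range, netmask-bits, and gateway constraints above can be checked with Python's standard ipaddress module. This is an illustrative helper (the function name is hypothetical, not a WEKA tool): it parses the A.B.C.D-E shorthand and verifies that the gateway resides in the network defined by the range and netmask-bits, as required.

```python
import ipaddress

def validate_default_net(ip_range, netmask_bits, gateway=None):
    """Validate a default-net range of the form A.B.C.D-E.

    Illustrative sketch: the part after '-' replaces the last octet.
    If a gateway is given, it must sit in the same network as the range
    (the requirement stated above). Returns the enclosing network.
    """
    base, end_octet = ip_range.rsplit("-", 1)
    first = ipaddress.ip_address(base)
    last = ipaddress.ip_address(base.rsplit(".", 1)[0] + "." + end_octet)
    if int(last) < int(first):
        raise ValueError("range end precedes range start")
    net = ipaddress.ip_network(f"{base}/{netmask_bits}", strict=False)
    if gateway is not None and ipaddress.ip_address(gateway) not in net:
        raise ValueError("gateway must reside in the network defined by "
                         "the range and netmask-bits")
    return net
```

For example, the documented range 10.10.0.1-100 with 24 netmask bits accepts a gateway of 10.10.0.254 but rejects 10.11.0.1.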
View current settings: To view the current default data networking settings, use the command:
weka cluster default-net
Remove default data networking: If a default data networking configuration was previously set up on a cluster and is no longer needed, you can remove it using the command:
weka cluster default-net reset
End of the installation and configuration for all workflow paths
$ weka cluster container
HOST ID HOSTNAME CONTAINER IPS STATUS RELEASE FAILURE DOMAIN CORES MEMORY LAST FAILURE UPTIME
0 av299-0 drives0 10.108.79.121 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-000 7 10.45 GB 1:08:30h
1 av299-1 drives0 10.108.115.194 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-001 7 10.45 GB 1:08:30h
2 av299-2 drives0 10.108.2.136 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-002 7 10.45 GB 1:08:29h
3 av299-3 drives0 10.108.165.185 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-003 7 10.45 GB 1:08:30h
4 av299-4 drives0 10.108.116.49 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-004 7 10.45 GB 1:08:29h
5 av299-5 drives0 10.108.7.63 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-005 7 10.45 GB 1:08:30h
6 av299-6 drives0 10.108.80.75 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-006 7 10.45 GB 1:08:29h
7 av299-7 drives0 10.108.173.56 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-007 7 10.45 GB 1:08:30h
8 av299-8 drives0 10.108.253.194 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-008 7 10.45 GB 1:08:29h
9 av299-9 drives0 10.108.220.115 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-009 7 10.45 GB 1:08:29h
10 av299-0 compute0 10.108.79.121 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-000 6 20.22 GB 1:08:08h
11 av299-1 compute0 10.108.115.194 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-001 6 20.22 GB 1:08:08h
12 av299-2 compute0 10.108.2.136 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-002 6 20.22 GB 1:08:09h
13 av299-3 compute0 10.108.165.185 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-003 6 20.22 GB 1:08:09h
14 av299-4 compute0 10.108.116.49 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-004 6 20.22 GB 1:08:09h
15 av299-5 compute0 10.108.7.63 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-005 6 20.22 GB 1:08:08h
16 av299-6 compute0 10.108.80.75 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-006 6 20.22 GB 1:08:09h
17 av299-7 compute0 10.108.173.56 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-007 6 20.22 GB 1:08:08h
18 av299-8 compute0 10.108.253.194 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-008 6 20.22 GB 1:08:09h
19 av299-9 compute0 10.108.220.115 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-009 6 20.22 GB 1:08:08h
20 av299-0 frontend0 10.108.79.121 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-000 1 1.47 GB 1:06:57h
21 av299-1 frontend0 10.108.115.194 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-001 1 1.47 GB 1:06:57h
22 av299-2 frontend0 10.108.2.136 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-002 1 1.47 GB 1:06:57h
23 av299-3 frontend0 10.108.165.185 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-003 1 1.47 GB 1:06:56h
24 av299-4 frontend0 10.108.116.49 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-004 1 1.47 GB 1:06:57h
25 av299-5 frontend0 10.108.7.63 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-005 1 1.47 GB 1:06:56h
26 av299-6 frontend0 10.108.80.75 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-006 1 1.47 GB 1:06:57h
27 av299-7 frontend0 10.108.173.56 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-007 1 1.47 GB 1:06:56h
28 av299-8 frontend0 10.108.253.194 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-008 1 1.47 GB 1:06:57h
29 av299-9 frontend0 10.108.220.115 UP 4.2.0.8076-9e87a37af8169f32fb3c81c73d6844a1 DOM-009 1 1.47 GB 1:06:56h
$ weka cluster container resources 0
ROLES NODE ID CORE ID
MANAGEMENT 0 <auto>
DRIVES 1 12
DRIVES 2 14
DRIVES 3 2
DRIVES 4 20
DRIVES 5 6
DRIVES 6 8
DRIVES 7 22
NET DEVICE IDENTIFIER DEFAULT GATEWAY IPS NETMASK NETWORK LABEL
0000:00:0a.0 0000:00:0a.0 10.108.0.1 10.108.34.80 16
0000:00:0b.0 0000:00:0b.0 10.108.0.1 10.108.190.166 16
0000:00:0c.0 0000:00:0c.0 10.108.0.1 10.108.125.213 16
0000:00:0f.0 0000:00:0f.0 10.108.0.1 10.108.61.111 16
0000:00:10.0 0000:00:10.0 10.108.0.1 10.108.26.149 16
0000:00:11.0 0000:00:11.0 10.108.0.1 10.108.30.216 16
0000:00:12.0 0000:00:12.0 10.108.0.1 10.108.217.129 16
Allow Protocols false
Bandwidth <auto>
Base Port 14000
Dedicate Memory true
Disable NUMA Balancing true
Failure Domain DOM-000
Hardware Watchdog false
Management IPs 10.108.79.121
Mask Interrupts true
Memory <dedicated>
Mode BACKEND
Set CPU Governors PERFORMANCE
$ weka cluster container resources 10
ROLES NODE ID CORE ID
MANAGEMENT 0 <auto>
COMPUTE 1 16
COMPUTE 2 4
COMPUTE 3 18
COMPUTE 4 26
COMPUTE 5 28
COMPUTE 6 10
NET DEVICE IDENTIFIER DEFAULT GATEWAY IPS NETMASK NETWORK LABEL
0000:00:04.0 0000:00:04.0 10.108.0.1 10.108.145.137 16
0000:00:05.0 0000:00:05.0 10.108.0.1 10.108.212.87 16
0000:00:06.0 0000:00:06.0 10.108.0.1 10.108.199.231 16
0000:00:07.0 0000:00:07.0 10.108.0.1 10.108.86.172 16
0000:00:08.0 0000:00:08.0 10.108.0.1 10.108.190.88 16
0000:00:09.0 0000:00:09.0 10.108.0.1 10.108.77.31 16
Allow Protocols false
Bandwidth <auto>
Base Port 14300
Dedicate Memory true
Disable NUMA Balancing true
Failure Domain DOM-000
Hardware Watchdog false
Management IPs 10.108.79.121
Mask Interrupts true
Memory 20224982280
Mode BACKEND
Set CPU Governors PERFORMANCE
$ weka cluster container resources 20
ROLES NODE ID CORE ID
MANAGEMENT 0 <auto>
FRONTEND 1 24
NET DEVICE IDENTIFIER DEFAULT GATEWAY IPS NETMASK NETWORK LABEL
0000:00:13.0 0000:00:13.0 10.108.0.1 10.108.217.249 16
Allow Protocols true
Bandwidth <auto>
Base Port 14200
Dedicate Memory true
Disable NUMA Balancing true
Failure Domain DOM-000
Hardware Watchdog false
Management IPs 10.108.79.121
Mask Interrupts true
Memory <dedicated>
Mode BACKEND
Set CPU Governors PERFORMANCE
$ weka cluster drive
DISK ID UUID HOSTNAME NODE ID SIZE STATUS LIFETIME % USED ATTACHMENT DRIVE STATUS
0 d3d000d4-a76b-405d-a226-c40dcd8d622c av299-4 87 399.99 GiB ACTIVE 0 OK OK
1 c68cf47a-f91d-499f-83c8-69aa06ed37d4 av299-7 143 399.99 GiB ACTIVE 0 OK OK
2 c97f83b5-b9e3-4ccd-bfb8-d78537fa8a6f av299-1 23 399.99 GiB ACTIVE 0 OK OK
3 908dadc5-740c-4e08-9cc2-290b4b311f81 av299-0 7 399.99 GiB ACTIVE 0 OK OK
.
.
.
68 1c4c4d54-6553-44b2-bc61-0f0e946919fb av299-4 84 399.99 GiB ACTIVE 0 OK OK
69 969d3521-9057-4db9-8304-157f50719683 av299-3 62 399.99 GiB ACTIVE 0 OK OK
[downloads_proxy]
force_no_proxy=true
proxy=
no_proxy=<comma-separated list of IPs or domains>
Set the number of VFs to match the cores you intend to dedicate to WEKA.
Note that some BIOS configurations may be necessary.
SR-IOV: Enabled in BIOS.
For GPUDirect Storage: install with --upstream-libs and --dpdk.

Detailed workflow for manually configuring the WEKA cluster using the resources generator in a multi-container backend architecture.
Perform this workflow using the resources generator only if you are not using the automated WMS, WSA, or WEKA Configurator.
The resources generator generates three resource files on each server in the /tmp directory: drives0.json, compute0.json, and frontend0.json. Then, you create the containers using these generated files of the cluster servers.
Download the resources generator from the GitHub repository to your local server: .
Example:
Copy the resources generator from your local server to all servers in the cluster.
Example for a cluster with 8 servers:
To enable execution, change the mode of the resources generator on all servers in the cluster.
Example for a cluster with 8 servers:
Remove the default container
Generate the resource files
Create drive containers
Create a cluster
Command: weka local stop default && weka local rm -f default
Stop and remove the auto-created default container created on each server.
Command: resources_generator.py
To generate the resource files for the drive, compute, and frontend processes, run the following command on each backend server:
./resources_generator.py --net <net-devices> [options]
The resources generator allocates the number of cores, memory, and other resources according to the values specified in the parameters.
The best practice for resources allocation is as follows:
1 drive core per NVMe device (SSD).
2-3 compute cores per drive core.
1-2 frontend cores if deploying a protocol container. If there is a spare core, it is used for a frontend container.
Minimum of 1 core for the OS.
For a server with 24 cores and 6 SSDs, allocate 6 drive cores and 12 compute cores; optionally, use 2 of the remaining cores for the frontend container. The OS uses the remaining 4 cores.
Run the following command line:
./resources_generator.py --net eth1 eth2 --drive-dedicated-cores 6 --compute-dedicated-cores 12 --frontend-dedicated-cores 2
For a server with 14 cores and 6 SSDs, allocate 6 drive cores and 6 compute cores; optionally, use 1 of the remaining cores for the frontend container. The OS uses the remaining core.
Run the following command line:
./resources_generator.py --net eth1 eth2 --drive-dedicated-cores 6 --compute-dedicated-cores 6 --frontend-dedicated-cores 1
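The allocation rules above can be sketched as a small calculator. This is a hypothetical helper, not part of the resources generator: it uses 1 drive core per NVMe device and 2 compute cores per drive core (the low end of the 2-3 range, matching both examples), reserving at least one core for the OS.

```python
def plan_cores(total_cores, nvme_drives, frontend_cores=0):
    """Split server cores per the best practice described above.

    Illustrative sketch: 1 drive core per NVMe device, up to 2 compute
    cores per drive core, at least 1 core left for the OS. Raises if the
    server cannot supply even 1 compute core per drive core.
    """
    drive = nvme_drives                                    # 1 per NVMe device
    available = total_cores - drive - frontend_cores - 1   # reserve 1 for the OS
    compute = min(available, 2 * drive)                    # low end of 2-3x rule
    if compute < drive:
        raise ValueError("too few cores for this drive count")
    os_cores = total_cores - drive - compute - frontend_cores
    return {"drive": drive, "compute": compute,
            "frontend": frontend_cores, "os": os_cores}
```

Applying it to the two examples above reproduces their splits: 24 cores with 6 SSDs and 2 frontend cores yields 6/12/2 with 4 cores left for the OS; 14 cores with 6 SSDs and 1 frontend core yields 6/6/1 with 1 core for the OS.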
Parameters
Command: weka local setup container
For each server in the cluster, create the drive containers using the resource generator output file drives0.json.
The drives JSON file includes all the required values for creating the drive containers. Only the path to the JSON resource file is required (before cluster creation, the optional parameter join-ips is not relevant).
Parameters
Command: weka cluster create
To create a cluster of the allocated containers, use the following command:
Parameters
Command: weka cluster drive add
To configure the SSD drives on each server in the cluster, or add multiple drive paths, use the following command:
Parameters
Command: weka local setup container
For each server in the cluster, create the compute containers using the resource generator output file compute0.json.
Parameters
Command: weka local setup container
For each server in the cluster, create the frontend containers using the resource generator output file frontend0.json.
Parameters
Command: weka cluster update --data-drives=<count> --parity-drives=<count>
Example: weka cluster update --data-drives=4 --parity-drives=2
Command: weka cluster hot-spare <count>
Example: weka cluster hot-spare 1
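The data/parity split above determines how much of each protection stripe holds user data. The following is a simplified sketch (the function name is hypothetical): it ignores hot spares, metadata, and filesystem overhead, and only shows the raw stripe arithmetic.

```python
from fractions import Fraction

def data_fraction(data_drives, parity_drives):
    """Fraction of each protection stripe that stores user data.

    Simplified illustration: e.g., the 4+2 scheme in the example above
    stores data on 4 of every 6 stripe members, i.e. 2/3 of the stripe.
    Ignores hot spares and filesystem overhead.
    """
    return Fraction(data_drives, data_drives + parity_drives)
```

So a 4+2 configuration dedicates two thirds of each stripe to data, while widening the stripe (for example 8+2) raises that fraction at the cost of larger rebuild domains.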
Command: weka cluster update --cluster-name=<cluster name>
This page describes the process for adding clients to an already-installed WEKA system cluster.
When launching a WEKA cluster, either through the Self-service portal or via a CloudFormation template, it is also possible to launch client instances. However, sometimes it may be required to add more clients after the cluster has been installed. To add more clients as separate instances, follow the instructions below.
When launching new clients, ensure the following concerning networking and root volume:
For best performance, it is recommended that the new clients be in the same subnet as the backend instances. Alternatively, they can be in a subnet routable to the backend instances in the same AZ (note that cross-AZ traffic incurs additional network charges).
They must use the same security group as the backends they will connect to, or alternatively, use a security group that allows them to connect to the backend instances.
Enhanced networking is enabled as specified in .
When adding a client, it is required to provide permissions to several AWS APIs, as described in .
These permissions are automatically created in an instance profile as part of the CloudFormation stack. It is possible to use the same instance profile as one of the backend instances to ensure the same credentials are given to the new client.
The network interface permissions are required to create and attach a network interface to the new client. A separate NIC is required to allow the WEKA client to preallocate the network resource for the fastest performance.
If the client is not granted these specific permissions, you can instead grant ec2:* permissions and create an additional NIC in the same security group and subnet described above when mounting a second cluster from a single client (see ).
The client's root volume must be at least 48 GiB in size and either GP2 or IO1 type.
The WEKA software is installed under /opt/weka. If it is not possible to change the size of the root volume, an additional EBS volume can be created, formatted, and mounted under /opt/weka. Make sure that the new volume is either GP2 or IO1 type.
To mount a filesystem in this manner, first install the WEKA agent from one of the backend instances and then mount the filesystem. For example:
For the first mount, this will install the WEKA software and automatically configure the client. For more information on mount and configuration options, see the section.
It is possible to configure the client OS to mount the filesystem at boot time automatically. For more information, see the or sections.
This is the same step as in the previous method of adding a client.
To download the WEKA software, go to and select the software version. After selecting the version, select the operating system to install and run the download command line as root on all the new client instances.
When the download is complete, untar the downloaded package and run the install.sh command in the package directory.
Example:
If you downloaded version 3.6.1, run cd weka-3.6.1 and then run ./install.sh.
Once the WEKA software is installed, the clients are ready to join the cluster. To add the clients, run the following command line on each of the client instances:
where <backend-ip> is the IP address or hostname of one of the backend instances.
On most shells the following would get the client instance ID and add it to the cluster:
If successful, running the aws-add-client command displays the following line:
It is now possible to mount the filesystems on the client instances.
Example:
Running the mkdir -p /mnt/weka && mount -t wekafs default /mnt/weka command mounts the default filesystem under /mnt/weka.
Detailed workflow for WEKA cluster installation in a multi-container backend architecture using the Weka Configurator.
The WEKA Configurator tool facilitates cluster configuration. It performs the following:
Scans your environment to detect the network, verifies various attributes such as hostnames, and discovers components such as gateway routers.
Selects the servers that can be included in the cluster and verifies that all servers run the same WEKA version.
This page describes how to install WEKA on AWS Outposts
is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any data center, co-location space, or on-premises facility for a consistent hybrid experience. AWS Outposts is ideal for workloads that require low latency access to on-premises systems, local data processing, or local data storage.
A WEKA cluster deployment in AWS Outposts follows the guidelines specified in the Deployment types section.
To deploy a WEKA cluster in AWS Outposts, use a CloudFormation template, which can be obtained as specified in the CloudFormation template Generator section.
AWS Outposts does not currently support placement groups, so remove the placement group from the template.
This template can be customized. For further assistance, contact the Customer Success Team.
Configure the SSD drives
Create compute containers
Create frontend containers
Configure the number of data and parity drives
Configure the number of hot spares
Name the cluster
Specify the CPUs to allocate for the WEKA processes. Format: space-separated numbers.
drive-core-ids
Specify the CPUs to allocate for the drive processes. Format: space-separated numbers.
drive-dedicated-cores
Specify the number of cores to dedicate for the drive processes.
1 core per each detected drive
drives
Specify the drives to use.
This option overrides automatic detection. Format: space-separated strings.
All unmounted NVME devices
frontend-core-ids
Specify the CPUs to allocate for the frontend processes. Format: space-separated numbers.
-
frontend-dedicated-cores
Specify the number of cores to dedicate for the frontend processes.
1
max-cores-per-container
Override the default maximum number of cores per container for IO processes (19). If provided, the new value must be lower.
19
minimal-memory
Set each container's hugepages memory to 1.4 GiB * number of IO processes on the container.
net*
Specify the network devices to use. Format: space-separated strings.
no-rdma
Don't take RDMA support into account when computing memory requirements.
False
num-cores
Override the auto-deduction of the number of cores.
All available cores
path
Specify the path to write the resource files.
'.'
spare-cores
Specify the number of cores to leave for OS and non-WEKA processes.
1
spare-memory
Specify the memory to reserve for non-WEKA requirements.
Argument format: a value and unit without a space.
Examples: 10GiB, 1024B, 5TiB.
The maximum between 8 GiB and 2% of the total RAM
weka-hugepages-memory
Specify the memory to allocate for compute, frontend, and drive processes.
Argument format: a value and unit without a space.
Examples: 10GiB, 1024B, 5TiB.
The maximum available memory
Once the cluster creation is successfully completed, the cluster is in the initialization phase, and some commands can only run in this phase.
To configure high availability (HA), at least two cards must be defined for each container.
On successful completion of the cluster formation, every container receives a container ID. To display the list of containers and their IDs, run weka cluster container.
In IB installations the --containers-ips parameter must specify the IP addresses of the IPoIB interfaces.
compute-core-ids
Specify the CPUs to allocate for the compute processes. Format: space-separated numbers
compute-dedicated-cores
Specify the number of cores to dedicate for the compute processes.
The maximum available cores
compute-memory
Specify the total memory to allocate for the compute processes.
Format: value and unit without a space.
Examples: 1024B, 10GiB, 5TiB.
The maximum available memory
resources-path*
A valid path to the resource file.
hostnames*
Hostnames or IP addresses. If port 14000 is not the default for the drives, you can specify hostnames:port or ips:port. Minimum cluster size: 6. Format: space-separated strings.
host-ips
IP addresses of the management interfaces. Use a list of ip+ip addresses pairs of two cards for HA configuration. In case the cluster is connected to both IB and Ethernet, it is possible to set up to 4 management IPs for redundancy of both the IB and Ethernet networks using a list of ip+ip+ip+ip addresses.
The same number of values as in hostnames.
Format: comma-separated IP addresses.
IP of the first network device of the container
container-id*
The Identifier of the drive container to add the local SSD drives.
device-paths*
List of block devices that identify local SSDs.
It must be a valid Unix block device name.
Format: Space-separated strings.
Example: /dev/nvme0n1 /dev/nvme1n1
resources-path*
A valid path to the resource file.
join-ips
IP:port pairs for the management processes to join the cluster. If no port is specified, the command defaults to the standard WEKA port 14000. Set these values only if you want to customize the port.
Format: comma-separated IP addresses.
Example: --join-ips 10.10.10.1,10.10.10.2,10.10.10.3:15000
resources-path*
A valid path to the resource file.
join-ips
IP:port pairs for the management processes to join the cluster. If no port is specified, the command defaults to the standard WEKA port 14000. Set these values only if you want to customize the port.
Format: comma-separated IP addresses.
Example: --join-ips 10.10.10.1,10.10.10.2,10.10.10.3:15000
core-ids
wget https://raw.githubusercontent.com/weka/tools/master/install/resources_generator.py
for i in {0..7}; do scp resources_generator.py weka0-$i:/tmp/resources_generator.py; done
pdsh -R ssh -w "weka0-[0-7]" 'chmod +x /tmp/resources_generator.py'
weka local setup container --resources-path <resources-path>/drives0.json
weka cluster create <hostnames> [--host-ips <ips | ip+ip+ip+ip>]
weka cluster drive add <container-id> <device-paths>
weka local setup container --join-ips <IP addresses> --resources-path <resources-path>/compute0.json
weka local setup container --join-ips <IP addresses> --resources-path <resources-path>/frontend0.json
# Agent Installation (one time)
curl http://Backend-1:14000/dist/v1/install | sh
# Creating a mount point (one time)
mkdir -p /mnt/weka
# Mounting a filesystem
mount -t wekafs Backend-1/my_fs /mnt/weka
weka local run -e WEKA_HOST=<backend-ip> aws-add-client <client-instance-id>
weka local run -e WEKA_HOST=<backend-ip> aws-add-client $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
Client has joined the cluster
Generates a valid configuration file that you can apply to form a WEKA cluster from a group of servers.
Adhere to the following concepts:
STEM mode: STEM mode is the initial state before configuration. The term STEM comes from the concept of stem cells in biology, which are undifferentiated. In WEKA clusters, STEM mode carries the same connotation of being an undifferentiated state.
Reference host: The wekaconfig normally runs on one of the servers designated as part of the final cluster. The server that wekaconfig runs on is called the reference host. When wekaconfig runs, it expects to find a group of servers in STEM mode. If the reference host is not in STEM mode, an error message is issued, and the program terminates.
Same networks: It is assumed that all other servers forming the cluster are connected to the same networks as the reference host and have the same configuration (all servers have a homogeneous hardware configuration).
Homogeneous configuration: Two or more servers with the same core count, RAM size, number and size of drives, and network configurations are considered homogeneous.
It is best practice to create the WEKA cluster from a group of homogeneous servers (this is usually the case because the hardware is typically purchased at the same time). wekaconfig checks whether the servers are homogeneous; if they are not, it points out the discrepancies (such as varying numbers of drives, RAM, or cores).
wekaconfig allows the configuration of heterogeneous clusters. However, because the servers are usually expected to be homogeneous, a discrepancy can indicate a problem, such as a drive that is dead on arrival (DOA) or a defective memory stick. These hardware issues are uncommon and can be difficult to discover in large clusters.
Passwordless ssh connection: Enabling passwordless ssh between all the servers is very convenient and makes most tools work more smoothly. At a minimum, a regular user with passwordless sudo privileges and passwordless ssh is required for configuration. However, it is most convenient for the root user to have passwordless ssh, even if only temporarily during configuration.
Ensure you can ssh without a password by doing an ssh to each server.
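A minimal loop for running that check across all servers; the hostnames below are placeholders for your cluster:

```shell
# Check passwordless ssh to each server; BatchMode makes ssh fail instead of prompting
HOSTS="weka01 weka02 weka03"   # replace with your cluster hostnames
FAILED=0
for host in $HOSTS; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
        echo "$host: passwordless ssh OK"
    else
        echo "$host: passwordless ssh FAILED"
        FAILED=$((FAILED + 1))
    fi
done
echo "$FAILED server(s) failed the check"
```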
Stripe width: A RAID-like concept refers to the total width of the data stripe for data protection mechanisms. Typically, the DATA and PARITY combined are the stripe width. In WEKA terms, the stripe width must be less than the total number of servers in the cluster. For example, in a 10-server cluster, the stripe width can be 9 (7 data + 2 parity) plus 1 spare.
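The arithmetic in the example above can be sketched as follows (the variable names are illustrative, not WEKA parameters):

```shell
# Stripe-width arithmetic for the example: 10 servers, 7 data + 2 parity, 1 hot spare
SERVERS=10
DATA=7
PARITY=2
SPARES=1
STRIPE=$((DATA + PARITY))                      # stripe width must be < SERVERS
echo "stripe width: $STRIPE"
echo "servers consumed: $((STRIPE + SPARES))"  # stripe width plus hot spares
echo "usable data fraction of stripe: $((100 * DATA / STRIPE))%"
```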
System preparation is validated: Ensure the system preparation is validated using the wekachecker (on WSA installations this is already installed under /opt/tools/install). See .
The WEKA software is installed on all cluster servers: If not installed using the WSA, follow the instructions in the Install tab of get.weka.io. Once completed, the WEKA software is installed on all the allocated servers and runs in STEM mode.
Configure a WEKA cluster with the WEKA Configurator.
Apply the configuration (config.sh).
Download the WEKA’s tools repository to one of the servers by running the following:
git clone https://github.com/weka/tools
Connect to the server using ssh, change the directory to tools/install, and run ./wekaconfig.
The wekaconfig scans the environment, detects the servers, and determines if the group of servers is homogeneous. The following example shows the servers do not have a homogeneous number of cores.
wekaconfig detection results

Review the detection results. If the configuration meets your requirements, press Enter. Select each of the following tabs to set the WEKA configuration.
The wekaconfig displays the data plane networks (DP Networks) detected previously. The list under Select DP Networks reflects the high-speed (100Gb+) networks used for the WEKA storage traffic.
Verify that the list of networks, speed, and number of detected hosts are correct.
If the values are not as expected, such as an incorrect number of servers, incorrect or missing networks, investigate it and check the messages. Typically, network configuration issues are the source of the problem.
Select the required networks to configure WEKA POSIX protocol to run on.
Use the arrow and Tab keys to move between the fields and sections, and the space-bar to select the value.
Note: The green labels have entry fields. The yellow labels have read-only fields.
Press Tab to move to the Hosts section.
wekaconfig pre-populates the hostnames of the servers that are on this network, run the same version of WEKA, and are in STEM mode.
Use the arrow keys to move between the servers, and space bar to select or deselect specific servers. Press Tab to accept values and move to the next field: High Availability.
High Availability (HA) is used for networks with more than one network interface.
In this example, only one network is selected, so the HA default is No. When two or more networks are selected, you can change the HA option to suit your needs. Consult the WEKA Customer Success Team before changing this default value.
Press Tab to accept the value and move to the next field: Multicontainer. The default is Yes, and it is mandatory from WEKA version 4.1.
Press Tab to move to the lower-right. Use the arrow keys to move to Next. Then, press the space-bar.
This page shows the following sections:
Host Configuration Reference
Bias
Cores details
Move to the Cluster Name field and set a unique name for your WEKA cluster.
The stripe and other settings include:
Data Drives: The number of data members in the Stripe Width.
Parity Drives: The number of parity members.
Hot Spares: The number of Hot Spare members.
Once you have set the WEKA configuration, using the arrows, select Done and press Enter. The wekaconfig creates the config.sh file.
From the install directory, run ./config.sh.
The configuration takes a few minutes and possibly longer for large clusters. See some examples of the configuration process and WEKA status.

The WEKA Management Station (WMS) is an install kit similar to an OS install disk that simplifies the installation and configuration of the WEKA cluster in an on-premises environment by deploying the WEKA Software Appliance (WSA) package on bare metal servers. The WMS installs the WEKA OS, drivers, and WEKA software automatically and unattended.
The WMS is also used for installing the monitoring tools: Local WEKA Home (LWH), WEKAmon, and SnapTool (for details, see Deploy monitoring tools using the WEKA Management Station (WMS)).
Using the WMS with WSA to install a WEKA cluster requires a physical server (or VM) that meets the following requirements:
Boot drives: One or two identical boot drives as an installation target.
A system with two identical boot drives has the OS installed on mirrored partitions (LVM).
A system with one drive has a simple partition.
Target servers must be Dell, HPE, Supermicro, or Lenovo. Other servers are not supported.
The RedFish interface must be installed, enabled, and licensed for all target servers.
The WMS must be able to connect over Ethernet to the following servers’ interfaces:
For cluster configurations exceeding 25 servers, it is advisable to equip the WMS with a faster Ethernet interface, such as 10/25/50 Gbps, during the installation phase. Alternatively, you can bond two or more 1 Gbps interfaces to increase the bandwidth. Once the installation phase is complete, a bandwidth of 1 Gbps is sufficient.
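A hedged sketch of bonding two 1 Gbps interfaces with NetworkManager; the interface names (eno1, eno2), the bond name, the address, and the bonding mode are placeholders and must match your environment (802.3ad requires LACP on the switch side):

```shell
# Run as root. Creates a bond from two ports and assigns a static address.
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 10.10.20.5/24
nmcli connection up bond0
```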
Before deploying the WMS, adhere to the following:
Obtain the WMS package. For details, see .
The root password is WekaService
The WEKA user password is weka.io123
Boot the server from the WMS image. The following are some options to do that:
Copy the WEKA Management Station ISO image to an appropriate location so the server’s BMC (Baseboard Management Controller) can mount it or be served through a PXE (Preboot Execution Environment).
Depending on the server manufacturer, consult the documentation for the server’s BMC (for example, iLO, iDRAC, and IPMI) for detailed instructions on mounting and booting from a bootable ISO image, such as:
A workstation or laptop, from which the image is sent to the BMC through the web browser.
An SMB share in a Windows server or a Samba server.
Once you boot the server, the WEKA Management Station installs the WEKA OS (Rocky Linux), drivers, and WEKA software automatically and unattended (no human interaction required).
Depending on network speed, this can take about 10-60 mins (or more) per server.
Once the WMS installation is complete and rebooted, configure the WMS.
Run the OS using one of the following options:
Run the OS through the BMC’s Console. See the specific manufacturer’s BMC documentation.
Run the OS through the Cockpit Web Interface on port 9090 of the OS management network.
If you don’t know the WMS hostname or IP address, go to the console and press the Return key a couple of times until it prompts the URL of the WMS OS Web Console (Cockpit) on port 9090.
Change the port from 9090 to 8501, which is the WMS Admin port.
Browse to the WMS Admin UI using the following URL:
http://<WMS-hostname-or-ip>:8501
Enter username and password (default: admin/admin), and select Login. The Landing Page appears.
Download the latest release of the WSA package from the dashboard.
Copy the WSA package to /home/weka .
For example: scp <wsa.iso> weka@<wms-server>:
Go to the WMS Admin UI (landing page) and select Deploy a WEKA Cluster.
The WSA setup page opens.
Open Step 1 - Choose source ISO, select the WSA package (ISO) you intend to deploy, and click Next.
In Step 2 - Load values from, select one of the following options:
Option 1: Enter environment data: Click Go directly to forms to enter data.
Option 2: Import CSV file: If you have the environment data in a CSV file, click Upload a CSV file to pre-populate data. Step 3 - CSV File Upload section opens.
CSV template example
You can prepare a CSV file with the columns as specified in the following example:
In Step 4 - Number of servers to deploy, enter a Server Count (default is 8), and click Next.
In the following steps, if you uploaded a CSV file, the data is pre-populated. You can review the data and if no editing is necessary, select Next.
In Step 5 - IPMI information, do the following:
In the IPMI First IP, enter the IPMI IP address of the first server. A consecutive set of IP addresses for the servers is assumed (the typical case).
In the IPMI user and IPMI password, modify the login credentials for the IPMI, iLO, or iDRAC according to your choice.
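As an illustration of the consecutive-address scheme the form assumes (simple last-octet increments; the addresses mirror the CSV example in this guide):

```shell
# Derive consecutive IPMI addresses from a first IP by incrementing the last octet
FIRST_IP="172.29.1.63"
COUNT=7
PREFIX=${FIRST_IP%.*}     # 172.29.1
START=${FIRST_IP##*.}     # 63
for i in $(seq 0 $((COUNT - 1))); do
    echo "$PREFIX.$((START + i))"
done
```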
In Step 6 - Operating System network information, do the following:
In the OS First IP, enter the IP address of the OS 1 Gbit management interface. A consecutive set of IP addresses for the servers is assumed (the typical case).
In the remaining networking fields, fill in the networking details.
In Step 7 - Dataplane settings, do the following:
Set the number of interfaces in the Dataplane Interface Count slider.
In the remaining dataplane fields, fill in the details.
When the ISO preparation is completed, the output is displayed. Verify that no errors appear. Then, click Next.
In Step 10 - Start Installation, click Install OS on Servers. The WMS loads the WSA on the servers previously defined and starts the installation. The installation can take several minutes and displays output when complete. Verify that no errors appear.
The installation process takes about 30 minutes, depending on several factors, such as network speed. Verify that the server’s BMC completed the restart.
In Step 11 - Run OS and Dataplane Configuration Scripts, click Run post-install scripts. This action runs scripts to configure the servers with the specified dataplane IPs and perform additional tasks, such as populating /etc/hosts.
Alternative OS and dataplane configuration
These commands only need to be run if you did not follow step 11 above.
Connect to one of the cluster servers to run the post-install scripts. The tools are in the same location (/opt/tools/install) on the WSA as they are on the WMS.
When prompted, enter the password WekaService
Change the directory to /opt/ansible-install by running the following command:
Run the post-install script:
Example:
Ensure the DNS is operational, or copy the /etc/hosts entries from one of the cluster servers to the WMS.



#!/bin/bash
usage() {
echo "Usage: $0 [--no-parallel]"
echo " Use --no-parallel to prevent parallel execution"
exit 1
}
para() {
TF=$1; shift
echo $*
$* &
#[ !$TF ] && { echo para waiting; wait; }
[ $TF == "FALSE" ] && { echo para waiting; wait; }
}
PARA="TRUE"
# parse args
if [ $# != 0 ]; then
if [ $# != 1 ]; then
usage
elif [ $1 == "--no-parallel" ]; then
PARA="FALSE"
else
echo "Error: unknown command line switch - $1"
usage
fi
fi
echo starting - PARA is $PARA
# ------------------ custom script below --------------
echo Stopping weka on weka63
para ${PARA} scp -p ./resources_generator.py weka63:/tmp/
para ${PARA} ssh weka63 "sudo weka local stop; sudo weka local rm -f default"
echo Stopping weka on weka64
para ${PARA} scp -p ./resources_generator.py weka64:/tmp/
para ${PARA} ssh weka64 "sudo weka local stop; sudo weka local rm -f default"
echo Stopping weka on weka65
para ${PARA} scp -p ./resources_generator.py weka65:/tmp/
para ${PARA} ssh weka65 "sudo weka local stop; sudo weka local rm -f default"
echo Stopping weka on weka66
para ${PARA} scp -p ./resources_generator.py weka66:/tmp/
para ${PARA} ssh weka66 "sudo weka local stop; sudo weka local rm -f default"
echo Stopping weka on weka67
para ${PARA} scp -p ./resources_generator.py weka67:/tmp/
para ${PARA} ssh weka67 "sudo weka local stop; sudo weka local rm -f default"
echo Stopping weka on weka68
para ${PARA} scp -p ./resources_generator.py weka68:/tmp/
para ${PARA} ssh weka68 "sudo weka local stop; sudo weka local rm -f default"
echo Stopping weka on weka69
para ${PARA} scp -p ./resources_generator.py weka69:/tmp/
para ${PARA} ssh weka69 "sudo weka local stop; sudo weka local rm -f default"
wait
echo Running Resources generator on host weka63
para ${PARA} ssh weka63 sudo /tmp/resources_generator.py -f --path /tmp --net ib0/10.1.1.63/16 --compute-dedicated-cores 15 --drive-dedicated-cores 6 --frontend-dedicated-cores 1
echo Running Resources generator on host weka64
para ${PARA} ssh weka64 sudo /tmp/resources_generator.py -f --path /tmp --net ib0/10.1.1.64/16 --compute-dedicated-cores 15 --drive-dedicated-cores 6 --frontend-dedicated-cores 1
echo Running Resources generator on host weka65
para ${PARA} ssh weka65 sudo /tmp/resources_generator.py -f --path /tmp --net ib0/10.1.1.65/16 --compute-dedicated-cores 15 --drive-dedicated-cores 6 --frontend-dedicated-cores 1
echo Running Resources generator on host weka66
para ${PARA} ssh weka66 sudo /tmp/resources_generator.py -f --path /tmp --net ib0/10.1.1.66/16 --compute-dedicated-cores 15 --drive-dedicated-cores 6 --frontend-dedicated-cores 1
echo Running Resources generator on host weka67
para ${PARA} ssh weka67 sudo /tmp/resources_generator.py -f --path /tmp --net ib0/10.1.1.67/16 --compute-dedicated-cores 15 --drive-dedicated-cores 6 --frontend-dedicated-cores 1
echo Running Resources generator on host weka68
para ${PARA} ssh weka68 sudo /tmp/resources_generator.py -f --path /tmp --net ib0/10.1.1.68/16 --compute-dedicated-cores 15 --drive-dedicated-cores 6 --frontend-dedicated-cores 1
echo Running Resources generator on host weka69
para ${PARA} ssh weka69 sudo /tmp/resources_generator.py -f --path /tmp --net ib0/10.1.1.69/16 --compute-dedicated-cores 15 --drive-dedicated-cores 6 --frontend-dedicated-cores 1
wait
echo Starting Drives container on server weka63
para ${PARA} ssh weka63 "sudo weka local setup container --name drives0 --resources-path /tmp/drives0.json"
echo Starting Drives container on server weka64
para ${PARA} ssh weka64 "sudo weka local setup container --name drives0 --resources-path /tmp/drives0.json"
echo Starting Drives container on server weka65
para ${PARA} ssh weka65 "sudo weka local setup container --name drives0 --resources-path /tmp/drives0.json"
echo Starting Drives container on server weka66
para ${PARA} ssh weka66 "sudo weka local setup container --name drives0 --resources-path /tmp/drives0.json"
echo Starting Drives container on server weka67
para ${PARA} ssh weka67 "sudo weka local setup container --name drives0 --resources-path /tmp/drives0.json"
echo Starting Drives container on server weka68
para ${PARA} ssh weka68 "sudo weka local setup container --name drives0 --resources-path /tmp/drives0.json"
echo Starting Drives container on server weka69
para ${PARA} ssh weka69 "sudo weka local setup container --name drives0 --resources-path /tmp/drives0.json"
wait
sudo weka cluster create weka63 weka64 weka65 weka66 weka67 weka68 weka69 --host-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 -T infinite
echo Starting Compute container 0 on host weka63
para ${PARA} ssh weka63 sudo weka local setup container --name compute0 --resources-path /tmp/compute0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.63
echo Starting Compute container 0 on host weka64
para ${PARA} ssh weka64 sudo weka local setup container --name compute0 --resources-path /tmp/compute0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.64
echo Starting Compute container 0 on host weka65
para ${PARA} ssh weka65 sudo weka local setup container --name compute0 --resources-path /tmp/compute0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.65
echo Starting Compute container 0 on host weka66
para ${PARA} ssh weka66 sudo weka local setup container --name compute0 --resources-path /tmp/compute0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.66
echo Starting Compute container 0 on host weka67
para ${PARA} ssh weka67 sudo weka local setup container --name compute0 --resources-path /tmp/compute0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.67
echo Starting Compute container 0 on host weka68
para ${PARA} ssh weka68 sudo weka local setup container --name compute0 --resources-path /tmp/compute0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.68
echo Starting Compute container 0 on host weka69
para ${PARA} ssh weka69 sudo weka local setup container --name compute0 --resources-path /tmp/compute0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.69
wait
para ${PARA} sudo weka cluster drive add 0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
para ${PARA} sudo weka cluster drive add 1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
para ${PARA} sudo weka cluster drive add 2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
para ${PARA} sudo weka cluster drive add 3 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
para ${PARA} sudo weka cluster drive add 4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
para ${PARA} sudo weka cluster drive add 5 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
para ${PARA} sudo weka cluster drive add 6 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
wait
sudo weka cluster update --data-drives=4 --parity-drives=2
sudo weka cluster hot-spare 1
sudo weka cluster update --cluster-name=fred
echo Starting Front container on host weka63
para ${PARA} ssh weka63 sudo weka local setup container --name frontend0 --resources-path /tmp/frontend0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.63
echo Starting Front container on host weka64
para ${PARA} ssh weka64 sudo weka local setup container --name frontend0 --resources-path /tmp/frontend0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.64
echo Starting Front container on host weka65
para ${PARA} ssh weka65 sudo weka local setup container --name frontend0 --resources-path /tmp/frontend0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.65
echo Starting Front container on host weka66
para ${PARA} ssh weka66 sudo weka local setup container --name frontend0 --resources-path /tmp/frontend0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.66
echo Starting Front container on host weka67
para ${PARA} ssh weka67 sudo weka local setup container --name frontend0 --resources-path /tmp/frontend0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.67
echo Starting Front container on host weka68
para ${PARA} ssh weka68 sudo weka local setup container --name frontend0 --resources-path /tmp/frontend0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.68
echo Starting Front container on host weka69
para ${PARA} ssh weka69 sudo weka local setup container --name frontend0 --resources-path /tmp/frontend0.json --join-ips=10.1.1.63,10.1.1.64,10.1.1.65,10.1.1.66,10.1.1.67,10.1.1.68,10.1.1.69 --management-ips=10.1.1.69
wait
echo Configuration process complete


This section shows the reference host cores and drives configuration, and the total number of hosts (servers).
The Bias options determine the optimal CPU core and memory allocation scheme.
Enable Protocols: If you intend to use the cluster for NFS, SMB, or S3 protocols, select this option. This option reserves some CPU and memory for the protocols.
Protocols are Primary: If you intend to use the cluster primarily or heavily with NFS, SMB, or S3 protocols, select this option. It reserves more CPU and memory (than the first option) for the protocols.
DRIVES over COMPUTE: In high-core-count configurations (48+ cores), the standard algorithm for determining optimal core allocations may reduce the drive:core ratio in favor of additional COMPUTE cores. This bias setting favors a DRIVE core allocation of 1:1 (if possible) over additional COMPUTE cores. For advice on core allocations, consult with the Customer Success Team if you are configuring high-core-count systems.
wekaconfig suggests a reasonable set of core allocations (FE/COMPUTE/DRIVES) depending on your selections. You may override these values as needed.
Cores for OS: The number of cores reserved for the OS (fixed at 2).
Cores for Protocols: The number of cores reserved for protocols. It depends on the selected Bias option.
Usable Weka Cores: The number of cores that can be used for FE, COMPUTE, and DRIVES processes.
Used Weka Cores: The number of cores selected for use as FE, COMPUTE, or DRIVES cores.
The Usable Weka Cores and Used Weka Cores read-only fields are updated as you make changes so you can ensure you do not exceed the number of available cores as you change any values. This is an advanced feature, and core allocation must not be changed without consulting the Customer Success Team.
Reserved RAM per Host: Extra RAM in GB reserved on each host for various purposes, like supporting Protocols or Applications.
These settings are in terms of servers, not SSDs. WEKA stripes over the entire servers, not over individual drives. For more details, see Manually configure the WEKA cluster using the resources generator.
The following example shows a stripe width of 6 (4+2) on 7 servers, and one hot spare.








If not configuring LWH: SSD 141 GB (131 GiB).
If configuring LWH: See the SSD-backed storage requirements section in .
Boot type: UEFI boot.
Cores and RAM:
If not configuring LWH: minimum 4 cores and 16 GiB.
If configuring LWH, see the Server minimum CPU and RAM requirements section in .
Network interface: 1 Gbps.
OS management interface, typically 1 Gbps. It must be connected to a switch.
Baseboard Management Controller (BMC), such as IPMI, iDRAC, or iLO interfaces. The BMC interface must be configured with an IP address.
All the servers' dataplane interfaces must be connected to the switches.
The bare metal servers must conform to the Prerequisites and compatibility.
The bare metal servers must have an OS management network interface for administering the servers.
The boot type must be set to UEFI boot.
The installation logs are written to /tmp. The primary log is /tmp/ks-pre.log. To get a command prompt from the Installation GUI, do one of the following:
On macOS, type ctrl+option+f2
On Windows, type ctrl+alt+f2.
An NFS share.
To use PXE boot, use the WEKA Management Station ISO image as you would any other operating system ISO image and configure accordingly.
Burn the WMS image to a DVD and boot it from the physical DVD. However, most modern servers do not have DVD readers anymore.
A bootable USB drive should work (follow online directions for creating a bootable USB drive) but has not been tested yet.
Click Fill IPMI IPs to calculate the IP addresses for the number of servers specified in Step 4.
You can edit the IP addresses, Usernames, and Passwords as needed if the servers aren’t consecutive or require different credentials.
If you edited the table, click Verify IPMI IPs to verify that the WMS can log into the BMCs and detect the manufacturer (Brand column).
Verify that all is correct, and then click Next.
Click Fill OS Table to populate the table. The WMS automatically generates names and IPs.
Verify that the OS IP settings are correct. You can repeatedly click Fill OS Table to make adjustments.
Verify that all is correct, and then click Next.
You can repeatedly click Update Dataplanes to make adjustments.
Verify that all is correct, and then click Next.
In Step 8 - Save configuration files and inventory, click Save Files to save the configuration files, and then click Next.
In Step 9 - Prepare ISO for installation, click Prepare ISO for install. The WMS updates the kickstart on the ISO to match the WMS deployment data (it takes about 30 seconds).














IPMI_IP,Username,Password,OS_Mgmt_IP,Hostname,OS_Netmask,OS_Gateway,MTU,DNS,Hostname_Pattern,Hostname_Startnum,Server_Count,Data1_IP,Data1_Type,Data1_Netmask,Data1_MTU,Data1_Gateway,Data2_IP,Data2_Type,Data2_Netmask,Data2_MTU,Data2_Gateway
172.29.1.63,ADMIN,ADMIN,10.10.20.11,weka01,24,10.10.20.1,1500,8.8.8.8,weka%02d,1,7,10.100.10.11,Ethernet,16,9000,,10.100.20.11,Ethernet,16,9000,
172.29.1.64,ADMIN,ADMIN,10.10.20.12,weka02,24,10.10.20.1,1500,8.8.8.8,weka%02d,1,7,10.100.10.12,Ethernet,16,9000,,10.100.20.12,Ethernet,16,9000,
172.29.1.65,ADMIN,ADMIN,10.10.20.13,weka03,24,10.10.20.1,1500,8.8.8.8,weka%02d,1,7,10.100.10.13,Ethernet,16,9000,,10.100.20.13,Ethernet,16,9000,
172.29.1.66,ADMIN,ADMIN,10.10.20.14,weka04,24,10.10.20.1,1500,8.8.8.8,weka%02d,1,7,10.100.10.14,Ethernet,16,9000,,10.100.20.14,Ethernet,16,9000,
172.29.1.67,ADMIN,ADMIN,10.10.20.15,weka05,24,10.10.20.1,1500,8.8.8.8,weka%02d,1,7,10.100.10.15,Ethernet,16,9000,,10.100.20.15,Ethernet,16,9000,
172.29.1.68,ADMIN,ADMIN,10.10.20.16,weka06,24,10.10.20.1,1500,8.8.8.8,weka%02d,1,7,10.100.10.16,Ethernet,16,9000,,10.100.20.16,Ethernet,16,9000,
172.29.1.69,ADMIN,ADMIN,10.10.20.17,weka07,24,10.10.20.1,1500,8.8.8.8,weka%02d,1,7,10.100.10.17,Ethernet,16,9000,,10.100.20.17,Ethernet,16,9000,
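A quick sanity check before uploading, verifying that every row has the same number of columns as the header (shown here against an inline sample standing in for your CSV file):

```shell
# Verify every CSV row has the same column count as the header row
cat > /tmp/servers.csv <<'EOF'
IPMI_IP,Username,Password,OS_Mgmt_IP
172.29.1.63,ADMIN,ADMIN,10.10.20.11
172.29.1.64,ADMIN,ADMIN,10.10.20.12
EOF
awk -F, 'NR==1 {n = NF} NF != n {print "row " NR ": " NF " columns, expected " n; bad = 1}
         END {exit bad}' /tmp/servers.csv && echo "CSV columns consistent"
```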
ssh root@<wms ip>
cd /opt/ansible-install
./install_after_wsa_iso.sh

Example:

$ ssh [email protected]
[email protected]'s password:
X11 forwarding request failed on channel 0
Welcome to the Weka Management Station!
Web console: https://WekaMgmtServer:9090/ or https://172.29.5.172:9090/
Last login: Sat Jun 3 10:31:28 2023 from ::ffff:10.41.193.86
[root@WekaMgmtServer ~]# cd /opt/ansible-install/
[root@WekaMgmtServer ansible-install]# ./install_after_wsa_iso.sh






If the system is not prepared using the WMS, perform this procedure to set the networking and other tasks before configuring the WEKA cluster.
Once the hardware and software prerequisites are met, prepare the backend servers and clients for the WEKA system configuration.
This preparation consists of the following steps:
Enable SR-IOV (when required)
Related topics
To install Mellanox OFED, see .
To install Broadcom driver, see .
To install Intel driver, see .
Single Root I/O Virtualization (SR-IOV) enablement is mandatory in the following cases:
The servers are equipped with Intel NICs.
When working with client VMs where it is required to expose the virtual functions (VFs) of a physical NIC to the virtual NICs.
Related topic
The following example of the ifcfg script is a reference for configuring the Ethernet interface.
For the best performance, MTU 9000 (jumbo frame) is recommended. For jumbo frame configuration, refer to your switch vendor documentation.
Bring the interface up using the following command:
InfiniBand network configuration normally includes Subnet Manager (SM), but the procedure involved is beyond the scope of this document. However, it is important to be aware of the specifics of your SM configuration, such as partitioning and MTU, because they can affect the configuration of the endpoint ports in Linux. For best performance, MTU of 4092 is recommended.
Refer to the following ifcfg script when the IB network only has the default partition, i.e., "no pkey":
Bring the interface up using the following command:
Verify that the “default partition” connection is up, with all the attributes set:
On an InfiniBand network with a non-default partition number, p-key must be configured on the interface if the InfiniBand ports on your network are members of an InfiniBand partition other than the default (0x7FFF).
ignore-carrier: ignore-carrier is a NetworkManager configuration option. When set, it keeps the network interface up even if the physical link is down. It’s useful when services need to bind to the interface address at boot.
Open the /etc/NetworkManager/NetworkManager.conf file to edit it.
Under the [main] section, add one of the following lines depending on the operating system:
For some versions of Rocky Linux, RHEL, and CentOS: ignore-carrier=*
Example for RockyLinux and RHEL 8.7:
Example for some other versions:
Restart the NetworkManager service for the changes to take effect.
Use a large-size ICMP ping to check the basic TCP/IP connectivity between the interfaces of the servers:
The -M do flag prohibits packet fragmentation, which allows verification of correct MTU configuration between the two endpoints.
-s 8972 is the maximum ICMP packet size that can be transferred with MTU 9000, due to the overhead of ICMP and IP protocols.
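Putting the two flags together (the target address is a placeholder for a remote dataplane IP in your environment):

```shell
# 9000-byte MTU minus the 20-byte IP header and 8-byte ICMP header leaves 8972 payload bytes
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "ping -M do -s $PAYLOAD -c 3 <remote-dataplane-ip>"
```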
The following steps provide guidance for configuring dual-network links with policy-based routing on Linux systems. Adjust IP addresses and interface names according to your environment.
Open the /etc/sysctl.conf file using a text editor.
Add the following lines at the end of the file to set minimal configurations per InfiniBand (IB) or Ethernet (Eth) interface:
Save the file.
Navigate to /etc/sysconfig/network-scripts/.
Create the file /etc/sysconfig/network-scripts/route-mlnx0 with the following content:
Create the file /etc/sysconfig/network-scripts/route-mlnx1 with the following content:
For Ethernet (ETH): To set up routing for Ethernet connections, use the following commands:
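The original command listing is not preserved in this copy; the following is a minimal sketch consistent with the description that follows, using table 100, subnet 10.10.10.0/24, interface eth1, and NIC address 10.10.10.1 as placeholders. The InfiniBand variant is analogous, with ib0 and its address substituted.

```shell
# Route the NIC's subnet through its own table, and steer traffic from/to the
# NIC's address into that table (run as root; values are placeholders)
ip route add 10.10.10.0/24 dev eth1 table 100
ip rule add from 10.10.10.1 table 100
ip rule add to 10.10.10.1 table 100
```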
The route's first IP address in the provided commands represents the network's subnet to which the NIC is connected. The last address in the routing rules corresponds to the IP address of the NIC being configured, where eth1 is set to 10.10.10.1.
For InfiniBand (IB): To configure routing for InfiniBand connections, use the following commands:
The route's first IP address in the above commands signifies the network's subnet associated with the respective NIC. The last address in the routing rules corresponds to the IP address of the NIC being configured, where ib0 is set to 10.10.10.1.
Open the Netplan configuration file /etc/netplan/01-netcfg.yaml and adjust it:
After adjusting the Netplan configuration file, run the following commands:
Create /etc/sysconfig/network/ifrule-eth2 with:
Create /etc/sysconfig/network/ifrule-eth4 with:
Create /etc/sysconfig/network/scripts/ifup-route.eth2 with:
Create /etc/sysconfig/network/scripts/ifup-route.eth4 with:
Add the weka lines to /etc/iproute2/rt_tables:
Restart the interfaces or reboot the machine:
Related topic
The synchronization of time on computers and networks is considered good practice and is vitally important for the stability of the WEKA system. Proper timestamp alignment in packets and logs is very helpful for the efficient and quick resolution of issues.
Configure the clock synchronization software on the backends and clients according to the specific vendor instructions (see your OS documentation), before installing the WEKA software.
The WEKA system autonomously manages NUMA balancing, making optimal decisions. Therefore, turning off the Linux kernel’s NUMA balancing feature is a mandatory requirement to prevent extra latencies in operations. It’s crucial that the disabled NUMA balancing remains consistent and isn’t altered by a server reboot.
To persistently disable NUMA balancing, follow these steps:
Open the file located at: /etc/sysctl.conf
Append the following line: kernel.numa_balancing=0
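The two steps above can be performed from the command line, for example (run as root; sysctl -p applies the file without a reboot):

```shell
# Persistently disable kernel NUMA balancing
echo 'kernel.numa_balancing=0' >> /etc/sysctl.conf
sysctl -p
cat /proc/sys/kernel/numa_balancing   # a value of 0 confirms it is disabled
```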
WEKA highly recommends that any servers used as backends have no swap configured. This is distribution-dependent but is often a case of commenting out any swap entries in /etc/fstab and rebooting.
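On many distributions, disabling swap amounts to the following sketch (run as root; review the fstab edit before rebooting):

```shell
# Turn off active swap and comment out swap entries in /etc/fstab (backup kept as .bak)
swapoff -a
sed -i.bak -E '/\sswap\s/ s/^([^#])/#\1/' /etc/fstab
```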
The wekachecker is a tool that validates the readiness of the servers in the cluster before installing the WEKA software.
The wekachecker performs the following validations:
Dataplane IP, jumbo frames, and routing
ssh connection to all servers
Timesync
OS release
Procedure
Download the wekachecker tarball from and extract it.
From the install directory, run ./wekachecker <hostnames/IPs>
Where:
The hostnames/IPs is a space-separated list of all the cluster hostnames or IP addresses connected to the high-speed networking.
Example:
./wekachecker 10.1.1.11 10.1.1.12 10.1.1.4 10.1.1.5 10.1.1.6 10.1.1.7 10.1.1.8
Once the report shows no failures and no warnings that must be fixed, you can install the WEKA software.
If you can use the WEKA Configurator, go to:
Otherwise, go to:
Example: If the partition number is 0x2, the limited member p-key equals the p-key itself, i.e., 0x2. The full member p-key is calculated as the logical OR of 0x8000 and the p-key (0x2), and is therefore 0x8002.
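The membership-bit arithmetic can be checked with plain shell arithmetic (0x8000 is the full-membership bit):

```shell
# Full-member p-key = logical OR of 0x8000 and the partition number
pkey=0x2
printf 'limited: 0x%X  full: 0x%X\n' $(( pkey )) $(( 0x8000 | pkey ))
# prints: limited: 0x2  full: 0x8002
```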
For each pkey-ed IPoIB interface, it's necessary to create two ifcfg scripts. To configure your own pkey-ed IPoIB interface, refer to the following examples, where a pkey of 0x8002 is used. You may need to manually create the child device.
Bring the interface up using the following command:
Verify the connection is up with all the non-default partition attributes set:
For some other versions: ignore-carrier=<device-name1>,<device-name2>.
Replace <device-name1>,<device-name2> with the actual device names you want to apply this setting to.
Create the files /etc/sysconfig/network-scripts/rule-mlnx0 and /etc/sysconfig/network-scripts/rule-mlnx1 with the following content:
Open /etc/iproute2/rt_tables and add the following lines:
Save the changes.
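The rt_tables edit can be sketched as an idempotent shell append, using the table IDs and names from this example. RT_TABLES is a temporary copy here for illustration; use /etc/iproute2/rt_tables on the real server:

```shell
# Sketch: add the two routing tables if absent (IDs/names from this example)
RT_TABLES=$(mktemp)
for entry in '100 weka1' '101 weka2'; do
  grep -qx "$entry" "$RT_TABLES" || echo "$entry" >> "$RT_TABLES"
done
cat "$RT_TABLES"
```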
Sufficient capacity in /opt/weka
Available RAM
Internet connection availability
NTP
DNS configuration
Firewall rules
WEKA required packages
OFED required packages
Recommended packages
HT/AMT is disabled
The kernel is supported
CPU has a supported AES, and it is enabled
NUMA balancing is disabled
RAM state
XFS FS type installed
Mellanox OFED is installed
IOMMU mode for SSD drives is disabled
rpcbind utility is enabled
SquashFS is enabled
noexec mount option on /tmp
wekachecker writes any failures or warnings to the file: test_results.txt.
table weka1 from 10.90.0.1
table weka2 from 10.90.1.1
100 weka1
101 weka2
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="no"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
IPV6_AUTOCONF="no"
IPV6_DEFROUTE="no"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp24s0"
DEVICE="enp24s0"
ONBOOT="yes"
NM_CONTROLLED=no
IPADDR=192.168.1.1
NETMASK=255.255.0.0
MTU=9000
# ifup enp24s0
TYPE=Infiniband
ONBOOT=yes
BOOTPROTO=static
STARTMODE=auto
USERCTL=no
NM_CONTROLLED=no
DEVICE=ib1
IPADDR=192.168.1.1
NETMASK=255.255.0.0
MTU=4092
# ifup ib1
# ip a s ib1
4: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
link/infiniband 00:00:03:72:fe:80:00:00:00:00:00:00:24:8a:07:03:00:a8:09:48
brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
inet 192.168.1.1/16 brd 192.168.255.255 scope global noprefixroute ib1
valid_lft forever preferred_lft forever
[main]
ignore-carrier=*
[main]
ignore-carrier=ib0,ib1
# ping -M do -s 8972 -c 3 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 8972(9000) bytes of data.
8980 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.063 ms
8980 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.087 ms
8980 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.075 ms
--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.063/0.075/0.087/0.009 ms
# Minimal configuration, set per IB/Eth interface
net.ipv4.conf.ib0.arp_announce = 2
net.ipv4.conf.ib1.arp_announce = 2
net.ipv4.conf.ib0.arp_filter = 1
net.ipv4.conf.ib1.arp_filter = 1
net.ipv4.conf.ib0.arp_ignore = 1
net.ipv4.conf.ib1.arp_ignore = 1
# As an alternative set for all interfaces by default
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.default.arp_filter = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
10.90.0.0/16 dev mlnx0 src 10.90.0.1 table weka1
default via 10.90.2.1 dev mlnx0 table weka1
10.90.0.0/16 dev mlnx1 src 10.90.1.1 table weka2
default via 10.90.2.1 dev mlnx1 table weka2
nmcli connection modify eth1 ipv4.routes "10.10.10.0/24 src=10.10.10.1 table=100" ipv4.routing-rules "priority 101 from 10.10.10.1 table 100"
nmcli connection modify eth2 ipv4.routes "10.10.10.0/24 src=10.10.10.101 table=200" ipv4.routing-rules "priority 102 from 10.10.10.101 table 200"
nmcli connection modify ib0 ipv4.route-metric 100
nmcli connection modify ib1 ipv4.route-metric 101
nmcli connection modify ib0 ipv4.routes "10.10.10.0/24 src=10.10.10.1 table=100"
nmcli connection modify ib0 ipv4.routing-rules "priority 101 from 10.10.10.1 table 100"
nmcli connection modify ib1 ipv4.routes "10.10.10.0/24 src=10.10.10.101 table=200"
nmcli connection modify ib1 ipv4.routing-rules "priority 102 from 10.10.10.101 table 200"
nmcli connection modify ib1 ipv4.route-metric 101
nmcli connection modify ib0 ipv4.routes "10.10.10.0/24 table=100"
nmcli connection modify ib0 ipv4.routing-rules "priority 101 from 10.10.10.1 table 100"
nmcli connection modify ib1 ipv4.routes "10.10.10.0/24 table=200"
nmcli connection modify ib1 ipv4.routing-rules "priority 102 from 10.10.10.101 table 200"
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: true
      nameservers:
        addresses: [8.8.8.8]
    ib1:
      addresses: [10.222.0.10/24]
      routes:
        - to: 10.222.0.0/24
          via: 10.222.0.10
          table: 100
      routing-policy:
        - from: 10.222.0.10
          table: 100
          priority: 32764
      ignore-carrier: true
    ib2:
      addresses: [10.222.0.20/24]
      routes:
        - to: 10.222.0.0/24
          via: 10.222.0.20
          table: 101
      routing-policy:
        - from: 10.222.0.20
          table: 101
          priority: 32765
      ignore-carrier: true
ip route add 10.222.0.0/24 via 10.222.0.10 dev ib1 table 100
ip route add 10.222.0.0/24 via 10.222.0.20 dev ib2 table 101
ipv4 from 192.168.11.21 table 100
ipv4 from 192.168.11.31 table 101
ip route add 192.168.11.0/24 dev eth2 src 192.168.11.21 table weka1
ip route add 192.168.11.0/24 dev eth4 src 192.168.11.31 table weka2
100 weka1
101 weka2
ifdown eth2; ifdown eth4; ifup eth2; ifup eth4
Dataplane IP Jumbo Frames/Routing test [PASS]
Check ssh to all hosts [PASS]
Verify timesync [PASS]
Check if OS has SELinux disabled or in permissive mode [PASS]
Check OS Release... [PASS]
Check /opt/weka for sufficient capacity... [WARN]
Check available RAM... [PASS]
Check if internet connection available... [PASS]
Check for NTP... [PASS]
Check DNS configuration... [PASS]
Check Firewall rules... [PASS]
Check for WEKA Required Packages... [PASS]
Check for OFED Required Packages... [PASS]
Check for Recommended Packages... [WARN]
Check if HT/AMT is disabled [WARN]
Check if kernel is supported... [PASS]
Check if CPU has AES enabled and supported [PASS]
Check if Network Manager is disabled [WARN]
Checking if Numa balancing is enabled [WARN]
Checking RAM state for errors [PASS]
Check for XFS FS type installed [PASS]
Check if Mellanox OFED is installed [PASS]
Check for IOMMU disabled [PASS]
Check for rpcbind enabled [PASS]
Check for squashfs enabled [PASS]
Check for /tmp noexec mount [PASS]
RESULTS: 21 Tests Passed, 0 Failed, 5 Warnings
TYPE=Infiniband
ONBOOT=yes
MTU=4092
BOOTPROTO=static
STARTMODE=auto
USERCTL=no
NM_CONTROLLED=no
DEVICE=ib1
TYPE=Infiniband
BOOTPROTO=none
CONNECTED_MODE=yes
DEVICE=ib1.8002
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
MTU=4092
NAME=ib1.8002
NM_CONTROLLED=no
ONBOOT=yes
PHYSDEV=ib1
PKEY_ID=2
PKEY=yes
BROADCAST=192.168.255.255
NETMASK=255.255.0.0
IPADDR=192.168.1.1
# ifup ib1.8002
# ip a s ib1.8002
5: ib1.8002@ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP qlen 256
link/infiniband 00:00:11:03:fe:80:00:00:00:00:00:00:24:8a:07:03:00:a8:09:48 brd 00:ff:ff:ff:ff:12:40:1b:80:02:00:00:00:00:00:00:ff:ff:ff:ff
inet 192.168.1.1/16 brd 192.168.255.255 scope global noprefixroute ib1.8002
valid_lft forever preferred_lft forever
sysctl -p /etc/sysctl.conf
This page describes a series of tests for measuring performance after the installation of the WEKA system. The same tests can be used to test the performance of any other storage solution.
There are three main metrics when measuring a storage system's performance:
Latency, which is the time from operation initiation to completion
The number of different IO operations (read/write/metadata) that the system can process concurrently
The bandwidth of data that the system can process concurrently
Each performance metric applies to read operations, write operations, or a mixture of read and write operations.
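IOPS and bandwidth are linked by the IO size: bandwidth equals IOPS multiplied by the block size. For example, 4 KiB reads at roughly 390,000 IOPS (a figure taken from the single-client results later on this page) amount to about 1.5 GiB/s:

```shell
# MiB/s = IOPS x block size (bytes) / 2^20, for 4 KiB IOs
echo $(( 390494 * 4096 / 1048576 ))   # prints 1525 (~1.5 GiB/s)
```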
When measuring the WEKA system performance, different mount modes produce different performance characteristics. Additionally, the client network configuration (user-space DPDK networking versus kernel UDP) significantly affects performance.
FIO is a generic open-source storage performance testing tool. In this documentation, the usage of FIO version 3.20 is assumed.
All FIO testing is done using the client/server capabilities of FIO. This makes multiple-client testing easier since FIO reports aggregated results for all clients under the test. Single-client tests are run the same way to keep the results consistent.
Start the FIO server on every one of the clients:
Run the test command from one of the clients. Note that the clients must have a WEKA filesystem mounted.
An example of launching a test (sometest) on all clients in a file (clients.txt) using the server/client model:
An example for the clients' file, when running multiple clients:
An example of aggregated test results:
The single-client and aggregated tests differ only in the clients participating in the test, as defined in clients.txt.
MDTest is a generic open-source metadata performance testing tool. In this documentation, the usage of version 1.9.3 is assumed.
MDTest uses an MPI framework to coordinate the job across multiple nodes. The results presented here were generated using MPI version 3.3.2. While results can vary with different MPI implementations, most are based on the same ROMIO and perform similarly.
Overall, the tests on this page are designed to demonstrate the sustainable peak performance of the filesystem. Care has been taken to ensure they are realistic and reproducible.
Where possible, the benchmarks try to negate the effects of caching. For file testing, o_direct calls are used to bypass the client's cache. In the case of metadata testing, each phase of testing uses different clients. Also, between each test, the Linux caches are flushed to ensure all data being accessed is not present in the cache. While applications will often take advantage of cached data and metadata, this testing focuses on the filesystem's ability to deliver data independent of caching on the client.
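Flushing the Linux caches between phases is typically done with the standard drop_caches recipe, shown here as a guarded sketch (writing to drop_caches requires root):

```shell
# Flush dirty pages, then drop the page, dentry, and inode caches
sync
if [ -w /proc/sys/vm/drop_caches ]; then
  echo 3 > /proc/sys/vm/drop_caches
else
  echo "root privileges are needed to drop caches"
fi
```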
While the output of a single iteration is shown below, each test was run several times, and the averaged results appear in the following results summary.
This test measures the client throughput for large (1MB) reads. The job below tries to maximize the read throughput from a single client. The test utilizes multiple threads, each one performing 1 MB reads.
In this test output example, results show a bandwidth of 8.95 GiB/s from a single client.
This test measures the client throughput for large (1MB) writes. The job below tries to maximize the write throughput from a single client. The test utilizes multiple threads, each one performing 1MB writes.
In this test output example, results show a bandwidth of 6.87 GiB/s.
This test measures the ability of the client to deliver concurrent 4KB reads. The job below tries to maximize the system read IOPS from a single client. The test utilizes multiple threads, each one performing 4KB reads.
In this test output example, results show 390,494 IOPS from a single client.
This test measures the ability of the client to deliver concurrent 4KB writes. The job below tries to maximize the system write IOPS from a single client. The test utilizes multiple threads, each one performing 4KB writes.
In this test output example, results show 288,215 IOPS from a single client.
This test measures the minimal achievable read latency under a light load. The test measures the latency over a single-threaded sequence of 4KB reads across multiple files. Each read is executed only after the previous read has been served.
In this test output example, results show an average latency of 229 microseconds, where 99.5% of the reads terminated in 334 microseconds or less.
This test measures the minimal achievable write latency under a light load. The test measures the latency over a single-threaded sequence of 4KB writes across multiple files. Each write is executed only after the previous write has been served.
In this test output example, results show an average latency of 226 microseconds, where 99.5% of the writes terminated in 293 microseconds or less.
The test measures the rate of metadata operations (such as create, stat, and delete) across the cluster. It uses 20 million files: 8 clients with 136 threads per client (1,088 processes in total), where each thread handles 18,382 files. The test is invoked 3 times and provides a summary of the iterations.
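The sizing works out as follows: 8 clients times 136 threads gives the 1,088 MPI processes passed to -np, and 1,088 threads times 18,382 files each is just under 20 million files:

```shell
echo $(( 8 * 136 ))            # 1088 MPI processes (matches -np 1088)
echo $(( 8 * 136 * 18382 ))    # 19999616 files (~20 million)
```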
If it is preferred to run all the tests sequentially and review the results afterward, follow the instructions below.
From each client, mount a WEKA filesystem at /mnt/weka and create the following directories there:
Copy the FIOmaster.txt file to your server and create the clients.txt file with your clients' hostnames.
Run the benchmarks using the following commands:
Single-client benchmarks:
Read Throughput: 8.9 GiB/s | 21.4 GiB/s
Write Throughput: 9.4 GiB/s | 17.2 GiB/s
Read IOPS: 393,333 ops/s | 563,667 ops/s
Write IOPS: 302,333 ops/s | 378,667 ops/s
Read Latency: 272 µs avg. (99.5% completed under 459 µs) | 144.76 µs avg. (99.5% completed under 260 µs)
Write Latency: 298 µs avg. (99.5% completed under 432 µs) | 107.12 µs avg. (99.5% completed under 142 µs)
Metadata (MDTest) benchmarks:
Creates: 79,599 ops/s | 234,472 ops/s
Stats: 1,930,721 ops/s | 3,257,394 ops/s
Deletes: 117,644 ops/s | 361,755 ops/s
Aggregated multi-client benchmarks:
Read Throughput: 36.2 GiB/s | 123 GiB/s
Write Throughput: 11.6 GiB/s | 37.6 GiB/s
Read IOPS: 1,978,330 ops/s | 4,346,330 ops/s
Write IOPS: 404,670 ops/s | 1,317,000 ops/s
fio --server --daemonize=/tmp/fio.pid
fio --client=clients.txt sometest.txt
weka-client-01
weka-client-02
weka-client-03
weka-client-04
weka-client-05
weka-client-06
weka-client-07
weka-client-08
All clients: (groupid=0, jobs=16): err= 0: pid=0: Wed Jun 3 22:10:46 2020
read: IOPS=30.1k, BW=29.4Gi (31.6G)(8822GiB/300044msec)
slat (nsec): min=0, max=228000, avg=6308.42, stdev=4988.75
clat (usec): min=1132, max=406048, avg=16982.89, stdev=27664.80
lat (usec): min=1147, max=406051, avg=16989.20, stdev=27664.25
bw ( MiB/s): min= 3576, max=123124, per=93.95%, avg=28284.95, stdev=42.13, samples=287520
iops : min= 3576, max=123124, avg=28284.82, stdev=42.13, samples=287520
lat (msec) : 2=6.64%, 4=56.55%, 10=8.14%, 20=4.42%, 50=13.81%
lat (msec) : 100=7.01%, 250=3.44%, 500=0.01%
cpu : usr=0.11%, sys=0.09%, ctx=9039177, majf=0, minf=8088
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=9033447,0,0,0 short=0,0,0,0 dropped=0,0,0,0
[global]
filesize=128G
time_based=1
numjobs=32
startdelay=5
exitall_on_error=1
create_serialize=0
filename_format=$jobnum/$filenum/bw.$jobnum.$filenum
directory=/mnt/weka/fio
group_reporting=1
clocksource=gettimeofday
runtime=300
ioengine=posixaio
disk_util=0
iodepth=1
[read_throughput]
bs=1m
rw=read
direct=1
new_group
read_throughput: (groupid=0, jobs=32): err= 0: pid=70956: Wed Jul 8 13:27:48 2020
read: IOPS=9167, BW=9167MiB/s (9613MB/s)(2686GiB/300004msec)
slat (nsec): min=0, max=409000, avg=3882.55, stdev=3631.79
clat (usec): min=999, max=14947, avg=3482.93, stdev=991.25
lat (usec): min=1002, max=14949, avg=3486.81, stdev=991.16
clat percentiles (usec):
| 1.00th=[ 1795], 5.00th=[ 2147], 10.00th=[ 2376], 20.00th=[ 2671],
| 30.00th=[ 2900], 40.00th=[ 3130], 50.00th=[ 3359], 60.00th=[ 3589],
| 70.00th=[ 3851], 80.00th=[ 4178], 90.00th=[ 4752], 95.00th=[ 5342],
| 99.00th=[ 6521], 99.50th=[ 7046], 99.90th=[ 8160], 99.95th=[ 8717],
| 99.99th=[ 9896]
bw ( MiB/s): min= 7942, max=10412, per=100.00%, avg=9179.14, stdev=12.41, samples=19168
iops : min= 7942, max=10412, avg=9179.14, stdev=12.41, samples=19168
lat (usec) : 1000=0.01%
lat (msec) : 2=2.76%, 4=72.16%, 10=25.07%, 20=0.01%
cpu : usr=0.55%, sys=0.34%, ctx=2751410, majf=0, minf=490
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=2750270,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
[global]
filesize=128G
time_based=1
numjobs=32
startdelay=5
exitall_on_error=1
create_serialize=0
filename_format=$jobnum/$filenum/bw.$jobnum.$filenum
directory=/mnt/weka/fio
group_reporting=1
clocksource=gettimeofday
runtime=300
ioengine=posixaio
disk_util=0
iodepth=1
[write_throughput]
bs=1m
rw=write
direct=1
new_group
write_throughput: (groupid=0, jobs=32): err= 0: pid=71903: Wed Jul 8 13:43:15 2020
write: IOPS=7034, BW=7035MiB/s (7377MB/s)(2061GiB/300005msec); 0 zone resets
slat (usec): min=12, max=261, avg=39.22, stdev=12.92
clat (usec): min=2248, max=20882, avg=4505.62, stdev=1181.45
lat (usec): min=2318, max=20951, avg=4544.84, stdev=1184.64
clat percentiles (usec):
| 1.00th=[ 2769], 5.00th=[ 2999], 10.00th=[ 3163], 20.00th=[ 3458],
| 30.00th=[ 3752], 40.00th=[ 4047], 50.00th=[ 4359], 60.00th=[ 4686],
| 70.00th=[ 5014], 80.00th=[ 5407], 90.00th=[ 5997], 95.00th=[ 6587],
| 99.00th=[ 8160], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[12125],
| 99.99th=[14746]
bw ( MiB/s): min= 5908, max= 7858, per=100.00%, avg=7043.58, stdev= 9.37, samples=19168
iops : min= 5908, max= 7858, avg=7043.58, stdev= 9.37, samples=19168
lat (msec) : 4=38.87%, 10=60.90%, 20=0.22%, 50=0.01%
cpu : usr=1.34%, sys=0.15%, ctx=2114914, majf=0, minf=473
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,2110493,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
[global]
filesize=4G
time_based=1
numjobs=192
startdelay=5
exitall_on_error=1
create_serialize=0
filename_format=$jobnum/$filenum/iops.$jobnum.$filenum
directory=/mnt/weka/fio
group_reporting=1
clocksource=gettimeofday
runtime=300
ioengine=posixaio
disk_util=0
iodepth=1
[read_iops]
bs=4k
rw=randread
direct=1
new_group
read_iops: (groupid=0, jobs=192): err= 0: pid=66528: Wed Jul 8 12:30:38 2020
read: IOPS=390k, BW=1525MiB/s (1599MB/s)(447GiB/300002msec)
slat (nsec): min=0, max=392000, avg=3512.56, stdev=2950.62
clat (usec): min=213, max=15496, avg=486.61, stdev=80.30
lat (usec): min=215, max=15505, avg=490.12, stdev=80.47
clat percentiles (usec):
| 1.00th=[ 338], 5.00th=[ 375], 10.00th=[ 400], 20.00th=[ 424],
| 30.00th=[ 445], 40.00th=[ 465], 50.00th=[ 482], 60.00th=[ 498],
| 70.00th=[ 519], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 619],
| 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 783], 99.95th=[ 816],
| 99.99th=[ 1106]
bw ( MiB/s): min= 1458, max= 1641, per=100.00%, avg=1525.52, stdev= 0.16, samples=114816
iops : min=373471, max=420192, avg=390494.54, stdev=40.47, samples=114816
lat (usec) : 250=0.01%, 500=60.20%, 750=39.60%, 1000=0.19%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
cpu : usr=1.24%, sys=1.52%, ctx=117366459, majf=0, minf=3051
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=117088775,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
[global]
filesize=4G
time_based=1
numjobs=192
startdelay=5
exitall_on_error=1
create_serialize=0
filename_format=$jobnum/$filenum/iops.$jobnum.$filenum
directory=/mnt/weka/fio
group_reporting=1
clocksource=gettimeofday
runtime=300
ioengine=posixaio
disk_util=0
iodepth=1
[write_iops]
bs=4k
rw=randwrite
direct=1
new_group
write_iops: (groupid=0, jobs=192): err= 0: pid=72163: Wed Jul 8 13:48:24 2020
write: IOPS=288k, BW=1125MiB/s (1180MB/s)(330GiB/300003msec); 0 zone resets
slat (nsec): min=0, max=2591.0k, avg=5030.10, stdev=4141.48
clat (usec): min=219, max=17801, avg=659.20, stdev=213.57
lat (usec): min=220, max=17803, avg=664.23, stdev=213.72
clat percentiles (usec):
| 1.00th=[ 396], 5.00th=[ 441], 10.00th=[ 474], 20.00th=[ 515],
| 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 668],
| 70.00th=[ 701], 80.00th=[ 750], 90.00th=[ 840], 95.00th=[ 971],
| 99.00th=[ 1450], 99.50th=[ 1614], 99.90th=[ 2409], 99.95th=[ 3490],
| 99.99th=[ 4359]
bw ( MiB/s): min= 1056, max= 1224, per=100.00%, avg=1125.96, stdev= 0.16, samples=114816
iops : min=270390, max=313477, avg=288215.11, stdev=40.70, samples=114816
lat (usec) : 250=0.01%, 500=15.96%, 750=63.43%, 1000=16.05%
lat (msec) : 2=4.41%, 4=0.14%, 10=0.02%, 20=0.01%
cpu : usr=1.21%, sys=1.49%, ctx=86954124, majf=0, minf=3055
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,86398871,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
[global]
filesize=4G
time_based=1
startdelay=5
exitall_on_error=1
create_serialize=0
filename_format=$jobnum/$filenum/iops.$jobnum.$filenum
directory=/mnt/weka/fio
group_reporting=1
clocksource=gettimeofday
runtime=300
ioengine=posixaio
disk_util=0
iodepth=1
[read_latency]
numjobs=1
bs=4k
rw=randread
direct=1
new_group
read_latency: (groupid=0, jobs=1): err= 0: pid=71741: Wed Jul 8 13:38:06 2020
read: IOPS=4318, BW=16.9MiB/s (17.7MB/s)(5061MiB/300001msec)
slat (nsec): min=0, max=53000, avg=1923.23, stdev=539.64
clat (usec): min=160, max=1743, avg=229.09, stdev=44.80
lat (usec): min=162, max=1746, avg=231.01, stdev=44.80
clat percentiles (usec):
| 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188],
| 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 233], 60.00th=[ 245],
| 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 318],
| 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 437],
| 99.99th=[ 529]
bw ( KiB/s): min=16280, max=17672, per=100.00%, avg=17299.11, stdev=195.37, samples=599
iops : min= 4070, max= 4418, avg=4324.78, stdev=48.84, samples=599
lat (usec) : 250=66.18%, 500=33.80%, 750=0.02%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=0.95%, sys=1.44%, ctx=1295670, majf=0, minf=13
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1295643,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
[global]
filesize=4G
time_based=1
startdelay=5
exitall_on_error=1
create_serialize=0
filename_format=$jobnum/$filenum/iops.$jobnum.$filenum
directory=/mnt/weka/fio
group_reporting=1
clocksource=gettimeofday
runtime=300
ioengine=posixaio
disk_util=0
iodepth=1
[write_latency]
numjobs=1
bs=4k
rw=randwrite
direct=1
new_group
write_latency: (groupid=0, jobs=1): err= 0: pid=72709: Wed Jul 8 13:53:33 2020
write: IOPS=4383, BW=17.1MiB/s (17.9MB/s)(5136MiB/300001msec); 0 zone resets
slat (nsec): min=0, max=56000, avg=1382.96, stdev=653.78
clat (usec): min=195, max=9765, avg=226.21, stdev=109.45
lat (usec): min=197, max=9766, avg=227.59, stdev=109.46
clat percentiles (usec):
| 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 217],
| 30.00th=[ 219], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223],
| 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 233], 95.00th=[ 243],
| 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 725], 99.95th=[ 2540],
| 99.99th=[ 6063]
bw ( KiB/s): min=16680, max=18000, per=100.00%, avg=17555.48, stdev=279.31, samples=599
iops : min= 4170, max= 4500, avg=4388.87, stdev=69.83, samples=599
lat (usec) : 250=96.27%, 500=3.61%, 750=0.03%, 1000=0.01%
lat (msec) : 2=0.03%, 4=0.03%, 10=0.03%
cpu : usr=0.93%, sys=1.52%, ctx=1315723, majf=0, minf=14
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1314929,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
mpiexec -f <hostfile> -np 1088 mdtest -v -N 136 -i 3 -n 18382 -F -u -d /mnt/weka/mdtest
SUMMARY rate: (of 3 iterations)
Operation Max Min Mean Std Dev
--------- --- --- ---- -------
File creation : 40784.448 40784.447 40784.448 0.001
File stat : 2352915.997 2352902.666 2352911.311 6.121
File read : 217236.252 217236.114 217236.162 0.064
File removal : 44101.905 44101.896 44101.902 0.004
Tree creation : 3.788 3.097 3.342 0.316
Tree removal : 1.192 1.142 1.172 0.022
# create directories in the weka filesystem
mkdir /mnt/weka/fio
mkdir /mnt/weka/mdtest
# single client
fio FIOmaster.txt
# multiple clients
fio --client=clients.txt FIOmaster.txt
# mdtest
mpiexec -f clients.txt -np 1088 mdtest -v -N 136 -i 3 -n 18382 -F -u -d /mnt/weka/mdtest
This page describes how to view and manage object stores using the CLI.
Using the CLI, you can perform the following actions:
Command: weka fs tier obs
Use this command to view information on all the object stores configured in the WEKA system.
Command: weka fs tier obs update
Use the following command line to edit an object store:
weka fs tier obs update <name> [--new-name new-name] [--site site] [--hostname=<hostname>] [--port=<port>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--remove-bandwidth=<remove-bandwidth>] [--max-concurrent-downloads=<max-concurrent-downloads>] [--max-concurrent-uploads=<max-concurrent-uploads>] [--max-concurrent-removals=<max-concurrent-removals>] [--enable-upload-tags=<enable-upload-tags>]
Parameters
name *
Name of the object store to update.
new-name
New name for the object store.
site
Site location of the object store.
Possible values:
local - for tiering+snapshots
remote - for snapshots only
hostname
Object store host identifier (hostname or IP address) to use as a default for added buckets.
port
Object store port, to be used as a default for added buckets.
Command: weka fs tier s3
Use this command to view information on all the object store buckets configured in the WEKA system.
Command: weka fs tier s3 add
Use the following command line to add an S3 object store:
weka fs tier s3 add <name> [--site site] [--obs-name obs-name] [--hostname=<hostname>] [--port=<port> [--bucket=<bucket>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--remove-bandwidth=<remove-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--errors-timeout=<errors-timeout>] [--prefetch-mib=<prefetch-mib>] [--enable-upload-tags=<enable-upload-tags>] [--max-concurrent-downloads=<max-concurrent-downloads>] [--max-concurrent-uploads=<max-concurrent-uploads>] [--max-concurrent-removals=<max-concurrent-removals>] [--max-extents-in-data-blob=<max-extents-in-data-blob>] [--max-data-blob-size=<max-data-blob-size>] [--sts-operation-type=<sts-operation-type>] [--sts-role-arn=<sts-role-arn>] [--sts-role-session-name=<sts-role-session-name>] [--sts-session-duration=<sts-session-duration>]
Parameters
name*
Name of the object store bucket to add.
site
local - for tiering+snapshots,
remote - for snapshots only.
It must be the same as the object store site it is added to (obs-name).
local
obs-name
Name of the existing object store to add this object store bucket to.
If there is only one object store of the type mentioned in site, it is chosen automatically.
The max-concurrent settings are applied per WEKA compute process and the minimum setting of all object stores is applied.
When you create the object store bucket in AWS, to use the storage classes: S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier Instant Retrieval, do the following:
Create the bucket in S3 Standard.
Create an AWS lifecycle policy to transition objects to these storage classes.
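For example, a lifecycle rule that transitions objects to S3 Intelligent-Tiering after upload might look like the following fragment (the rule ID and day count are illustrative; see the AWS lifecycle configuration documentation for the exact schema):

```json
{
  "Rules": [
    {
      "ID": "weka-tiering-transition",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 0, "StorageClass": "INTELLIGENT_TIERING" }
      ]
    }
  ]
}
```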
Make the relevant changes and click Update to update the object store bucket.
Command: weka fs tier s3 update
Use the following command line to edit an object store bucket:
weka fs tier s3 update <name> [--new-name=<new-name>] [--new-obs-name new-obs-name] [--hostname=<hostname>] [--port=<port> [--bucket=<bucket>] [--auth-method=<auth-method>] [--region=<region>] [--access-key-id=<access-key-id>] [--secret-key=<secret-key>] [--protocol=<protocol>] [--bandwidth=<bandwidth>] [--download-bandwidth=<download-bandwidth>] [--upload-bandwidth=<upload-bandwidth>] [--remove-bandwidth=<remove-bandwidth>] [--errors-timeout=<errors-timeout>] [--prefetch-mib=<prefetch-mib>] [--enable-upload-tags=<enable-upload-tags>] [--max-concurrent-downloads=<max-concurrent-downloads>] [--max-concurrent-uploads=<max-concurrent-uploads>] [--max-concurrent-removals=<max-concurrent-removals>] [--max-extents-in-data-blob=<max-extents-in-data-blob>] [--max-data-blob-size=<max-data-blob-size>] [--sts-operation-type=<sts-operation-type>] [--sts-role-arn=<sts-role-arn>] [--sts-role-session-name=<sts-role-session-name>] [--sts-session-duration=<sts-session-duration>]
Parameters
name*
A valid name of the object store bucket to edit.
new-name
New name for the object store bucket
new-obs-name
A new object store name to add this object store bucket to. It must be an existing object store with the same site value.
hostname
Object store host identifier or IP.
port
A valid object store port
Command: weka fs tier ops
Use the following command line to list the recent operations running on an object store:
weka fs tier ops <name> [--format format] [--output output]...[--sort sort]...[--filter filter]...[--raw-units] [--UTC] [--no-header] [--verbose]
Parameters
name*
A valid object store bucket name to show its recent operations.
format
Specify the output format.
Possible values: view, csv, markdown, json, or oldview
view
output
Specify the columns in the output.
Possible values:
node, obsBucket, key, type, execution, phase, previous, start, size, results, errors, lastHTTP, concurrency
All columns
Command: weka fs tier s3 delete
Use the following command line to delete an object store bucket:
weka fs tier s3 delete <name>
Parameters
name*
A valid name of the object store bucket to delete.
auth-method
Authentication method to use as a default for added buckets.
Possible values: None, AWSSignature2, AWSSignature4
region
Region name to use as a default for added buckets.
access-key-id
Object store access key ID to use as a default for added buckets.
secret-key
Object store secret key to use as a default for added buckets.
protocol
Protocol type to use as a default for added buckets.
Possible values: HTTP, HTTPS, HTTPS_UNVERIFIED
bandwidth
Bandwidth limitation per core (Mbps).
download-bandwidth
Object store download bandwidth limitation per core (Mbps).
upload-bandwidth
Object store upload bandwidth limitation per core (Mbps).
remove-bandwidth
A bandwidth (Mbps) to limit the throughput of delete requests sent to the object store. Setting a bandwidth equal to or lower than the object store deletion throughput prevents an increase in the object store deletions queue.
max-concurrent-downloads
Maximum number of downloads concurrently performed on this object store in a single IO node.
Possible values: 1-64
max-concurrent-uploads
Maximum number of uploads concurrently performed on this object store in a single IO node.
Possible values: 1-64
max-concurrent-removals
Maximum number of removals concurrently performed on this object store in a single IO node.
Possible values: 1-64
enable-upload-tags
Determines whether to enable object-tagging or not. To use as a default for added buckets.
Possible values: true, false
hostname *
Object store host identifier or IP. Mandatory if not specified at the object store level.
The hostname specified in obs-name if present
port
A valid object store port.
The port specified in obs-name if present, otherwise 80
bucket
A valid object store bucket name.
auth-method *
Authentication method.
Possible values: None, AWSSignature2, AWSSignature4.
Mandatory if not specified at the object store level.
The auth-method specified in obs-name if present
region *
Region name. Mandatory if not specified at the object store level.
The region specified in obs-name if present
access-key-id *
Object store bucket access key ID. Mandatory if not specified at the object store level (can be left empty when using an IAM role in AWS or GCP).
The access-key-id specified in obs-name if present
secret-key *
Object store bucket secret key. Mandatory if not specified at the object store level (can be left empty when using an IAM role in AWS or GCP).
The secret-key specified in obs-name if present
protocol
Protocol type to be used.
Possible values: HTTP, HTTPS or HTTPS_UNVERIFIED.
The protocol specified in obs-name if present, otherwise HTTP
bandwidth
Bucket bandwidth limitation per core (Mbps).
download-bandwidth
Bucket download bandwidth limitation per core (Mbps)
upload-bandwidth
Bucket upload bandwidth limitation per core (Mbps)
remove-bandwidth
A bandwidth (Mbps) to limit the throughput of delete requests sent to the object store. Setting a bandwidth equal to or lower than the object store deletion throughput prevents an increase in the object store deletions queue.
errors-timeout
If the object store link is down longer than this timeout period, all IOs that need data return an error.
Possible values: 1m-15m, or 60s-900s.
For example, 300s.
300s
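The errors-timeout format above accepts a number with an s (seconds) or m (minutes) suffix, constrained to the 60s-900s (1m-15m) window. The following sketch shows client-side validation of such a value before passing it to the CLI; the parsing logic is illustrative, not WEKA's own parser:

```python
import re

# Parse an errors-timeout value such as "300s" or "5m" and validate it
# against the documented 60s-900s (1m-15m) range. Illustrative only.
def parse_errors_timeout(value: str) -> int:
    m = re.fullmatch(r'(\d+)([sm])', value)
    if not m:
        raise ValueError(f'bad timeout: {value!r}')
    seconds = int(m.group(1)) * (60 if m.group(2) == 'm' else 1)
    if not 60 <= seconds <= 900:
        raise ValueError(f'timeout out of range: {value!r}')
    return seconds
```

For example, both "300s" and "5m" resolve to 300 seconds, matching the default shown above.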
prefetch-mib
The data size (MiB) to prefetch when reading a whole MiB on the object store.
0
enable-upload-tags
Whether to enable object-tagging or not.
Possible values: true or false
false
max-concurrent-downloads
Maximum number of downloads performed concurrently on this object store in a single IO node.
Possible values: 1-64
max-concurrent-uploads
Maximum number of uploads performed concurrently on this object store in a single IO node.
Possible values: 1-64
max-concurrent-removals
Maximum number of removals performed concurrently on this object store in a single IO node.
Possible values: 1-64
max-extents-in-data-blob
Maximum number of extents' data to upload to an object store data blob.
max-data-blob-size
Maximum size to upload to an object store data blob.
Format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB.
sts-operation-type
AWS STS operation type to use.
Possible values: assume_role or none
none
sts-role-arn
The Amazon Resource Name (ARN) of the role to assume. Mandatory when setting sts-operation to assume_role.
sts-role-session
A unique identifier for the assumed role session. The length must be between 2 and 64 characters. Allowed characters include alphanumeric characters (upper and lower case), underscore (_), equal sign (=), comma (,), period (.), at symbol (@), and hyphen (-). Space is not allowed.
sts-session-duration
The duration of the temporary security credentials in seconds.
Possible values: 900 - 43200.
3600
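The sts-role-session character and length rules above can be checked with a simple pattern. This is a sketch derived from the documented rules; the regex is not taken from WEKA source:

```python
import re

# Allowed per the sts-role-session rules above: 2-64 characters from
# alphanumerics plus _ = , . @ - with no spaces.
SESSION_NAME_RE = re.compile(r'[A-Za-z0-9_=,.@-]{2,64}')

def valid_session_name(name: str) -> bool:
    return SESSION_NAME_RE.fullmatch(name) is not None
```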
bucket
A valid object store bucket name
auth-method
Authentication method.
Possible values: None, AWSSignature2 or AWSSignature4
region
Region name
access-key-id
Object store bucket access key ID
secret-key
Object store bucket secret key
protocol
Protocol type to be used.
Possible values: HTTP, HTTPS or HTTPS_UNVERIFIED
bandwidth
Bandwidth limitation per core (Mbps)
download-bandwidth
Bucket download bandwidth limitation per core (Mbps)
upload-bandwidth
Bucket upload bandwidth limitation per core (Mbps)
remove-bandwidth
A bandwidth (Mbps) to limit the throughput of delete requests sent to the object store. Setting a bandwidth equal to or lower than the object store deletion throughput prevents an increase in the object store deletions queue.
errors-timeout
If the object store link is down longer than this timeout period, all IOs that need data return an error.
Possible values: 1m-15m, or 60s-900s.
For example, 300s.
prefetch-mib
The data size in MiB to prefetch when reading a whole MiB on the object store
enable-upload-tags
Whether to enable object-tagging or not.
Possible values: true, false
max-concurrent-downloads
Maximum number of downloads performed concurrently on this object store in a single IO node.
Possible values: 1-64
max-concurrent-uploads
Maximum number of uploads performed concurrently on this object store in a single IO node.
Possible values: 1-64
max-concurrent-removals
Maximum number of removals performed concurrently on this object store in a single IO node.
Possible values: 1-64
max-extents-in-data-blob
Maximum number of extents' data to upload to an object store data blob.
max-data-blob-size
Maximum size to upload to an object store data blob.
Format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB.
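The capacity format above distinguishes decimal units (powers of 1000) from binary units (powers of 1024). A small illustrative converter, assuming only the suffixes listed above; this is a sketch, not WEKA's parser:

```python
import re

# Map each documented suffix to its byte multiplier: KB, MB, ... use
# powers of 1000; KiB, MiB, ... use powers of 1024.
_UNITS = {'B': 1}
for _i, _p in enumerate('KMGTPE', start=1):
    _UNITS[_p + 'B'] = 1000 ** _i    # decimal: KB, MB, GB, ...
    _UNITS[_p + 'iB'] = 1024 ** _i   # binary: KiB, MiB, GiB, ...

def capacity_to_bytes(s: str) -> int:
    m = re.fullmatch(r'(\d+)((?:[KMGTPE]i?)?B)', s)
    if not m:
        raise ValueError(f'bad capacity: {s!r}')
    return int(m.group(1)) * _UNITS[m.group(2)]
```

Note the difference: 1KB is 1000 bytes, while 1KiB is 1024 bytes.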
sts-operation-type
AWS STS operation type to use.
Possible values: assume_role or none
sts-role-arn
The Amazon Resource Name (ARN) of the role to assume. Mandatory when setting sts-operation to assume_role.
sts-role-session
A unique identifier for the assumed role session. The length must be between 2 and 64 characters. Allowed characters include alphanumeric characters (upper and lower case), underscore (_), equal sign (=), comma (,), period (.), at symbol (@), and hyphen (-). Space is not allowed.
sts-session-duration
The duration of the temporary security credentials in seconds.
Possible values: 900 - 43200.
sort
Specify the column(s) to consider when sorting the output. For ascending or descending order, add a + or - sign respectively before the column name.
filter
Specify the values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
raw-units
Print values in raw units, such as bytes and seconds. When not set, values are printed in a human-readable format, for example, 1KiB 234MiB 2GiB.
no-header
Do not show column headers in the output.
verbose
Show all columns in the output.
Explore the methods for mounting a filesystem on a client host using the WEKA filesystem driver, including the stateful client and stateless client methods.
There are two methods available for mounting a filesystem on a client host:
Stateful client: This method involves the following steps:
Install the WEKA client on the host.
Configure the client according to your requirements.
Join the client in a WEKA cluster.
Once the above steps are completed, you can mount the filesystem. For detailed instructions, see .
Stateless client: This method simplifies client management in the cluster by eliminating the need for the adding clients' process. For detailed instructions on how to use this feature to mount filesystems, see .
If you need to mount a single client to multiple clusters, see .
Before using the mount command, you must install the WEKA client, configure it, and join it to a WEKA cluster. This process involves adding clients, which can be done either for bare metal installation or as part of the WEKA deployment on one of the supported clouds.
Assuming the cluster has a filesystem named demo, you can add this filesystem to a server by SSHing into one of the servers and running the mount command as the root user:
The general syntax of the mount command for a WEKA filesystem is:
When mounting a filesystem on a cluster client, you have two options: read cache and write cache. See the respective sections to understand the differences between these modes.
Related topics
(on bare-metal servers)
(on AWS deployment)
(on GCP deployment)
The stateless client feature enhances cluster management by deferring the client’s joining process until the filesystem mount is performed. This simplification is especially advantageous in cloud deployments, where client turnover can be high.
This feature also consolidates all security aspects into the mount command, eliminating the need to search for separate credentials during cluster join and mount operations.
To use the stateless client feature, a WEKA agent must be installed. Once this is done, the mount command can be used to create and configure mounts. If needed, existing mounts can be removed from the cluster using the unmount command.
Assuming the WEKA cluster is using the backend IP of 1.2.3.4, running the following command as root on a client will install the agent:
On completion, the agent is installed on the client.
Command: mount -t wekafs
Use one of the following command lines to invoke the mount command. The delimiter between the server and filesystem can be either :/ or /:
Parameters
Each mount option can be passed by an individual -o flag to mount.
You can remount using the mount options marked as Remount Supported in the above table (mount -o remount).
When a mount option has been explicitly changed, you must set it again in the remount operation to ensure it retains its value. For example, if you mount with ro, a remount without it changes the mount option to the default rw. If you mount with rw, it is not required to re-specify the mount option because this is the default.
Example: On-Premise Installations
mount -t wekafs -o num_cores=1 -o net=ib0 backend-server-0/my_fs /mnt/weka
Running this command on a server installed with the WEKA agent downloads the appropriate WEKA version from backend-server-0 and creates a WEKA container that allocates a single core and a named network interface (ib0). The client then joins the cluster that backend-server-0 belongs to.
Example: AWS Installations
mount -t wekafs -o num_cores=2 backend1,backend2,backend3/my_fs /mnt/weka
Running this command on an AWS EC2 instance allocates two cores (multiple frontends) and attaches and configures two ENIs on the new client. The client attempts to join the cluster through all three backends specified in the command line.
For stateless clients, the first mount command installs the WEKA client software and joins the cluster. Any subsequent mount command can use either the same syntax or just the traditional per-mount parameters as defined in , since it is not necessary to join the cluster again.
It is now possible to access WEKA filesystems via the mount point, for example, using the cd /mnt/weka/ command.
After execution of an umount command that unmounts the last WEKA filesystem, the client is disconnected from the cluster and is uninstalled by the agent. Consequently, executing a new mount command requires specifying the cluster, cores, and networking parameters again.
Mount options marked as Remount Supported in the above table can be remounted (using mount -o remount). When a mount option is not set in the remount operation, it will retain its current value. To set a mount option back to its default value, use the default modifier (e.g., memory_mb=default).
The defaults of the mount options qos_max_throughput_mbps and qos_preferred_throughput_mbps have no limit.
The cluster admin can set these default values to meet the organization's requirements, reset them to the initial default values (no limit), or show the existing values.
The mount option defaults are only relevant for new mounts performed and do not influence the existing ones.
Commands:
weka cluster mount-defaults set
weka cluster mount-defaults reset
weka cluster mount-defaults show
To set the mount option default values, run the following command:
Parameters
When using a stateless client, it is possible to alter and control many different networking options, such as:
Virtual functions
IPs
Gateway (in case the client is on a different subnet)
Physical network devices (for performance and HA)
Use -o net=<netdev> mount option with the various modifiers as described below.
<netdev> is either the name, MAC address, or PCI address of the physical network device (can be a bond device) to allocate for the client.
When using wekafs mounts, both clients and backends should use the same type of networking technology (either IB or Ethernet).
For higher performance, the use of multiple frontends may be required. When using a NIC other than Mellanox or Intel E810, or when mounting a DPDK client on a VM, it is required to use to expose a VF of the physical device to the client. Once exposed, it can be configured via the mount command.
To assign the VF IP addresses, or when the client resides in a different subnet and routing is needed in the data network, use net=<netdev>/[ip]/[bits]/[gateway].
The ip, bits, and gateway values are optional. If they are not provided, the WEKA system performs one of the following, depending on the environment:
Cloud environment: The WEKA system deduces the values of the ip, bits, gateway options.
On-premises environment: The WEKA system allocates values to the ip, bits, gateway options based on the cluster default network. Failure to set the default network may result in the WEKA cluster failing to allocate an IP address for the client.
Ensure that the WEKA cluster default data networking is configured prior to running the mount command. For details, see .
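The net modifier splits on / with optional middle fields. The following sketch shows the parsing implied by the notation above; the function and field names are illustrative assumptions, not WEKA source:

```python
# Split a net=<netdev>[/<ip>][/<bits>][/<gateway>] value into its parts.
# Empty or omitted fields come back as None, in which case WEKA deduces
# or allocates them as described above.
def parse_net_option(value: str) -> dict:
    parts = value.split('/')

    def field(i):
        # Treat a missing or empty segment as "not provided".
        return parts[i] if len(parts) > i and parts[i] else None

    return {
        'netdev': parts[0],
        'ip': field(1),
        'bits': int(field(2)) if field(2) else None,
        'gateway': field(3),
    }
```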
The following command configures two VFs for the device and assigns each of them to one of the frontend processes. The first container receives the IP address 192.168.1.100, and the second uses 192.168.1.101. Both IPs have 24 network mask bits and a default gateway of 192.168.1.254.
For performance or high availability, it is possible to use more than one physical network device.
It's easy to saturate the bandwidth of a single network interface when using WekaFS. For higher throughput, it is possible to leverage multiple network interface cards (NICs). The -o net notation shown in the examples above can be used to pass the names of specific NICs to the WekaFS server driver.
For example, the following command will allocate two cores and two physical network devices for increased throughput:
Multiple NICs can also be configured to achieve redundancy (for details, see the section) and higher throughput for a complete, highly available solution. For that, use more than one physical device as previously described, and also, specify the client management IPs using -o mgmt_ip=<ip>+<ip2> command-line option.
For example, the following command will use two network devices for HA networking and allocate both devices to four Frontend processes on the client. The modifier ha is used here, which stands for using the device on all processes.
Advanced mounting options for multiple physical network devices
With multiple frontend processes (as expressed by -o num_cores), it is possible to control which processes use which NICs. This is accomplished through special command-line modifiers called slots. In WekaFS, a slot is synonymous with a process number. Typically, the first WekaFS frontend process occupies slot 1, the second occupies slot 2, and so on.
Examples of slot notation include s1, s2, s2+1, s1-2, slots1+3, slot1, and slots1-4, where - specifies a range of slots and + specifies a list. For example, s1-4 implies slots 1, 2, 3, and 4, while s1+4 specifies slots 1 and 4.
For example, in the following command, mlnx0 is bound to the second frontend process and mlnx1 to the first one for improved performance.
For example, in the following HA mounting command, two cores (two frontend processes) and two physical network devices (mlnx0, mlnx1) are allocated. By explicitly specifying the s2+1 and s1-2 modifiers for the network devices, both devices are used by both frontend processes. The notation s2+1 stands for the first and second processes, while s1-2 stands for the range of 1 to 2; they are effectively the same.
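The slot notation in these examples can be summarized as: + lists individual slots and - spans a range. A minimal sketch of an expander for the notation, assuming the prefixes shown above (s, slot, slots); this illustrates the documented semantics, not WEKA code:

```python
import re

# Expand a slot modifier into the set of process slots it names:
# "s2+1" -> {1, 2} (a list), "s1-4" -> {1, 2, 3, 4} (a range).
def expand_slots(spec: str) -> set:
    body = re.sub(r'^(?:slots|slot|s)', '', spec)
    if '-' in body:
        lo, hi = map(int, body.split('-'))
        return set(range(lo, hi + 1))
    return {int(n) for n in body.split('+')}
```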
If DPDK cannot be used, you can use the WEKA filesystem UDP networking mode through the kernel (for details about UDP mode, see the section). Use net=udp in the mount command to set the UDP networking mode, for example:
Using the fstab (filesystem table) enables automatic remount after a reboot. This applies to stateless clients running on an OS that supports systemd, such as RHEL/CentOS 7.2 and up, Ubuntu 16.04 and up, and Amazon Linux 2 LTS.
If the mount point you want to set in the fstab is already mounted, unmount it before setting the fstab file.
If your WEKA version is 4.2.14 or lower, start with steps 1 and 2. Otherwise, skip to step 3.
Create the WEKA agent service: Create a file named weka-agent.service in /etc/systemd/system with the following content:
Enable and start the WEKA agent service: Run the following commands to enable and start the service:
For all versions
Create a mount point: Run the following command to create a mount point:
Edit the /etc/fstab file: Add the entry for the WEKA filesystem.
fstab structure
Example
fstab configuration parameters
Mount the filesystem: Test the fstab setting by running:
Reboot the server: Reboot the server to apply the fstab settings. The filesystem is automatically mounted after the reboot.
Autofs allows filesystems to be mounted dynamically when accessed and unmounted after a period of inactivity. This approach reduces system overhead and ensures efficient resource utilization. Follow these steps to configure autofs for mounting Weka filesystems.
Install autofs on the server: Install the autofs package based on your operating system:
For Red Hat or CentOS:
For Debian or Ubuntu:
Configure autofs for WEKA filesystems: Set up the autofs configuration files according to the client type:
No
Yes
dentry_max_age_positive
The time in milliseconds after which the system refreshes the metadata cached entry. This refresh informs the WEKA client about metadata changes performed by other clients.
1000
Yes
dentry_max_age_negative
Each time a file or directory lookup fails, the local entry cache creates an entry specifying that the file or directory does not exist. This entry is refreshed after the specified time (number in milliseconds), allowing the WEKA client to use files or directories created by other clients.
0
Yes
ro
Mount filesystem as read-only.
No
Yes
rw
Mount filesystem as read-write.
Yes
Yes
inode_bits
The inode size in bits. This may be required for 32-bit applications.
Possible values: 32, 64, or auto
Auto
No
verbose
Write debug logs to the console.
No
Yes
quiet
Don't show any logs to console.
No
Yes
acl
Can be defined per mount.
Setting POSIX ACLs can change the effective group permissions (via the mask permissions). When ACLs are defined but the mount has no ACL, the effective group permissions are granted.
No
No
obs_direct
See .
No
Yes
noatime
Do not update inode access times.
No
Yes
strictatime
Always update inode access times.
No
Yes
relatime
Update inode access times only on modification or change, or if inode has been accessed and relatime_threshold has passed.
Yes
Yes
relatime_threshold
The time (number in seconds) to wait since an inode has been accessed (not modified) before updating the access time.
0 means never update the access time on access only.
This option is relevant only if the relatime is on.
0 (infinite)
Yes
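The interaction between relatime and relatime_threshold described above can be sketched as a decision function. This is an illustration of the documented rule, not WEKA client code:

```python
# Decide whether an inode's access time should be refreshed:
# modifications and changes always update it; a pure access updates it
# only after relatime_threshold seconds have passed since the last
# recorded access, and never when the threshold is 0.
def should_update_atime(now_s: int, last_atime_s: int,
                        modified: bool, relatime_threshold: int) -> bool:
    if modified:
        return True
    if relatime_threshold == 0:
        return False
    return (now_s - last_atime_s) >= relatime_threshold
```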
nosuid
Do not take suid/sgid bits into effect.
No
Yes
nodev
Do not interpret character or block special devices.
No
Yes
noexec
Do not allow direct execution of any binaries.
No
Yes
file_create_mask
File creation mask. A numeric (octal) notation of POSIX permissions.
Newly created file permissions are masked with the creation mask. For example, if a user creates a file with permissions=777 but the file_create_mask is 770, the file is created with 770 permissions.
First, the umask is taken into account, followed by the file_create_mask and then the force_file_mode.
0777
Yes
directory_create_mask
Directory creation mask. A numeric (octal) notation of POSIX permissions.
Newly created directory permissions are masked with the creation mask. For example, if a user creates a directory with permissions=777 but the directory_create_mask is 770, the directory will be created with 770 permissions.
First, the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode.
0777
Yes
force_file_mode
Force file mode. A numeric (octal) notation of POSIX permissions.
Newly created file permissions are logically OR'ed with the mode.
For example, if a user creates a file with permissions 770 but the force_file_mode is 775, the resulting file is created with mode 775.
First, the umask is taken into account, followed by the file_create_mask and then the force_file_mode.
0
Yes
force_directory_mode
Force directory mode. A numeric (octal) notation of POSIX permissions.
Newly created directory permissions are logically OR'ed with the mode. For example, if a user creates a directory with permissions 770 but the force_directory_mode is 775, the resulting directory will be created with mode 775.
First, the umask is taken into account, followed by the directory_create_mask and then the force_directory_mode.
0
Yes
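The masking order described for both files and directories (the umask applies first, then the creation mask, then the force mode) can be sketched as follows, using the file options as the example. This illustrates the documented semantics, not WEKA client code:

```python
# Derive a new file's effective mode: clear umask bits first, then AND
# with file_create_mask, then OR with force_file_mode.
def effective_mode(requested: int, umask: int,
                   create_mask: int, force_mode: int) -> int:
    mode = requested & ~umask
    mode &= create_mask
    return mode | force_mode
```

This reproduces the examples above: a file requested with 777 under a create mask of 770 lands at 770, and a file requested with 770 under a force mode of 775 lands at 775.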
sync_on_close
This option ensures that all data for a file is written to the server when the file is closed. This means that changes made to the file by the client are immediately written to the server's disk upon close, which can provide greater data consistency and reliability.
It simulates the open-to-close semantics of NFS when working with writecache mount mode and directory quotas.
Enabling this option is essential when applications expect returned write errors at syscall close if the quota is exceeded.
No
Yes
nosync_on_close
This option disables the sync_on_close behavior of file writes. When nosync_on_close is enabled, the client does not wait for the server to confirm that all file data has been written to disk before closing the file.
This means that any changes made to the file by the client may not be immediately written to the server's disk when the file is closed. Instead, the changes are buffered in memory and written to disk asynchronously later.
No
Yes
No
net=<netdev>[/<ip>/<bits>[/<gateway>]]
This option must be specified for on-premises installation and must not be specified for AWS installations.
For more details, see .
No
remove_after_secs=<secs>
The time in seconds without connectivity, after which the client is removed from the cluster.
Minimum value: 60 seconds.
3600 seconds = 1 hour.
3600
Yes
traces_capacity_mb=<size-in-mb>
Traces capacity limit in MB.
Minimum value: 512 MB.
No
reserve_1g_hugepages=<true or false>
Controls the page allocation algorithm to reserve hugepages.
Possible values:
true: reserves 1 GB
false: reserves 2 MB
true
Yes
readahead_kb=<readahead>
The readahead size in KB per mount. A higher readahead is better for sequential reads of large files.
32768
Yes
auth_token_path
The path to the mount authentication token (per mount).
~/.weka/auth-token.json
No
dedicated_mode
Determine whether DPDK networking dedicates a core (full) or not (none). none can only be set when the NIC driver supports it. See .
This option is relevant when using DPDK networking (net=udp is not set).
Possible values: full or none
full
No
qos_preferred_throughput_mbps
Preferred requests rate for QoS in megabytes per second.
0 (unlimited)
Yes
qos_max_throughput_mbps
Maximum requests rate for QoS in megabytes per second. This option allows bursting above the specified limit but aims to keep this limit on average. The cluster admin can set the default value. See .
0 (unlimited)
Yes
qos_max_ops
Maximum number of IO operations a client can perform per second. Set a limit to a client or clients to prevent starvation from the rest of the clients. (Do not set this option for mounting from a backend.)
0 (unlimited)
Yes
connect_timeout_secs
The timeout, in seconds, for establishing a connection to a single server.
10
Yes
response_timeout_secs
The timeout, in seconds, waiting for the response from a single server.
60
Yes
join_timeout_secs
The timeout, in seconds, for the client container to join the Weka cluster.
360
Yes
dpdk_base_memory_mb
The base memory in MB to allocate for DPDK. Set this option when mounting to a WEKA cluster on GCP.
Example: -o dpdk_base_memory_mb=16
0
Yes
weka_version
The WEKA client version to run.
The cluster version
No
mount -t wekafs -o num_cores=0 -o net=udp backend-server-0/my_fs /mnt/weka
Running this command uses UDP mode (usually selected when the use of DPDK is not available).
UDP mode
Stateless client: Run the following commands, replacing <backend-1>, <backend-2>, and <netdevice> with appropriate values:
Persistent client: Run the following commands:
Restart the autofs service: Apply the changes by restarting the autofs service:
Ensure autofs starts automatically on reboot: Verify that autofs is configured to start on reboot:
If the output is enabled, no further action is required.
For Amazon Linux: Use chkconfig to confirm autofs is enabled for the current runlevel:
Ensure the output indicates on for the active runlevel.
Example output:
Access the WEKA filesystem: Navigate to the mount point to access the WEKA filesystem. Replace <fs-name> with the desired filesystem name:
options
See Additional Mount Options below.
backend
IP/hostname of a backend container. Mandatory.
fs
Filesystem name. Mandatory.
mount-point
Path to mount on the local server. Mandatory.
readcache
Set the mount mode to read from the cache. This action automatically turns off the writecache.
Note: The SMB share mount mode is always readcache. Set this option to Yes.
No
Yes
writecache
Set the mount mode to write to the cache.
Yes
Yes
forcedirect
memory_mb=<memory_mb>
The memory size in MiB the client can use for hugepages.
1400
Yes
num_cores=<frontend-cores>
The number of frontend cores to allocate for the client.
You can specify <num_cores> or <core> but not both.
If none are specified, the client is configured with 1 core.
If you specify 0 then you must use net=udp.
1
No
core=<core-id>
qos_max_throughput
Sets the default value for the qos_max_throughput_mbps option, which is the max requests rate for QoS in megabytes per second.
qos_preferred_throughput
Sets the default value for the qos_preferred_throughput_mbps option, which is the preferred requests rate for QoS in megabytes per second.
Backend servers/my_fs
Comma-separated list of backend servers with the filesystem name.
Mount point
If mounting multiple clusters, specify a unique name.
For two client containers, set container_name=client1 and container_name=client2.
Filesystem type
Must be wekafs.
Systemd mount options
x-systemd.after=weka-agent.service
x-systemd.mount-timeout=infinity
_netdev
Adjust the mount-timeout to your preference, for example, 180 seconds.
Mount options
Set the mount mode to directly read from and write to storage, avoiding the cache. This action automatically turns off both the writecache and readcache.
Note: Enabling this option could impact performance. Use it carefully. If you’re unsure, contact the . Do not use this option for SMB shares.
Specify explicit cores to be used by the WekaFS client. Multiple cores can be specified. Core 0 is not allowed.
echo "/mnt/weka /etc/auto.wekafs -fstype=wekafs,num_cores=1,net=<netdevice>" > /etc/auto.master.d/wekafs.autofs
echo "* <backend-1>,<backend-2>/&" > /etc/auto.wekafs
echo "/mnt/weka /etc/auto.wekafs -fstype=wekafs" > /etc/auto.master.d/wekafs.autofs
echo "* &" > /etc/auto.wekafs
service autofs restart
systemctl is-enabled autofs
chkconfig | grep autofs
autofs 0:off 1:off 2:off 3:on 4:on 5:on 6:off
cd /mnt/weka/<fs-name>
mkdir -p /mnt/weka/demo
mount -t wekafs demo /mnt/weka/demo
mount -t wekafs [-o option[,option]...] <fs-name> <mount-point>
curl http://1.2.3.4:14000/dist/v1/install | sh
mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]/<fs> <mount-point>
mount -t wekafs -o <options> <backend0>[,<backend1>,...,<backendN>]:/<fs> <mount-point>
weka cluster mount-defaults set [--qos-max-throughput qos-max-throughput] [--qos-preferred-throughput qos-preferred-throughput]
mount -t wekafs -o num_cores=2 -o net=intel0/192.168.1.100+192.168.1.101/24/192.168.1.254 backend1/my_fs /mnt/weka
mount -t wekafs -o num_cores=2 -o net=mlnx0 -o net=mlnx1 backend1/my_fs /mnt/weka
mount -t wekafs -o num_cores=4 -o net:ha=mlnx0,net:ha=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka
mount -t wekafs -o num_cores=2 -o net:s2=mlnx0,net:s1=mlnx1 backend1/my_fs /mnt/weka
mount -t wekafs -o num_cores=2 -o net:s2+1=mlnx0,net:s1-2=mlnx1 backend1/my_fs -o mgmt_ip=10.0.0.1+10.0.0.2 /mnt/weka
mount -t wekafs -o net=udp backend-server-0/my_fs /mnt/weka
[Unit]
Description=WEKA Agent Service
Wants=network.target network-online.target
After=network.target network-online.target rpcbind.service
Documentation=http://docs.weka.io
Before=remote-fs-pre.target remote-fs.target
SourcePath=/etc/init.d/weka-agent
[Service]
Type=forking
Restart=always
WorkingDirectory=/
EnvironmentFile=/etc/environment
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=yes
SuccessExitStatus=5 6
ExecStart=/etc/init.d/weka-agent start
ExecStop=/etc/init.d/weka-agent stop
ExecReload=/etc/init.d/weka-agent reload
CPUAffinity=
Delegate=yes
[Install]
RequiredBy=remote-fs-pre.target remote-fs.target
systemctl daemon-reload
systemctl enable --now weka-agent.service
mkdir -p /mnt/weka/my_fs
<backend servers/my_fs> <mount point> <filesystem type> <mount options> <systemd mount options> 0 0
backend-0,backend-1,backend-3/my_fs /mnt/weka/my_fs wekafs num_cores=1,net=eth1,x-systemd.after=weka-agent.service,x-systemd.mount-timeout=infinity,_netdev 0 0
mount /mnt/weka/my_fs
yum install -y autofs
apt-get install -y autofs
This page describes the prerequisites and compatibility for the installation of the WEKA system.
Important: The versions mentioned on the prerequisites and compatibility page are applicable to the WEKA system's latest minor version (4.2.X). For information on new features and supported prerequisites released with each minor version, refer to the relevant release notes available at get.weka.io.
Check the release notes for details about any updates or changes accompanying the latest releases.
2013 Intel® Core™ processor family (formerly Haswell) and later (dual-socket)
AMD EPYC™ processor families 2nd (Rome), 3rd (Milan-X), and 4th (Genoa) Generations (Backends: single-socket; Clients: single-socket and dual-socket)
Intel processor families SandyBridge (2011) and IvyBridge (2012) have been deprecated, and support for these processors will be discontinued in version 4.3.
Sufficient memory to support the WEKA system needs as described in .
Additional memory to support the OS kernel and any other applications.
RHEL:
9.4, 9.3, 9.2, 9.1, 9.0
8.10, 8.9, 8.8, 8.7, 8.6, 8.5, 8.4, 8.3, 8.2, 8.1, 8.0
WEKA installation directory: /opt/weka
/opt/weka must be a direct path. Do not use a symbolic link (symlink).
Boot drive minimum requirements:
Adhere to the following considerations when choosing the adapters:
LACP: LACP is supported when bonding ports from dual-port Mellanox NICs into a single Mellanox device but is not compatible when using Virtual Functions (VFs).
Intel E810:
The following table provides the supported network adapters along with their supported features for backends and clients, and clients-only.
For more information about the supported features, see .
The following network adapters support Ethernet and SR-IOV VF for clients only:
Intel X540
Intel X550-T1
Intel X710
Intel X710-DA2
Avoid using the Intel X550-T1 adapter in a single client connected to multiple clusters.
Supported Mellanox OFED versions for the Ethernet NICs:
23.10-0.5.5.0
23.04-1.1.3.0
WEKA supports the following Mellanox OFED versions for the InfiniBand adapters:
23.10-0.5.5.0
23.04-1.1.3.0
5.9-0.5.6.0
When configuring firewall ingress and egress rules, the following access must be allowed.
The SSDs must support PLP (Power Loss Protection).
WEKA system storage must be dedicated, and partitioning is not supported.
The supported drive capacity is up to 30 TB.
IOMMU mode is not supported for SSD drives. If you need to configure IOMMU on WEKA cluster servers (for instance, due to specific applications when running the WEKA cluster in converged mode), contact the Customer Success Team.
API must be S3 compatible:
GET
Including byte-range support with expected performance gain when fetching partial objects
Amazon S3
S3 Standard
S3 Intelligent-Tiering
These storage classes are ideal for remote buckets where data is written once and accessed in critical situations, such as during disaster recovery:
Virtual Machines (VMs) can be used as clients only. Ensure the following prerequisites are met for the relevant client type:
To avoid irregularities, crashes, and inability to handle application load, make sure there is no CPU starvation to the WEKA process by reserving the CPU in the virtual platform and dedicating a core to the WEKA client.
The root filesystem must handle a 3K IOPS load by the WEKA client.
To avoid irregularities, crashes, and inability to handle application load, make sure there is no CPU starvation to the WEKA process by reserving the CPU in the virtual platform and dedicating a core to the WEKA client.
For additional information and how-to articles, search the WEKA Knowledge Base in the WEKA support portal or contact the Customer Success Team.
HashiCorp Vault (version 1.1.5 up to 1.14.x)
KMIP-compliant KMS (protocol version 1.2 and up)
The KMS must support encryption-as-a-service (KMIP encrypt/decrypt APIs)
Rocky Linux:
9.4, 9.3, 9.2, 9.1, 9.0
8.10, 8.9, 8.8, 8.7, 8.6
CentOS:
8.5, 8.4, 8.3, 8.2, 8.1, 8.0
7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2
Ubuntu:
24.04
22.04
20.04
18.04
Amazon Linux 2023 (AL2023) with x86 distribution
Amazon Linux 2 LTS (formerly Amazon Linux 2 LTS 17.12) with x86_64 distribution
Amazon Linux:
AMI 2018.03
AMI 2017.09
RHEL:
9.4, 9.3, 9.2, 9.1, 9.0
8.10, 8.9, 8.8, 8.7, 8.6, 8.5, 8.4, 8.3, 8.2, 8.1, 8.0
7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2
Rocky Linux:
9.4, 9.3, 9.2, 9.1, 9.0
8.10, 8.9, 8.8, 8.7, 8.6
CentOS:
8.5, 8.4, 8.3, 8.2, 8.1, 8.0
7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2
Ubuntu:
24.04
22.04
20.04
Amazon Linux 2023 (AL2023) with x86 distribution
Amazon Linux 2 LTS (formerly Amazon Linux 2 LTS 17.12) with x86 distribution
Amazon Linux:
AMI 2018.03
AMI 2017.09
SLES:
15 SP6
15 SP5
15 SP4
Oracle Linux:
9
Debian:
12
AlmaLinux OS:
9.4
8.10
The following kernel versions are supported:
6.8
6.0 to 6.5
5.3 to 5.19
4.4.0-1106 to 4.19
3.10
All WEKA servers must be synchronized in date/time (NTP recommended)
A watchdog driver should be installed at /dev/watchdog (a hardware watchdog is recommended); search the WEKA Knowledge Base in the WEKA support portal for more information and how-to articles.
If using mlocate or similar, it is advisable to exclude wekafs from the updatedb filesystem lists; search the WEKA Knowledge Base in the WEKA support portal for more information and how-to articles.
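With mlocate, for example, the exclusion is typically added to the PRUNEFS list in /etc/updatedb.conf. The sketch below operates on a scratch copy; the wekafs entry is the point, and the exact file path and default PRUNEFS contents may differ by distribution:

```shell
# Append wekafs to PRUNEFS so updatedb skips WEKA filesystems.
# Works on a scratch copy; apply the same edit to /etc/updatedb.conf.
conf=/tmp/updatedb.conf.example
echo 'PRUNEFS="NFS nfs nfs4 rpc_pipefs"' > "$conf"
sed -i 's/^PRUNEFS="/PRUNEFS="wekafs /' "$conf"
grep PRUNEFS "$conf"
```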
SELinux is supported in both permissive and enforcing modes.
The targeted policy is supported.
The mls policy is not supported.
WEKA backends and clients that serve protocols must be deployed on a supported OS with cgroups V1 (legacy).
Capacity: NVMe SSD with 960 GB capacity
Durability: 1 DWPD (Drive Writes Per Day)
Write throughput: 1 GB/s
Boot drive considerations:
Do not share the boot drive.
Do not mount using NFS.
Do not use a RAM drive remotely.
If two boot drives are available:
It is recommended to dedicate one boot drive for the OS and the other for the /opt/weka directory.
Do not use software RAID to have two boot drives.
Software required space:
Ensure that at least 26 GB is available for the WEKA system installation.
Allocate an additional 10 GB per core used by WEKA.
Filesystem requirement:
Set a separate filesystem on a separate partition for /opt/weka.
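The sizing rule above can be computed directly (a sketch; the 26 GB base and 10 GB per core come from the list above, while the core count is an example):

```shell
# Required /opt/weka capacity = 26 GB base + 10 GB per core used by WEKA.
weka_cores=8                              # example: 8 cores dedicated to WEKA
required_gb=$((26 + 10 * weka_cores))
echo "minimum /opt/weka size: ${required_gb} GB"
```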
The ice Linux Base Driver version 1.9.11 and firmware version 4.0.0 are required.
MTU: It is recommended to set the MTU to at least 4k on the NICs of WEKA cluster servers and the connected switches.
Jumbo Frames: If any network connection (InfiniBand or Ethernet) on a given backend can transmit frames larger than 4 KB, then all network connections used directly by WEKA on that backend must be able to transmit frames of at least 4 KB.
IOMMU support: WEKA automatically detects and enables IOMMU for the server and PCI devices. Manual enablement is not required.
Single IP: Single IP, also known as shared networking, allows one IP address to be assigned to the Physical Function (PF) and shared across multiple Virtual Functions (VFs). A single IP can then be shared by every WEKA process on that server while remaining available to the host operating system.
Mixed networks: A mixed network configuration refers to a setup where a WEKA cluster connects to both InfiniBand and Ethernet networks. RDMA is not supported in mixed networks.
IP Addressing for dataplane NICs: Exclusively use static IP addressing. DHCP is not supported for dataplane NICs.
NAT-free networking: WEKA requires visibility and connectivity to all peers, without interference from networking technologies such as Network Address Translation (NAT).
NVIDIA Mellanox CX-7 single port
InfiniBand
Single IP
rx interrupts
RDMA
HA
LACP
Mixed networks
SR-IOV VF
Routed network
NVIDIA Mellanox CX-7 dual port
InfiniBand
Single IP
rx interrupts
RDMA
HA
LACP
Mixed networks
SR-IOV VF
Routed network
NVIDIA Mellanox CX-7-ETH single port
Ethernet
Single IP
HA
Routed network (ETH only)
IOMMU
LACP
Mixed networks
SR-IOV VF
RX interrupts
NVIDIA Mellanox CX-7-ETH dual port
Ethernet
LACP
Single IP
HA
Routed network (ETH only)
Mixed networks
SR-IOV VF
RX interrupts
PKEY
NVIDIA Mellanox CX-6 LX
Ethernet
Single IP
rx interrupts
HA
Routed network (ETH only)
LACP
Mixed networks
SR-IOV VF
PKEY
NVIDIA Mellanox CX-6 DX
Ethernet
LACP
Single IP
rx interrupts
RDMA
Mixed networks
SR-IOV VF
PKEY
NVIDIA Mellanox CX-6
Ethernet InfiniBand
Mixed networks
Single IP
rx interrupts
RDMA (IB only)
Routed network
LACP
SR-IOV VF
PKEY
NVIDIA Mellanox CX-5 EX
Ethernet InfiniBand
Mixed networks
RDMA (IB only)
HA
PKEY (IB only)
Single IP
Routed network
LACP
SR-IOV VF
NVIDIA Mellanox CX-5 BF
Ethernet
Mixed networks
HA
IOMMU
Single IP
Routed network
RDMA
LACP
NVIDIA Mellanox CX-5
Ethernet InfiniBand
Mixed networks
rx interrupts
RDMA (IB only)
HA
Single IP
RDMA (ETH)
LACP
SR-IOV VF
NVIDIA Mellanox CX-4 LX
Ethernet InfiniBand
Mixed networks
rx interrupts
HA
Routed network (ETH only)
Single IP
RDMA
LACP
SR-IOV VF
NVIDIA Mellanox CX-4
Ethernet InfiniBand
Mixed networks
rx interrupts
HA
Routed network (ETH only)
Single IP
RDMA
LACP
SR-IOV VF
Intel XL710-Q2
Intel XXV710
Intel 82599ES
Intel 82599
5.8-3.0.7.0
5.8-1.1.2.1 LTS
5.7-1.0.2.0
5.6-2.0.9.0
5.6-1.0.3.3
5.4-3.5.8.0 LTS
5.4-3.4.0.0 LTS
5.1-2.6.2.0
5.1-2.5.8.0
Note: Subsequent OFED minor versions are expected to be compatible with Nvidia hardware due to Nvidia's commitment to backwards compatibility.
Supported ENA drivers:
1.0.2 - 2.0.2
A current driver from an official OS repository is recommended
Supported ixgbevf drivers:
3.2.2 - 4.1.2
A current driver from an official OS repository is recommended
Supported Intel 40 drivers:
3.0.1-k - 4.1.0
A current driver from an official OS repository is recommended
Supported ice drivers:
1.9.11
Supported Broadcom drivers:
228
Ethernet speeds:
400 GbE / 200 GbE / 100 GbE / 50 GbE / 40 GbE / 25 GbE / 10 GbE
NICs bonding:
Supports bonding dual ports on the same NVIDIA Mellanox NIC using mode 4 (LACP) to enhance redundancy and performance.
IEEE 802.1Q VLAN encapsulation:
Tagged VLANs are not supported.
VXLAN:
Virtual Extensible LANs are not supported.
DPDK backends and clients using NICs supporting shared networking (single IP):
Require one IP address per client for both management and data plane.
Enabling SR-IOV is not required.
DPDK backends and clients using NICs supporting non-shared networking:
IP address for management: One per NIC (configured before WEKA installation).
IP address for data plane: One per WEKA core in each server (applied during cluster initialization).
UDP clients:
Use a single IP address for all purposes.
5.8-3.0.7.0
5.8-1.1.2.1 LTS
5.7-1.0.2.0
5.6-2.0.9.0
5.6-1.0.3.3
5.4-3.5.8.0 LTS
5.4-3.4.0.0 LTS
5.1-2.6.2.0
5.1-2.5.8.0
Note: Subsequent OFED minor versions are expected to be compatible with Nvidia hardware due to Nvidia's commitment to backwards compatibility.
WEKA supports the following InfiniBand configurations:
InfiniBand speeds: Determined by the InfiniBand adapter supported speeds (FDR / EDR / HDR / NDR).
Subnet manager: MTU configured to 4092.
One WEKA system IP address for management and data plane.
PKEYs: One partition key is supported by WEKA.
Redundant InfiniBand ports can be used for both HA and higher bandwidth.
All WEKA backend IPs
14000-14100 (drives) 14300-14400 (compute)
TCP and UDP TCP and UDP
These ports are the default. You can customize the ports.
WEKA backend to client traffic
All WEKA backend IPs
Client host IPs
14000-14100 (frontend)
TCP and UDP
These ports are the default. You can customize the ports.
WEKA SSH management traffic
All WEKA backend IPs
All WEKA backend IPs
22
TCP
WEKA server traffic for cloud deployments
All WEKA backend IPs
All WEKA backend IPs
14000-14100 (drives)
15000-15100 (compute)
16000-16100 (frontend)
TCP and UDP TCP and UDP TCP and UDP
These ports are the default. You can customize the ports.
WEKA client traffic (on cloud)
Client host IPs
All WEKA backend IPs
14000-14100 (drives)
15000-15100 (compute)
TCP and UDP TCP and UDP
These ports are the default. You can customize the ports.
WEKA backend to client traffic (on cloud)
All WEKA backend IPs
Client host IPs
14000-14100 (frontend)
TCP and UDP
These ports are the default. You can customize the ports.
WEKA GUI access
User web browser IP
All WEKA management IPs
14000
TCP
NFS
NFS client IPs
WEKA NFS backend IPs
2049 <mountd port>
TCP and UDP TCP and UDP
You can set the mountd port using the command: weka nfs global-config set --mountd-port <port>
SMB/SMB-W
SMB client IPs
WEKA SMB backend IPs
139 445
TCP TCP
SMB-W
WEKA SMB backend IPs
2224
TCP
This port is required for internal clustering processes.
SMB/SMB-W
WEKA SMB backend IPs
All Domain Controllers for the selected Active Directory Domain
88
389 464 636 3268 3269
TCP and UDP TCP and UDP TCP and UDP TCP and UDP TCP and UDP TCP and UDP
These ports are required for SMB/SMB-W to use Active Directory as the identity source. Furthermore, every Domain Controller within the selected AD domain must be accessible from the WEKA SMB servers.
SMB/SMB-W
WEKA SMB backend IPs
DNS servers
53
TCP and UDP
S3
S3 client IPs
WEKA S3 backend IPs
9000
TCP
This port is the default. You can customize the port.
wekatester
All WEKA backend IPs
All WEKA backend IPs
8501 9090
TCP TCP
Port 8501 is used by wekanetperf.
WEKA Management Station
User web browser IP
WEKA Management Station IP
80 <LWH>
443 <LWH>
3000 <mon>
7860 <admin UI>
8760 <deploy>
8090 <snap>
8501 <mgmt> 9090 <mgmt>
9091 <mon> 9093 <alerts>
HTTP
HTTPS
TCP
TCP
TCP
TCP TCP
TCP TCP
Cloud WEKA Home, Local WEKA Home
All WEKA backend IPs
Cloud WEKA Home or Local WEKA Home
80 443
HTTP HTTPS
Open according to the directions in the deployment scenario: - WEKA server IPs to CWH or LWH. - LWH to CWH (if forwarding data from LWH to CWH)
Troubleshooting by the Customer Success Team (CST)
All WEKA backend IPs
CST remote access
4000 4001
TCP TCP
Supports any byte size of up to 65 MiB
DELETE
Data Consistency: Amazon S3 consistency model:
GET after a single PUT is strongly consistent
Multiple PUTs are eventually consistent
S3 Standard-IA
S3 One Zone-IA
S3 Glacier Instant Retrieval
Remember, retrieval times, minimum storage periods, and potential charges due to object compaction may apply. If unsure, use S3 Intelligent-Tiering.
Azure Blob Storage
Google Cloud Storage (GCS)
Cloudian HyperStore (version 7.3)
Dell EMC ECS (v3.5 and higher)
Dell PowerScale S3 (version 9.8.0.0)
HCP Classic V9.2 and up (with versioned buckets only)
HCP for Cloud-Scale V2.x
IBM Cloud Object Storage System (version 3.14.7)
Lenovo MagnaScale (version 3.0)
Quantum ActiveScale (version 5.5.1)
Red Hat Ceph Storage (version 5.0)
Scality Ring (version 7.4.4.8)
Scality Artesca (version 1.5.2)
SwiftStack (version 6.30)
Spectra Logic BlackPearl with Vail for remote buckets (version 5.7.1)
WEKA S3
The root filesystem must handle a 3K IOPS load by the WEKA client.
The virtual platform components, such as the hypervisor, NICs, and CPUs, and their respective versions, must support DPDK and the virtual network driver.
Using vmxnet3 is only supported with core dedication.
Amazon ENA
Ethernet
SR-IOV VF
Single IP
HA
Routed network
LACP
Mixed networks
RX interrupts
RDMA
PKEY
IOMMU
Broadcom BCM957508-P2100G dual port
Ethernet
Single IP
SR-IOV VF
HA
Routed network
LACP
Mixed networks
RX interrupts
RDMA
PKEY
IOMMU
Intel E810 2CQDA2
Ethernet
Single IP
HA
Routed network
WEKA server traffic for bare-metal deployments
All WEKA backend IPs
All WEKA backend IPs
14000-14100 (drives) 14200-14300 (frontend) 14300-14400 (compute)
TCP and UDP TCP and UDP TCP and UDP
These ports are the default for the Resources Generator for the first three containers. You can customize the ports.
WEKA client traffic
LACP
Mixed networks
RX interrupts
RDMA
Client host IPs
18.04
15 SP2
12 SP5
For clarity, the range of supported versions is inclusive.
Ensure the device supports a maximum number of VFs greater than the number of physical cores on the server.
Set the number of VFs to match the cores you intend to dedicate to WEKA.
Note that some BIOS configurations may be necessary.
SR-IOV: Enabled in BIOS.
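One way to check the first condition above is the sysfs attribute that SR-IOV-capable PCI devices expose. In the sketch below a mocked directory stands in for /sys/class/net/<interface>/device on a real host, so the example is self-contained:

```shell
# Compare a NIC's maximum supported VFs against the server's physical core count.
# On a real host, pass /sys/class/net/<interface>/device as the directory.
check_vf_capacity() {
  dev_dir=$1; cores=$2
  total_vfs=$(cat "$dev_dir/sriov_totalvfs")
  if [ "$total_vfs" -gt "$cores" ]; then
    echo "OK: $total_vfs VFs > $cores cores"
  else
    echo "INSUFFICIENT: $total_vfs VFs <= $cores cores"
  fi
}

# Mocked device directory in place of the real sysfs path:
mock=/tmp/mock_nic_device; mkdir -p "$mock"; echo 127 > "$mock/sriov_totalvfs"
check_vf_capacity "$mock" 64
```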
PKEY
SR-IOV VF
IOMMU
IOMMU
IOMMU
PKEY
IOMMU
IOMMU
Routed network (ETH only)
IOMMU
HA
IOMMU
IOMMU
RX interrupts
PKEY (ETH)
SR-IOV VF
RX interrupts
PKEY
PKEY (IB only)
Routed network (ETH only)
IOMMU
Routed network (IB)
PKEY (ETH)
IOMMU
PKEY
IOMMU
PKEY
To guide customers, partners, and WEKA teams (sales, customer success, and others) through the step-by-step process of deploying the WEKA Data Platform in AWS using Terraform.
Deploying WEKA in AWS requires knowledge of several technologies, specifically AWS, Terraform (infrastructure-as-code provisioning manager), basic Linux operations, and WEKA software. Understanding that not everyone tasked with deploying WEKA in AWS will have experience in each required domain, this document seeks to provide an end-to-end instruction set that allows its reader to successfully deploy a working WEKA cluster in AWS with minimal knowledge or prerequisites.
This document focuses on deploying WEKA in an AWS environment using Terraform for a POC or Production environment. While no pre-created AWS elements are needed beyond an appropriate user account, this guide will showcase using some pre-created elements in the deployment.
The reader will be guided through general AWS requirements, the AWS networking requirements needed to support WEKA, using Terraform to deploy WEKA, and verifying a successful WEKA deployment.
HashiCorp Terraform is a tool that allows you to define, provision, and manage infrastructure as code. Instead of manually setting up servers, databases, networks, and other infrastructure components, you describe what you want in a configuration file using a declarative configuration language, HashiCorp Configuration Language (HCL), or optionally JSON. Once the desired infrastructure configuration is described in this file, Terraform can automatically create, modify, or delete resources to match the file specifications. This ensures that the infrastructure is consistently and predictably deployed.
This document describes the WEKA Data Platform automated deployment in AWS using Terraform. Our choice of Terraform was influenced by its widespread consumer adoption and ubiquity in the Infrastructure as Code (IaC) space. It is commonly embraced by organizations globally, large and small, to deploy stateful infrastructure on-premises and in public clouds such as AWS, Azure, or Google Cloud Platform.
Please note that , allowing customers to select their preferred deployment method.
To install Terraform, we recommend following the installation instructions published by HashiCorp.
Proceed with the following steps to locate the appropriate AWS Account.
Navigate to the AWS Management Console. In the top right corner, search for “Account ID.”
To carry out the necessary operations for a successful WEKA deployment in AWS using Terraform, you must ensure that an IAM user has the appropriate permissions listed in Appendix B (in the Appendices section). The IAM user must be permitted to create, modify, and delete AWS resources as dictated by the Terraform configuration files used for WEKA deployment.
If the current IAM user does not have these permissions, it is advisable to either update the permissions or create a new IAM user with the required privileges.
Follow the steps below to verify IAM user privileges.
Navigate to the AWS Management Console.
Log in using the account that will be used for the entirety of the WEKA deployment.
In the AWS Management Console, go to the Services menu and select “IAM” to access the Identity and Access Management dashboard.
Within the IAM dashboard, search for the IAM user in question or navigate to the “Users” section.
Click on the user's name to view their permissions. You will need to verify that the user has policies attached that grant the necessary permissions for managing AWS resources via Terraform.
For successful WEKA deployment in AWS using Terraform, it is essential to ensure your AWS account has the appropriate quotas for the needed AWS resources. Specifically, when setting up EC2 instances like the i3en for the WEKA backend cluster, AWS requires you to manage quotas based on the vCPU count for each EC2 instance type or family.
Before deploying WEKA, confirm that your EC2 vCPU sizing requirements can be met within the limits of the existing quota. If not, increase the quotas in your AWS account before executing the Terraform commands outlined later in this document. The required minimum quota is the cumulative vCPU count of all instances to be deployed (for example, 10 i3en.6xlarge instances at 24 vCPUs each require 240 vCPUs for the cluster alone). This prevents failures during the execution of terraform commands, which are discussed in subsequent sections.
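The quota arithmetic above, spelled out (the instance count and type are the example's):

```shell
# vCPU quota needed = number of instances x vCPUs per instance.
instances=10
vcpus_per_instance=24   # i3en.6xlarge provides 24 vCPUs
required_vcpus=$((instances * vcpus_per_instance))
echo "request at least ${required_vcpus} vCPUs for the cluster alone"
```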
Navigate to the AWS Management Console and use the search bar to find the AWS service called "Service Quotas."
Once on the Service Quotas page, choose "Amazon EC2."
WEKA currently supports only i3en instance types for backend cluster nodes. Quotas are tracked separately for Spot, On-Demand, and Dedicated instances, so be sure you are adjusting the correct one.
Select the “Standard (A,C,D,H,I,M,R,T,Z)” instance type, then click on "Request quota increase."
Fill out the form in the "Request quota increase" section by specifying the number of vCPUs you require. For example, if you need 150 vCPUs for the i3en instance family, enter this number and submit your request.
Quota increase requests are often processed immediately. If the request involves a substantial number of vCPUs or a specialized instance type, manual review by AWS support may be required.
Ensure that you have requested and confirmed the necessary quotas for all instance types that will be used for the WEKA backend servers deployment and any associated application servers running the WEKA client software. As indicated in the , WEKA supports the i3en series instances for WEKA backend servers. Check the documentation for details on the available sizes and corresponding vCPU requirements for these instances.
The WEKA deployment uses various aspects of AWS, including, but not limited to, VPCs, subnets, security groups, endpoints, and more. These items can either be created by the Terraform process or be pre-existing if you create the elements manually. The minimum is a VPC (Virtual Private Cloud), two subnets, each in a different AZ (Availability Zone), and a security group.
If you do not have the Terraform deployment auto-create networking components, the recommended VPC has two subnets (either private or public) in separate AZs, with the subnet for WEKA having access to the internet via an IGW with an EIP, NAT, a proxy, or an egress VPC. While the WEKA deployment is not multi-AZ, a minimum of two subnets in different AZs is still required for the ALB.
AWS Network Access Lists (ACLs) function as basic firewalls, governing inbound and outbound network traffic based on numbered rules. They are applied at the subnet level and thus affect the network interfaces (NICs) and EC2 instances within those subnets.
Every ACL starts with default rules that ensure basic connectivity. For example, there's a default rule that allows outbound communication from all AWS resources and another default rule that denies all inbound traffic from the internet. These rules have high priority numbers, so custom rules can easily override them. Most restrictions and allowances are handled by Security Groups, which will be set up in the next step.
You can see your Network Access Lists for the VPC by selecting the “Main network ACL” from the VPC details page for your VPC.
From the ACL page you can view the Inbound and Outbound rules.
To manually create security groups, refer to Appendix A – Security Groups / Network ACL Ports in the Appendices section and ensure you have defined all the relevant ports.
If you are using existing elements, gather their AWS IDs, as shown in the example below.
These modules create the IAM roles, networks, and security groups necessary for WEKA deployment. If you do not provide specific IDs for security groups or subnets, the modules create them for you. The availability_zones variable is required when creating a network and is currently limited to a single subnet. The following are auto-created if not supplied.
Private Network Deployment: To deploy a private network with NAT, you need to set certain variables, such as subnet_autocreate_as_private to true and provide a private CIDR range. To ensure instances do not get public IPs, set assign_public_ip to false.
SSH Keys: For SSH access, use the username ec2-user. You can either provide an existing key pair name or a public SSH key. If you don't provide either, the system will create a key pair and store the private key locally.
Application Load Balancer (ALB): If you want to create an ALB for the backend UI and WEKA client connections, you must set
The WEKA user token provides access to the WEKA binaries and is used to access https://get.weka.io during the installation process.
To find the user’s token, follow the instructions below.
In a web browser, navigate to get.weka.io and select the user’s name in the upper right-hand corner of the page.
From the column on the left-hand side of the page, select “API Tokens.” The user’s API token is presented on the screen and will be used later in the installation process.
The module is for deploying various AWS resources, including EC2 instances, DynamoDB tables, Lambda functions, State Machines, etc., for WEKA deployment on AWS.
Here's how you would structure the Terraform files:
Create a directory for the Terraform configuration files.
Navigate to the directory.
A main.tf file is needed to define the Terraform options. Create the main.tf file with your preferred editor.
Open the main.tf in your preferred editor.
Create the contents of the main.tf with the following:
Authentication is handled in the “provider” section. You need either the “Access key ID” and “Secret access key” for the AWS account’s IAM user that will be authenticated in AWS for the WEKA deployment, or the AWS CLI configured (which still requires the Access key ID and Secret access key, but only to authenticate once). If the AWS IAM user does not already have both the “Access key ID” and “Secret access key”, instructions on how to create them can be found here.
Authentication can be accomplished by editing the provider section to one of the following.
Option 1 is to hard code your access and secret key as seen here.
Option 2 is to provide only the region you will authenticate into and use the AWS CLI for authentication.
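Illustrative provider blocks for the two options (a sketch written to a scratch file; the region and key strings are placeholders, and access_key/secret_key are the AWS provider's standard inline-credential arguments):

```shell
# Write an example provider section; copy the variant you need into main.tf.
cat > /tmp/provider_example.tf <<'EOF'
# Option 1: credentials inline (keep such a file out of source control)
# provider "aws" {
#   region     = "us-east-1"
#   access_key = "YOUR_ACCESS_KEY_ID"
#   secret_key = "YOUR_SECRET_ACCESS_KEY"
# }

# Option 2: region only; credentials come from `aws configure` or the environment
provider "aws" {
  region = "us-east-1"
}
EOF
echo "wrote provider example"
```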
To authenticate the AWS CLI, use the following command.
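The standard configuration command is `aws configure`; a non-interactive environment-variable alternative is shown as well (the key values below are placeholders):

```shell
# `aws configure` prompts for the Access key ID, Secret access key,
# default region, and output format, and stores them under ~/.aws/:
#   aws configure
# Non-interactive alternative -- the same values via the AWS CLI's
# standard environment variables (placeholder values shown):
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="examplesecretkey"
export AWS_DEFAULT_REGION="us-east-1"
```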
Fill in the required information and hit enter.
Once the authentication method is decided, uncomment and fill in any additional information you will use.
Initialize the Terraform directory:
After creating and saving the file, in the same directory as the main.tf file run the following command.
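The command referenced here is the standard Terraform initialization step (sketched with a guard so the snippet is safe where Terraform is not installed):

```shell
# Run in the directory containing main.tf; downloads the AWS provider plugins.
cmd="terraform init"
command -v terraform >/dev/null 2>&1 && $cmd || echo "+ $cmd"
```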
This will ensure that the proper Terraform resource files for AWS are downloaded and available for the system.
Run Terraform Plan & Apply:
Best practice before applying or destroying a Terraform configuration is to run a plan using the following command.
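The plan step is the standard Terraform dry run (guarded so the snippet is safe where Terraform is not installed):

```shell
# Shows the resources Terraform would create, change, or destroy; nothing is applied.
cmd="terraform plan"
command -v terraform >/dev/null 2>&1 && $cmd || echo "+ $cmd"
```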
Initiate the deployment of WEKA in AWS by running the following command.
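The deployment itself is the standard apply step (guarded so the snippet is safe where Terraform is not installed):

```shell
# Creates the AWS resources; Terraform prompts for confirmation (type "yes").
cmd="terraform apply"
command -v terraform >/dev/null 2>&1 && $cmd || echo "+ $cmd"
```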
This command executes the creation of AWS resources necessary to run WEKA. Confirm deployment of resources by typing yes at the prompt.
When the Terraform AWS resource deployment process completes successfully, output similar to the example below is shown. If it is unsuccessful, an error message is shown instead.
Take note of “alb_dns_name”, “local_ssh_private_key”, and “ssh_user”; you will need them later when connecting to the machines through SSH.
There are also three AWS CLI commands that can return useful information.
Database: DynamoDB table for storing Weka cluster state.
EC2: Launch templates for auto-scaling groups and individual instances.
Networking: Placement Group, Auto Scaling Group, and optional ALB for UI and Backends.
deploy: Provides installation scripts for new machines.
clusterize: Provides the script for clusterization.
clusterize-finalization: Updates the cluster state upon the completion of clusterization.
The Terraform deployment also makes it easy to deploy additional instances to act as protocol nodes for NFS or SMB. These instances are in addition to the number of instances defined for the WEKA backend cluster count.
To deploy protocol nodes, additional information needs to be added to the main.tf
The simplest method is to just define how many protocol nodes of each type to deploy and allow the defaults to be used for everything else.
Add the following before the last ‘}’ of the file.
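A sketch of what that addition might look like, written to a scratch copy. The gateway-count variable names below follow the pattern described later in this document but are assumptions — confirm the exact names against the module's published inputs:

```shell
# Append hypothetical gateway-count settings to a scratch main.tf.
tf=/tmp/main.tf.example
printf 'module "weka_deployment" {\n  # ...existing settings...\n' > "$tf"
cat >> "$tf" <<'EOF'
  # Assumed variable names -- verify against the module documentation:
  nfs_protocol_gateways_number = 2
  smb_protocol_gateways_number = 3   # SMB requires at least three instances
}
EOF
echo "wrote protocol-node example"
```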
To gather your WEKA cluster IPs, go to the EC2 page in AWS and select “Instances (running)”
The instances for the WEKA backend servers are named <prefix>-<cluster_name>-instance-backend.
To access and manage the WEKA cluster, select any of the WEKA backend instances and note the IP address as shown below.
If your subnet provided a public IP address for the instance (if the EC2 instance was configured to receive one), it is listed. All interface IP addresses that WEKA uses for communication are private IPv4 addresses. You can find the primary private address by looking at the “Hostname type” field and noting the IP address there.
The password for the WEKA cluster is stored in AWS Secrets Manager. You can either run the AWS Secrets Manager command from the Terraform output to gather the cluster password, or use the AWS console.
From the AWS console search for “secret manager” and then select “Secrets” from the “Secrets Manager” section.
Click on the secret that contains the prefix and cluster_name of the deployment along with the word password.
Click on “Retrieve secret value”.
The randomly generated password that was assigned to WEKA user ‘admin’ will be displayed.
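The same value can be fetched from the CLI (a sketch; the secret ID is a placeholder — use the secret name shown in the Terraform output):

```shell
# Compose the Secrets Manager lookup; --query/--output strip the JSON wrapper.
secret_id="<prefix>-<cluster_name>-password"   # placeholder ID
cmd="aws secretsmanager get-secret-value --secret-id $secret_id --query SecretString --output text"
command -v aws >/dev/null 2>&1 && $cmd || echo "+ $cmd"
```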
The WEKA cluster backend instances can be accessed via SSH. If the WEKA backend instances do not have public IP addresses, a system that can reach the subnet they are in will be needed.
To access an instance from the system that ran the terraform deployment, use the IP address collected in step 4.5 and the ssh key path in the output of step 4.4.
Using a jump box with a GUI deployed into the same VPC and subnet as the WEKA cluster, the WEKA GUI can be accessed via a web browser.
In the examples below, a Windows 10 instance with a public IP address was deployed in the same VPC, subnet, and security group as the WEKA cluster. Network security group rules were added to allow RDP access explicitly to the Windows 10 system.
Open a browser in the Windows 10 jump box and visit https://<backend-IP>:14000. The WEKA GUI login screen should appear. Log in as user ‘admin’ with the password gathered in 4.5.
View the cluster GUI home screen.
Review the cluster backends.
Review the clients, if any, attached to the cluster.
Review the file systems.
Scaling (both out and in) the WEKA backend cluster can be easily done through the AWS AutoScaling Group Policy created by Terraform.
The Terraform-created lambda functions will be activated when a new instance is initiated or retired. These functions will then execute the required automation processes to add more computing resources (i.e., a new backend) to the cluster.
To scale out from the minimum of 6 nodes, go to the Auto Scaling Group page in the AWS console and change the desired capacity from its current number to the desired cluster size (e.g., 10 servers) by choosing “Edit”:
Then change the desired capacity (in the example below, it is set to “10”):
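The same capacity change can be scripted with the AWS CLI (a sketch; the group name is a placeholder — use the name created by your deployment):

```shell
# Scale the WEKA backend Auto Scaling group to 10 instances from the CLI.
asg_name="<prefix>-<cluster_name>-autoscaling-group"   # placeholder name
cmd="aws autoscaling set-desired-capacity --auto-scaling-group-name $asg_name --desired-capacity 10"
command -v aws >/dev/null 2>&1 && $cmd || echo "+ $cmd"
```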
Additionally, the auto-scaling provides the following advantages:
Integration with ALB:
Auto Scaling Groups seamlessly integrate with an Application Load Balancer (ALB) for efficient traffic distribution among multiple instances.
The ALB automatically identifies and routes traffic exclusively to healthy instances, relying on health check results from the associated Auto Scaling Group.
Decommissioning an old instance and allowing the Auto Scaling Group (ASG) to launch a new one involves terminating the existing instance and letting the ASG automatically replace it. Here's a brief guide:
Identify the Old Instance:
Identify the EC2 instance that you want to decommission. This could be based on age, outdated configurations, or other criteria.
Verify Auto Scaling Configuration:
This section provides examples of the permissions required to deploy WEKA using Terraform.
The minimum IAM policies needed are based on the assumption that the network, including VPC, subnets, VPC Endpoints, and Security Groups, is created by the end user. If IAM roles or policies are pre-established, some permissions may be omitted.
In each policy, replace the placeholders, such as account-number, prefix, and cluster-name, with the corresponding actual values.
Parameters:
DynamoDB: Full access is granted as your setup requires creating and managing DynamoDB tables.
Lambda: Full access is needed for managing various Lambda functions mentioned.
State Machine (AWS Step Functions): Full access is given for managing state machines.
Customization:
Resource Names and ARNs: Replace "Resource": "*" with specific ARNs for your resources to tighten security. Use specific ARNs for KMS keys as well.
Region and Account ID: Replace region and account-id with your AWS region and account ID.
Important Notes:
This is a broad policy for demonstration. It's recommended to refine it based on your actual resource usage and access patterns.
You may need to add or remove permissions based on specific requirements of your Terraform module and AWS environment.
Testing the policy in a controlled environment before applying it to production is advisable to ensure it meets your needs without overly restricting or exposing your resources.
The policies below are required for all the components to function on AWS. Terraform creates these policies as part of the automation. You can also create them yourself and define them in your Terraform modules.
Object Storage (OBS): To integrate with S3 for tiered storage, set tiering_enable_obs_integration to true and provide the name of the S3 bucket. You can also specify the SSD cache percentage.
Clients (optional): For automatically mounting clients to the WEKA cluster, provide the number of clients you want to create. Optional variables include instance type, number of network interfaces (NICs), and AMI ID.
NFS Protocol Gateways (optional): Similar to clients, specify the number of NFS protocol gateways you want. You can also provide additional configuration details like instance type and disk size.
SMB Protocol Gateways (optional): For SMB protocol gateways, you must create at least three instances. Additional configuration details are similar to NFS gateways.
Secret Manager: This is used to store sensitive information like usernames and passwords. If you cannot provide a secret manager endpoint, you can disable it by setting secretmanager_use_vpc_endpoint to false.
VPC Endpoints: If you need VPC endpoints for services like EC2, S3, or a proxy, you can enable them by setting the respective variables to true.
Terraform Output: The output from running the Terraform module will include details such as the SSH username and WEKA password secret ID. It will also provide helper commands to learn about the clusterization process.
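As a sketch, the optional features above map to module variables like the following. The values are illustrative, and tiering_obs_name is a hypothetical variable name for the S3 bucket, used here only for demonstration:

```hcl
module "weka_deployment" {
  # ... required variables as shown in the deployment example ...

  # Object storage tiering
  tiering_enable_obs_integration = true
  tiering_obs_name               = "my-tiering-bucket" # hypothetical bucket name

  # Optional client and protocol gateway instances
  clients_number               = 2
  nfs_protocol_gateways_number = 2 # A minimum of two is required
  smb_protocol_gateways_number = 3 # A minimum of three is required

  # Disable the Secrets Manager VPC endpoint if one cannot be provided
  secretmanager_use_vpc_endpoint = false
}
```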
IAM: Roles and policies required for various WEKA components.
Secret Manager: Safely stores Weka credentials and tokens.
status: Reports the cluster's progress status.
State Machine Functions: Manages various stages like fetching cluster information, scaling down, terminating, etc.
fetch: Retrieves cluster or autoscaling group details and forwards them to the subsequent stage.
scale-down: Utilizes the information fetched to operate on the Weka cluster, such as deactivating drives or hosts. The function will error out if a non-supported target is provided, like scaling down to only two backend instances.
terminate: Ends the operations of the deactivated hosts.
transient: Manages and reports transient errors. For instance, it might report if certain hosts couldn't be deactivated, yet some were, and the entire operation continued.
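To illustrate the kind of guard the scale-down stage applies, here is a minimal hypothetical sketch. The function name and the minimum backend count are assumptions for illustration, not the module's actual code:

```python
# Hypothetical sketch of scale-down validation: reject targets below the
# minimum number of backend instances a cluster can operate with.
MIN_BACKENDS = 6  # assumed minimum; scaling down to only two backends is unsupported

def validate_scale_down(current_size: int, target_size: int) -> int:
    """Return the target size if valid, otherwise raise ValueError."""
    if target_size < MIN_BACKENDS:
        raise ValueError(
            f"unsupported target: {target_size} backends (minimum is {MIN_BACKENDS})"
        )
    if target_size > current_size:
        raise ValueError("scale-down target exceeds current cluster size")
    return target_size

validate_scale_down(current_size=10, target_size=7)  # accepted
```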
In the event of an instance failing a health check, Auto Scaling promptly initiates the replacement process by launching a new instance into the WEKA cluster.
The new instance is incorporated into the service and added to the ALB's target group only after successfully passing health checks.
This systematic approach ensures uninterrupted availability and responsiveness of WEKA, mitigating the impact of instance failures.
Graceful Scaling:
Auto Scaling configurations can be fine-tuned to execute scaling actions gradually, preventing abrupt spikes in traffic or disruptions to the application.
This measured scaling approach aims to maintain a balanced and stable environment while effectively adapting to fluctuations in demand.
Ensure your Auto Scaling Group is configured with at least seven instances. Confirm that the desired capacity of the ASG is set to maintain the required number of instances in the cluster.
Terminate the Old Instance:
Manually terminate the old EC2 instance using the AWS Management Console, AWS CLI, or SDKs. This action triggers the Auto Scaling Group to take corrective measures.
Monitor Auto Scaling Activities:
Observe the Auto Scaling Group's activities in the AWS Console or use AWS CloudWatch to monitor events. Verify that the ASG detects the terminated instance and initiates the launch of a new instance.
Verify New Instance:
Once the new instance is launched, ensure that it passes the health checks and successfully joins the cluster. Confirm that the overall capacity of the cluster is maintained.
Check Load Balancer:
If your setup involves a load balancer, ensure it detects and registers the new instance. This step is crucial for maintaining proper load distribution across the cluster.
Review Auto Scaling Logs:
Check CloudWatch logs or Auto Scaling events for any issues or error messages related to the termination of the old instance and the launch of the new one.
Document and Monitor:
Document the decommissioning process and monitor the cluster to ensure it continues to operate smoothly with the new instance.
Application Load Balancer (ALB): Required for operations related to load balancing.
CloudWatch: Necessary for monitoring and managing CloudWatch rules and metrics.
Secrets Manager: Access for managing secrets in AWS Secrets Manager.
IAM: PassRole and GetRole are essential for allowing resources to assume specific roles.
KMS: Permissions for Key Management Service, assuming you use KMS for encryption.
Replace key-id with the ID of the KMS key used in your setup.
mkdir deploy
cd deploy
# Terraform configuration for deploying resources in AWS
terraform {
  required_version = ">= 1.4.6" # Minimum Terraform version required

  # Define required providers and their versions
  required_providers {
    aws = {
      source  = "hashicorp/aws" # AWS provider source
      version = ">= 5.5.0"      # Minimum AWS provider version required
    }
  }
}

# AWS provider configuration
provider "aws" {
  region     = "us-east-1"    # Desired AWS region
  access_key = "xxxxxxxxxxxx" # AWS CLI access key
  secret_key = "xxxxxxxxx"    # AWS CLI secret key
}

# Module for WEKA deployment
module "weka_deployment" {
  source  = "weka/weka/aws" # Source registry for the module
  version = "1.0.1"         # Module version to use

  prefix             = "WEKA"         # Prefix used for naming all AWS elements
  cluster_name       = "Prod"         # Name of the cluster
  availability_zones = ["us-east-1a"] # Availability zones for deployment
  allow_ssh_cidrs    = ["0.0.0.0/0"]  # CIDR blocks allowed for SSH access
  get_weka_io_token  = "<Your WEKA IO token>" # Token for WEKA IO authentication
  clients_number     = 2              # Number of client instances to deploy

  # Required variables for deploying in an existing environment. Comment out if you want Terraform to create everything
  vpc_id                   = "YOUR_VPC_ID"                   # ID of the VPC to be used
  subnet_ids               = ["YOUR_SUBNET_ID"]              # List of subnet IDs (primary subnet to deploy WEKA into)
  create_alb               = "false"                         # Flag to determine ALB creation
  alb_additional_subnet_id = "YOUR_ADDITIONAL_ALB_SUBNET_ID" # Additional subnet ID for ALB (secondary subnet in second AZ)

  # Uncomment the following to manually specify additional options for the existing environment (optional)
  # sg_ids                   = ["YOUR_SECURITY_GROUP_ID"]        # Existing security group IDs
  # instance_iam_profile_arn = "YOUR_INSTANCE_IAM_PROFILE_ARN"   # IAM role for EC2 instances
  # lambda_iam_role_arn      = "YOUR_LAMBDA_IAM_ROLE_ARN"        # IAM role for Lambda functions
  # sfn_iam_role_arn         = "YOUR_STATE_MACHINE_IAM_ROLE_ARN" # IAM role for state machines
  # event_iam_role_arn       = "YOUR_EVENT_IAM_ROLE_ARN"         # IAM role for event management
}
...
provider "aws" {
  region     = "us-east-1"
  access_key = "<your access key ID here>"
  secret_key = "<your access secret key here>"
}
......
provider "aws" {
  region = "us-east-1"
}
...

aws configure
terraform init
terraform plan
terraform apply

Outputs:
weka_deployment = {
"alb_alias_record" = null
"alb_dns_name" = "internal-WEKA-Prod-lb-697001983.us-east-1.elb.amazonaws.com"
"asg_name" = "WEKA-Prod-autoscaling-group"
"client_ips" = null
"cluster_helper_commands" = <<-EOT
aws ec2 describe-instances --instance-ids $(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name WEKA-Prod-autoscaling-group --query "AutoScalingGroups[].Instances[].InstanceId" --output text) --query 'Reservations[].Instances[].PublicIpAddress' --output json
aws lambda invoke --function-name WEKA-Prod-status-lambda --payload '{"type": "progress"}' --cli-binary-format raw-in-base64-out /dev/stdout
aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-east-1:459693375476:secret:weka/WEKA-Prod/weka-password-g9bH-T2og7D --query SecretString --output text
EOT
"cluster_name" = "Prod"
"ips_type" = "PublicIpAddress"
"lambda_status_name" = "WEKA-Prod-status-lambda"
"local_ssh_private_key" = "/tmp/WEKA-Prod-private-key.pem"
"nfs_protocol_gateways_ips" = tostring(null)
"smb_protocol_gateways_ips" = tostring(null)
"ssh_user" = "ec2-user"
"weka_cluster_password_secret_id" = "arn:aws:secretsmanager:us-east-1:459693375476:secret:weka/WEKA-Prod/weka-password-g9bH-T2og7D"
}
## Protocol Nodes ##
## For deploying NFS protocol nodes ##
nfs_protocol_gateways_number = 2 # A minimum of two is required
## For deploying SMB protocol nodes ##
smb_protocol_gateways_number = 3 # A minimum of three is required

ssh -l ec2-user -i /tmp/WEKA-Prod-private-key.pem 3.91.150.250

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::weka-tf-aws-releases*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DeletePlacementGroup"
],
"Resource": "arn:aws:ec2:us-east-1:account-number:placement-group/prefix-cluster-name*"
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribePlacementGroups"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstanceTypes"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateLaunchTemplate",
"ec2:CreateLaunchTemplateVersion",
"ec2:DeleteLaunchTemplate",
"ec2:DeleteLaunchTemplateVersions",
"ec2:ModifyLaunchTemplate",
"ec2:GetLaunchTemplateData"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeScalingActivities"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:SetInstanceProtection",
"autoscaling:SuspendProcesses",
"autoscaling:AttachLoadBalancerTargetGroups",
"autoscaling:DetachLoadBalancerTargetGroups"
],
"Resource": [
"arn:aws:autoscaling:*:account-number:autoScalingGroup:*:autoScalingGroupName/prefix-cluster-name-autoscaling-group"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:CreateFunction",
"lambda:DeleteFunction",
"lambda:GetFunction",
"lambda:ListFunctions",
"lambda:UpdateFunctionCode",
"lambda:UpdateFunctionConfiguration",
"lambda:ListVersionsByFunction",
"lambda:GetFunctionCodeSigningConfig",
"lambda:GetFunctionUrlConfig",
"lambda:CreateFunctionUrlConfig",
"lambda:DeleteFunctionUrlConfig",
"lambda:AddPermission",
"lambda:GetPolicy",
"lambda:RemovePermission"
],
"Resource": "arn:aws:lambda:*:account-number:function:prefix-cluster-name-*"
},
{
"Effect": "Allow",
"Action": [
"lambda:CreateEventSourceMapping",
"lambda:DeleteEventSourceMapping",
"lambda:GetEventSourceMapping",
"lambda:ListEventSourceMappings"
],
"Resource": "arn:aws:lambda:*:account-number:event-source-mapping:prefix-cluster-name-*"
},
{
"Sid": "ReadAMIData",
"Effect": "Allow",
"Action": [
"ec2:DescribeImages",
"ec2:DescribeImageAttribute",
"ec2:CopyImage"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:ImportKeyPair",
"ec2:CreateKeyPair",
"ec2:DeleteKeyPair",
"ec2:DescribeKeyPairs"
],
"Resource": "*"
},
{
"Action": [
"ec2:MonitorInstances",
"ec2:UnmonitorInstances",
"ec2:ModifyInstanceAttribute",
"ec2:RunInstances",
"ec2:CreateTags"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Sid": "DescribeSubnets",
"Effect": "Allow",
"Action": [
"ec2:DescribeSubnets"
],
"Resource": [
"*"
]
},
{
"Sid": "DescribeALB",
"Effect": "Allow",
"Action": [
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeListeners"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:CreatePlacementGroup"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:DeleteListener"
],
"Resource": [
"arn:aws:elasticloadbalancing:us-east-1:account-number:loadbalancer/app/prefix-cluster-name*",
"arn:aws:elasticloadbalancing:us-east-1:account-number:targetgroup/prefix-cluster-name*",
"arn:aws:elasticloadbalancing:us-east-1:account-number:listener/app/prefix-cluster-name*"
]
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeTargetGroupAttributes",
"elasticloadbalancing:DescribeTags"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DescribeInstances",
"ec2:DescribeTags",
"ec2:DescribeInstanceAttribute",
"ec2:DescribeVolumes"
],
"Resource": [
"*"
]
},
{
"Sid": "Statement1",
"Effect": "Allow",
"Action": [
"states:CreateStateMachine",
"states:DeleteStateMachine",
"states:TagResource",
"states:DescribeStateMachine",
"states:ListStateMachineVersions",
"states:ListStateMachines",
"states:ListTagsForResource"
],
"Resource": [
"arn:aws:states:us-east-1:account-number:stateMachine:prefix-cluster-name*"
]
},
{
"Sid": "Statement2",
"Effect": "Allow",
"Action": [
"ec2:TerminateInstances"
],
"Resource": [
"*"
]
}
]
}

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:DeleteSecret",
"secretsmanager:DescribeSecret",
"secretsmanager:GetSecretValue",
"secretsmanager:ListSecrets",
"secretsmanager:UpdateSecret",
"secretsmanager:GetResourcePolicy",
"secretsmanager:ListSecretVersionIds",
"secretsmanager:PutSecretValue"
],
"Resource": [
"arn:aws:secretsmanager:*:account-number:secret:weka/prefix-cluster-name/*"
]
},
{
"Effect": "Allow",
"Action": [
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:UpdateItem"
],
"Resource": "arn:aws:dynamodb:*:account-number:table/prefix-cluster-name*"
},
{
"Effect": "Allow",
"Action": [
"iam:CreatePolicy",
"iam:CreateRole",
"iam:DeleteRole",
"iam:DeletePolicy",
"iam:GetPolicy",
"iam:GetRole",
"iam:GetPolicyVersion",
"iam:ListRolePolicies",
"iam:ListInstanceProfilesForRole",
"iam:PassRole",
"iam:ListPolicyVersions",
"iam:ListAttachedRolePolicies",
"iam:ListAttachedGroupPolicies",
"iam:ListAttachedUserPolicies"
],
"Resource": [
"arn:aws:iam::account-number:policy/prefix-cluster-name-*",
"arn:aws:iam::account-number:role/prefix-cluster-name-*"
]
},
{
"Effect": "Allow",
"Action": [
"iam:AttachRolePolicy",
"iam:AttachGroupPolicy",
"iam:AttachUserPolicy",
"iam:DetachRolePolicy",
"iam:DetachGroupPolicy",
"iam:DetachUserPolicy"
],
"Resource": [
"arn:aws:iam::account-number:policy/prefix-cluster-name-*",
"arn:aws:iam::account-number:role/prefix-cluster-name-*",
"arn:aws:iam::account-number:role/ck-cluster-name-weka-iam-role"
]
},
{
"Effect": "Allow",
"Action": [
"iam:GetPolicy",
"iam:ListEntitiesForPolicy"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"iam:GetInstanceProfile",
"iam:CreateInstanceProfile",
"iam:DeleteInstanceProfile",
"iam:AddRoleToInstanceProfile",
"iam:RemoveRoleFromInstanceProfile"
],
"Resource": "arn:aws:iam::*:instance-profile/prefix-cluster-name-*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:PutRetentionPolicy",
"logs:DeleteLogGroup"
],
"Resource": [
"arn:aws:logs:us-east-1:account-number:log-group:/aws/lambda/prefix-cluster-name*",
"arn:aws:logs:us-east-1:account-number:log-group:/aws/vendedlogs/states/prefix-cluster-name*"
]
},
{
"Effect": "Allow",
"Action": [
"events:TagResource",
"events:PutRule",
"events:DescribeRule",
"events:ListTagsForResource",
"events:DeleteRule",
"events:PutTargets",
"events:ListTargetsByRule",
"events:RemoveTargets"
],
"Resource": [
"arn:aws:events:us-east-1:account-number:rule/prefix-cluster-name*"
]
},
{
"Effect": "Allow",
"Action": [
"dynamodb:CreateTable",
"dynamodb:DescribeTable",
"dynamodb:DescribeContinuousBackups",
"dynamodb:DescribeTimeToLive",
"dynamodb:ListTagsOfResource",
"dynamodb:DeleteTable"
],
"Resource": [
"arn:aws:dynamodb:us-east-1:account-number:table/prefix-cluster-name*"
]
},
{
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups",
"logs:ListTagsLogGroup"
],
"Resource": [
"*"
]
}
]
}

{
"Statement": [
{
"Action": [
"ec2:DescribeNetworkInterfaces",
"ec2:AttachNetworkInterface",
"ec2:CreateNetworkInterface",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:DeleteNetworkInterface"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"lambda:InvokeFunction"
],
"Effect": "Allow",
"Resource": [
"arn:aws:lambda:*:*:function:prefix-cluster_name*"
]
},
{
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::prefix-cluster_name-obs/*"
]
},
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams",
"logs:PutRetentionPolicy"
],
"Effect": "Allow",
"Resource": [
"arn:aws:logs:*:*:log-group:/wekaio/prefix-cluster_name*"
]
}
],
"Version": "2012-10-17"
}

{
"Statement": [
{
"Action": [
"s3:CreateBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::prefix-cluster_name-obs"
]
},
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Effect": "Allow",
"Resource": [
"arn:aws:logs:*:*:log-group:/aws/lambda/prefix-cluster_name*:*"
]
},
{
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:ModifyInstanceAttribute",
"ec2:TerminateInstances",
"ec2:DescribeInstances"
],
"Effect": "Allow",
"Resource": [
"*"
]
},
{
"Action": [
"dynamodb:GetItem",
"dynamodb:UpdateItem"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dynamodb:*:*:table/prefix-cluster_name-weka-deployment"
]
},
{
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:PutSecretValue"
],
"Effect": "Allow",
"Resource": [
"arn:aws:secretsmanager:*:*:secret:weka/prefix-cluster_name/*"
]
},
{
"Action": [
"autoscaling:DetachInstances",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:SetInstanceProtection"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
],
"Version": "2012-10-17"
}

{
"Statement": [
{
"Action": [
"lambda:InvokeFunction"
],
"Effect": "Allow",
"Resource": [
"arn:aws:lambda:*:*:function:prefix-cluster_name-*-lambda"
]
},
{
"Action": [
"logs:CreateLogDelivery",
"logs:GetLogDelivery",
"logs:UpdateLogDelivery",
"logs:DeleteLogDelivery",
"logs:ListLogDeliveries",
"logs:PutLogEvents",
"logs:PutResourcePolicy",
"logs:DescribeResourcePolicies",
"logs:DescribeLogGroups"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
],
"Version": "2012-10-17"
}

{
"Statement": [
{
"Action": [
"states:StartExecution"
],
"Effect": "Allow",
"Resource": [
"arn:aws:states:*:*:stateMachine:prefix-cluster_name-scale-down-state-machine"
]
}
],
"Version": "2012-10-17"
}
Explore the tasks you can automate using the WEKA REST API, their equivalent CLI commands, and the related information for background.
To get the most from the REST API, familiarize yourself with the documentation. Each REST API method corresponds to a CLI command, and most parameters available through the CLI are equally accessible through the REST API; run the CLI command with help for details. This ensures a smooth and consistent experience across both interfaces.
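As a sketch of that correspondence, the following builds an authenticated REST request without sending it. The host, port, and endpoint path are assumptions for illustration only, not values from this guide:

```python
import urllib.request

# Illustrative only: the base URL and endpoint path below are assumptions.
# Each REST API method mirrors a CLI command (here, roughly `weka alerts`).
base_url = "https://weka-cluster.example.com:14000/api/v2"  # hypothetical endpoint

def build_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for a WEKA REST API method."""
    return urllib.request.Request(
        url=f"{base_url}{path}",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="GET",
    )

req = build_request("/alerts", token="<access-token>")
print(req.full_url)  # https://weka-cluster.example.com:14000/api/v2/alerts
```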
Related information: User management
Unmute alerts by type: Reactivate specific types of alerts.
weka alerts unmute <alert-type>
Set cloud WEKA Home upload rate: Define the preferred data upload speed to the cloud service.
weka cloud upload-rate set --bytes-per-second <bps>
View cloud WEKA Home URL: Get the URL for accessing the cloud WEKA Home service.
weka cloud status
Enable cloud WEKA Home: Start using the cloud WEKA Home service.
weka cloud enable --cloud-url <cloud> --cloud-stats <on/off>
Disable cloud WEKA Home: Stop using the cloud WEKA Home service.
weka cloud disable
Remove a container: Stop and delete a container from the cluster.
weka cluster container remove <container-ids>
Apply configuration updates: Implement changes to all containers.
weka cluster container apply
Apply configuration updates: Implement changes to specific containers.
weka cluster container apply <container-ids>
Clear container failure: Reset the error record for a container.
weka cluster container clear-failure <container-ids>
Monitor container resources: Track resource usage (CPU, memory) for containers.
weka cluster container resources <container-ids>
Start all containers: Bring all inactive containers online and running.
weka cluster container activate
Start a specific container: Activate an individual container by name or identifier.
weka cluster container activate <container-ids>
Stop all containers: Gracefully shut down all running containers.
weka cluster container deactivate
Stop a specific container: Deactivate an individual container by name or identifier.
weka cluster container deactivate <container-ids>
View network details for all containers: See the network configuration and connectivity information for each container within the cluster.
weka cluster container net
View network details for a specific container: See the network configuration and connectivity information for a single container specified by its name or identifier.
weka cluster container net <container-ids>
Assign dedicated network: Give a container its network device (apply afterward to activate).
weka cluster container net add <container-ids>
Remove dedicated network: Take away a container's dedicated network device (apply afterward to activate).
weka cluster container net remove <container-ids>
View container hardware: See hardware details (IP addresses) for containers.
weka cluster container info-hw
Activate SSD drives in the cluster: Bring one or more SSD drives online and make them available for use in the cluster.
weka cluster drive activate <uuids>
Deactivate SSD drives in the cluster: Temporarily take one or more SSD drives offline, preventing their use in the cluster while preserving the stored data.
weka cluster drive deactivate <uuids>
Delete a filesystem: Remove a chosen filesystem and its data from the cluster.
weka fs delete <name>
Attach an object store bucket: Link an object store bucket to a filesystem, allowing data access from both locations.
weka fs tier s3 attach <fs-name>
Detach an object store bucket: Disconnect an object store bucket from a filesystem, separating their data access.
weka fs tier s3 detach <fs-name> <obs-name>
Restore a filesystem from a snapshot: Create a new filesystem based on a saved snapshot stored in an object store bucket.
weka fs download
View thin-provisioning status: Check the existing allocated thin-provisioning space reserved for your organization within the cluster.
weka fs reserve status
Reserve guaranteed SSD for your organization: Set the thin-provisioning space for your organization's filesystems.
weka fs reserve set <ssd-capacity>
Release dedicated SSD space for your organization: Remove the existing reserved thin-provisioning space allocated for your organization's filesystems.
weka fs reserve unset --org <org>
Get metadata for a specific file or directory: See detailed information about a specific file or directory using its unique identifier "inode context".
weka debug fs resolve-inode
Update directory quota parameters: Modify specific settings (like grace period) for an existing directory quota.
weka fs quota set <path> --soft <soft> --hard <hard> --grace <grace> --owner <owner>
Remove a directory quota (empty only): Disable the quota restrictions for a directory (requires a directory with no existing files).
weka fs quota unset <path>
Set/update default quota: Establish or change the default quota applied to all newly created directories.
weka fs quota set-default <path>
Unset a default directory quota: Disable the pre-defined quota restrictions automatically applied to new directories within the filesystem.
weka fs quota unset-default <path>
Delete a filesystem group: Remove a filesystem group and its associated permissions.
weka fs group delete <name>
Update an interface group: Modify the settings of an existing interface group.
weka nfs interface-group update <name>
Add an IP range to an interface group: Define a specific range of IP addresses within the existing interface group for network access.
weka nfs interface-group ip-range add <name> <ips>
Add a port to an interface group: Assign a specific port number to the interface group, making it accessible through that port.
weka nfs interface-group port add <name> <server-id> <port>
Remove an IP range from an interface group: Delete a previously defined IP range from the interface group, disabling its access.
weka nfs interface-group ip-range delete <name> <ips>
Remove a port from an interface group: Unassign a specific port from the interface group, making it no longer accessible through that port.
weka nfs interface-group port delete <name> <port>
View floating IPs: See a list of all allocated floating IPs and their existing assignments.
weka nfs interface-group assignment
Add port for all interface groups: Assign a port to be accessible by the specified interface group.
weka nfs interface-group port add <name> <server-id> <port>
Re-encrypt filesystems: Update the encryption keys for existing filesystems using the new KMS master key.
weka security kms rewrap
Revoke access: Remove permissions for client groups to access a designated NFS-mounted filesystem.
weka nfs permission delete <fs_name> <client-group-name>
View NFS client groups: See a list of all defined client groups for managing NFS access control.
weka nfs client-group
Create/add NFS client group: Establish a new group to manage access controls for NFS mounts.
weka nfs client-group add <group-name>
View a specific NFS client group: See a specific NFS client group for managing NFS access control.
weka nfs client-group --name <client-group-name>
Delete an NFS client group: Remove an existing NFS client group.
weka nfs client-group delete <client-group-name>
Add a DNS rule: Assign a DNS rule to an NFS client group for access control.
weka nfs rules add dns <client-group-name> <dns-rule>
Remove a DNS rule: Delete a DNS rule associated with an NFS client group.
weka nfs rules delete dns <client-group-name> <dns-rule>
Configure cluster-wide NFS settings: Manage global parameters for NFS operations, including the mountd service port, configuration filesystem for NFSv4, and supported NFS versions.
weka nfs global-config set
View cluster-wide NFS configuration: Get the global parameters for NFS operations, including the mountd service port, configuration filesystem for NFSv4, and supported NFS versions.
weka nfs global-config show
View logging verbosity: Check the existing logging level for container processes involved in the NFS cluster.
weka nfs debug-level show
Set logging verbosity: Adjust the logging level for container processes involved in the NFS cluster.
weka nfs debug-level set <debug-level>
Update an S3 connection: Modify an existing S3 object store bucket connection.
weka fs tier s3 update <bucket-name>
View snapshots: List and view details about uploaded snapshots within an object store.
weka fs tier s3 snapshot list <bucket-name>
Delete organization: Remove an organization from the cluster.
weka org delete <org name or ID>
Update organization name: Change the name of an existing organization.
weka org rename <org name or ID> <new-org-name>
Set organization quotas: Define SSD and total storage quotas for an organization.
weka org set-quota <org name or ID>
View buckets: See a list of all buckets within an S3 cluster.
weka s3 bucket list
Create an S3 bucket: Establish a new bucket within an S3 cluster.
weka s3 bucket create
View S3 user policies: See a list of S3 user policies.
weka s3 bucket policy
Delete an S3 bucket: Delete a specified S3 bucket.
weka s3 bucket delete <bucket-name>
View S3 IAM policies: See a list of S3 IAM policies.
weka s3 policy list
Add an S3 IAM policy: Create a new S3 IAM policy.
weka s3 policy add
View S3 IAM policy details: See details about a specific S3 IAM policy.
weka s3 policy show <policy-name>
Remove an S3 IAM policy: Delete an S3 IAM policy.
weka s3 policy remove <policy-name>
Attach an S3 IAM policy to a user: Assign an S3 IAM policy to a user.
weka s3 policy attach <policy> <user>
Detach an S3 IAM policy from a user: Remove an S3 IAM policy from a user.
weka s3 policy detach <user>
View service accounts: See a list of S3 service accounts.
weka s3 service-account list
Create an S3 service account: Establish a new S3 service account.
weka s3 service-account add <policy-file>
View service account details: See details about a specific S3 service account.
weka s3 service-account show <access-key>
Delete an S3 service account: Remove an S3 service account.
weka s3 service-account remove <access-key>
Create an S3 STS token: Create an S3 STS token with an assumed role.
weka s3 sts assume-role
Add lifecycle rule: Create a new lifecycle rule for an S3 bucket.
weka s3 bucket lifecycle-rule add <bucket-name>
Reset lifecycle rules: Reset all lifecycle rules for an S3 bucket to their default settings.
weka s3 bucket lifecycle-rule reset <bucket-name>
View lifecycle rules: See a list of all lifecycle rules for an S3 bucket.
weka s3 bucket lifecycle-rule list <bucket-name>
Delete lifecycle rule: Remove a lifecycle rule from an S3 bucket.
weka s3 bucket lifecycle-rule remove <bucket-name> <rule-name>
View S3 bucket policy: See the policy attached to an S3 bucket.
weka s3 bucket policy get <bucket-name>
Set S3 bucket policy: Assign a policy to an S3 bucket.
weka s3 bucket policy set <bucket-name> <bucket-policy>
View S3 bucket policy (JSON): See the bucket policy in JSON format.
weka s3 bucket policy get-json <bucket-name>
Set S3 bucket policy (JSON): Set the bucket policy using a JSON file.
weka s3 bucket policy set-custom <bucket-name> <policy-file>
Set S3 bucket quota: Define a storage quota for an S3 bucket.
weka s3 bucket quota set <bucket-name> <hard-quota>
Unset S3 bucket quota: Remove a storage quota from an S3 bucket.
weka s3 bucket quota unset <bucket-name>
View container readiness: Check the readiness status of containers within the S3 cluster.
weka s3 cluster status
Add container to S3 cluster: Add a container to the S3 cluster.
weka s3 cluster containers add <container-ids>
Remove containers: Remove containers from the S3 cluster.
weka s3 cluster containers remove <container-ids>
View logging verbosity: See the logging level for container processes within the S3 cluster.
weka s3 log-level get
Set logging verbosity: Adjust the logging level for container processes within the S3 cluster.
weka s3 log-level set <log-level>
Enable S3 audit webhook: Activate the S3 audit webhook.
weka s3 cluster audit-webhook enable
Disable S3 audit webhook: Deactivate the S3 audit webhook.
weka s3 cluster audit-webhook disable
View S3 audit webhook configuration: See details about the S3 audit webhook configuration.
weka s3 cluster audit-webhook show
View trusted domains (SMB): See a list of trusted domains recognized by the SMB cluster (not yet supported on SMB-W).
weka smb cluster trusted-domains
Add trusted domain (SMB): Add a new trusted domain to the SMB cluster (not yet supported on SMB-W).
weka smb cluster trusted-domains add
View SMB mount options: See a list of mount options used by the existing SMB cluster.
N/A
View SMB shares: See a list of all shares available within the SMB cluster.
weka smb share
Add SMB share: Create a new share within the SMB cluster.
weka smb share add <share-name> <fs-name>
Join Active Directory: Integrate the SMB cluster with an Active Directory domain.
weka smb domain join <username> <password>
Leave Active Directory: Disconnect the SMB cluster from the Active Directory domain.
weka smb domain leave <username>
Set SMB container logging verbosity: Adjust the logging level for container processes in the SMB cluster.
weka smb cluster debug <level>
Update SMB share: Modify the configuration of an existing SMB share.
weka smb share update <share-id>
Delete SMB share: Remove an SMB share from the cluster.
weka smb share remove <share-id>
Remove trusted domain (SMB): Remove a trusted domain from the SMB cluster.
weka smb cluster trusted-domains remove
Add SMB share users: Add users associated with a specific SMB share.
weka smb share lists add <share-id> <user-list-type> --users <users>
Remove SMB share users: Remove users associated with a specific SMB share.
weka smb share lists reset <share-id> <user-list-type>
Remove specific SMB share users: Remove specific users associated with a specific SMB share.
weka smb share lists remove <share-id> <user-list-type> --users <users>
View SMB container status: Check the status of containers participating in the SMB cluster.
weka smb cluster status
Add SMB cluster containers: Add containers to the SMB cluster.
weka smb cluster containers add --containers-id <containers-id>
Remove SMB cluster containers: Remove containers from the SMB cluster.
weka smb cluster containers remove --containers-id <containers-id>
Hide login banner: Hide the login banner from the sign-in page.
weka security login-banner disable
Add or update custom CA certificate: Upload a custom CA certificate to be used for authentication. If a certificate is already present, this command replaces it.
weka security ca-cert set
Delete custom CA certificate: Remove the currently configured custom CA certificate from the cluster.
weka security ca-cert unset
View cluster CA certificate: See the status and details of the cluster's CA certificate.
weka security ca-cert status
Delete snapshot: Remove a snapshot from the system.
weka fs snapshot delete <file-system> <snapshot-name>
Copy snapshot: Copy one snapshot over another within the same filesystem.
weka fs snapshot copy <file-system> <source-name> <destination-name>
Upload snapshot to object store: Transfer a snapshot to an object store.
weka fs snapshot upload <file-system> <snapshot-name>
Download snapshot: Download a snapshot from an object storage system.
weka fs snapshot download
Restore filesystem from snapshot: Restore a filesystem using a previously created snapshot.
weka fs restore <file-system> <snapshot-name>
Set stats retention: Define the duration for which statistics are stored.
weka stats retention set --days <num-of-days>
View background task limits: See the existing limitations on the number of background tasks running concurrently within the system. This information helps you understand the capacity for handling background processes.
weka cluster task limits
Set background task limits: Adjust the maximum number of background tasks allowed to run simultaneously. This allows you to control the system's resource allocation and potential performance impact from concurrent tasks.
weka cluster task limits set
Set trace freeze period: Set the duration for which trace data is preserved for investigation.
weka debug traces freeze set
Clear frozen traces: Remove all existing frozen traces and reset the freeze period to zero.
weka debug traces freeze reset
Set trace verbosity level: Modify the level of detail captured in trace logs. Low captures essential information for basic troubleshooting. High captures extensive details for in-depth analysis.
weka debug traces level set
Set a local user password: Assign a password to a local user.
weka user passwd
Update a local user password: Change your own password, or, with the necessary permissions, the password of another user. Admins can change the password of any user within the organization.
weka user passwd <username>
View the logged-in user: Get information about the currently logged-in user.
weka user whoami
Invalidate user sessions: Immediately terminate all active login sessions (GUI, CLI, API) associated with a specific internal user. This action prevents further access to the system using those tokens.
weka user revoke-tokens
Update Active Directory: Change the cluster's configuration to use a different Active Directory server or modify its settings.
weka user ldap setup-ad
View all alerts: Get a complete list of active alerts, including silenced ones.
weka alerts
List possible alerts: See all types of alerts the cluster can generate.
weka alerts types
List alert types with actions: View different alert types and their recommended troubleshooting steps.
weka alerts describe
Mute alerts by type: Silence specific types of alerts.
weka alerts mute <alert-type> <duration>
View cloud WEKA Home configuration: See the existing settings for the cloud WEKA Home service.
weka cloud status
View cloud WEKA Home proxy URL: Get the existing URL to access cloud services.
weka cloud proxy
Set cloud WEKA Home proxy URL: Change the URL used to access cloud services.
weka cloud proxy --set <proxy_url>
View cloud WEKA Home upload rate: See the existing data upload speed to the cloud service.
weka cloud upload-rate
Create a cluster: Start a new cluster with chosen configurations.
weka cluster create <host-hostnames>
Update cluster configuration: Modify settings for an existing cluster.
weka cluster update
View cluster status: Check the overall health and performance of the cluster.
weka status --json
List containers: See all containers running in the cluster.
weka cluster container
Add a container: Introduce a new container to the cluster (apply afterward to activate).
weka cluster container add <hostname>
View container details: Get information about a specific container (resources, state).
weka cluster container <container-ids>
Update container configuration: Change settings for a container (cores, memory).
weka cluster container <container-ids> <subcommand>
Check default network setup: Review the predefined network properties for container deployments.
weka cluster default-net
Define new network defaults: Define the IP address range, gateway address, and subnet mask to be used for future container network assignments.
weka cluster default-net set
Modify existing network defaults: Change the parameters like IP range, gateway, or subnet mask used for future container network assignments.
weka cluster default-net update
Clear custom network defaults: Remove any modifications to the standard network settings and return to the initial baseline.
weka cluster default-net reset
View a list of all SSD drives in the cluster: Get information about all available SSD drives within the cluster, including size, UUID, status, and more.
weka cluster drive
Add a new SSD drive to a container: Attach an additional SSD drive to a specific container within the cluster to expand its available resources.
weka cluster drive add <container-id> <device-paths>
View a specific SSD drive in the cluster: Get detailed information about a particular SSD drive in the cluster.
weka cluster drive <uuids>
Remove an SSD drive from the cluster: Detach an SSD drive from the cluster, making it unavailable for further use.
weka cluster drive remove <uuids>
Filter and explore events: Find specific events in the cluster by applying filters based on criteria like severity, category, and time range.
weka events
Get event details: View a detailed description of a specific event type, including its meaning and potential causes.
weka events list-types
Analyze event trends: See how events occur over time by aggregating them within a specific time interval.
weka events --start-time <start> --end-time <end> --show-internal
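The `--start-time` and `--end-time` values are timestamps. As a small sketch (assuming, not confirmed by this reference, that `weka events` accepts ISO-8601 UTC timestamps; check `weka events --help` for the formats your version supports), a one-hour window can be built like this:

```python
from datetime import datetime, timedelta, timezone

# Build a one-hour event window. A fixed end time is used here so the
# output is reproducible; in practice use datetime.now(timezone.utc).
end = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
start = end - timedelta(hours=1)

fmt = "%Y-%m-%dT%H:%M:%SZ"  # ISO-8601 UTC (assumed accepted format)
cmd = (
    f"weka events --start-time {start.strftime(fmt)} "
    f"--end-time {end.strftime(fmt)} --show-internal"
)
print(cmd)
```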
Trace events by server: Focus on events generated by a specific server in the cluster for deeper troubleshooting.
weka events list-local
Create custom events: Trigger and record your custom events with additional user-defined parameters for enhanced monitoring and logging.
weka events trigger-event
View all failure domains: Get a list of all available failure domains within the cluster.
weka cluster failure-domain
View details of a specific failure domain: See information about a single failure domain, including its resources and capacity.
weka cluster container <container-ids>
List all filesystems: Get a complete list of all defined filesystems in the cluster.
weka fs
Create a new filesystem: Configure and establish a new filesystem within the cluster.
weka fs create
View details of a specific filesystem: Get detailed information about a specified filesystem, such as its size, quota, and usage.
weka fs --name <name>
Modify a filesystem: Change the settings or properties of an existing filesystem.
weka fs update <name>
View quotas: See a list of the existing quota settings for all directories within the filesystem.
weka fs quota list <fs-name>
View default quotas: Check the default quota configuration applied to new directories.
weka fs quota list-default
View a directory quota: List the parameters of a specific directory quota.
weka fs quota list <fs-name> --path <path>
Set/update a directory quota (empty only): Specify disk space limits for an individual directory (requires a directory with no existing files).
weka fs quota set <path>
View filesystem groups: See a list of all existing filesystem groups.
weka fs group
Create/add a filesystem group: Establish a new group to share and manage access control for certain filesystems.
weka fs group create
View filesystem group details: Get specific information about a particular filesystem group.
N/A
Update a filesystem group: Modify the properties of an existing filesystem group.
weka fs group update <name>
Check REST API status: Verify the existing functionality and availability of the REST API used for programmatic system access.
N/A
Check GUI status: Confirm the proper operation and responsiveness of the graphical user interface.
N/A
View interface groups: See a list of all interface groups configured in the system.
weka nfs interface-group
Create/add an interface group: Set up a new interface group to manage network configuration for specific IP addresses and ports.
weka nfs interface-group add
View interface group details: See specific information about a particular interface group.
weka nfs interface-group --name <name>
Delete an interface group: Remove an interface group and its associated network definitions.
weka nfs interface-group delete <name>
View KMS configuration: See the existing Key Management Service (KMS) settings for encrypting filesystems.
weka security kms
Set configuration (new KMS): Establish a new KMS configuration with details like type, address, and key identifier.
weka security kms set <type> <address> <key-identifier>
Delete configuration (unused only): Remove the KMS configuration if no encrypted filesystems rely on it.
weka security kms unset
View existing KMS type: Find out whether HashiCorp Vault or KMIP is used for KMS.
weka security kms
View LDAP configuration: Get detailed information about the configured settings for connecting to your LDAP server. This includes information like the server address, port, base DN, and authentication method.
weka user ldap
Update LDAP configuration: Modify the existing settings used for connecting to your LDAP server. This may involve changing the server details, authentication credentials, or other relevant parameters.
weka user ldap setup
Disable LDAP: Deactivate the integration with your LDAP server for user authentication.
weka user ldap disable
View license details: Get information about the configured cluster license, including resource usage and validity.
weka cluster license
Set license: Install a new cluster license for continued operation.
weka cluster license set <license>
Remove license: Deactivate the existing license and return the cluster to unlicensed mode.
weka cluster license reset
View policy: See the configured settings for the lockout policy, including attempt limits and duration.
weka security lockout-config show
Update policy: Modify the parameters of the lockout policy to adjust login security.
weka security lockout-config set
Reset lockout: Clear the failed login attempts counter and unlock any currently locked accounts.
weka security lockout-config reset
Log in to the cluster: Authenticate and grant access to the cluster using valid credentials. Securely save user credentials in the user's home directory upon successful login.
weka user login
Retrieve access token: Obtain a new access token using an existing refresh token. The system creates an authentication token file and saves it in ~/.weka/auth-token.json. The token file contains both the access token and the refresh token.
weka user login
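The token file can also be consumed by scripts, for example to pass a bearer token to the REST API. The sketch below parses token-file contents; the field names `access_token` and `refresh_token` are assumptions about the file layout, not taken from this reference, so inspect your own ~/.weka/auth-token.json to confirm them.

```python
import json

# Example contents of ~/.weka/auth-token.json. The field names below are
# assumptions for illustration; inspect the real file to confirm them.
sample = '{"access_token": "AAA", "refresh_token": "RRR", "token_type": "Bearer"}'

def load_tokens(raw: str) -> tuple[str, str]:
    """Return (access_token, refresh_token) from token-file contents."""
    data = json.loads(raw)
    return data["access_token"], data["refresh_token"]

access, refresh = load_tokens(sample)
print(access, refresh)
```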
View cluster-wide mount options: See the configured mount options applied to all filesystems across the cluster.
weka cluster mount-defaults show
Set cluster-wide mount options: Configure default options for mounting filesystems across the cluster.
weka cluster mount-defaults set
Reset cluster-wide mount options: Revert default mount options to initial settings for all filesystems in the cluster.
weka cluster mount-defaults reset
View NFS permissions: See a list of the existing access controls for client groups accessing filesystems through NFS.
weka nfs permission
Grant NFS permissions: Assign permissions for a specific client group to access a designated NFS-mounted filesystem.
weka nfs permission add <fs_name> <client-group-name>
View NFS permissions of a specific filesystem: See existing access controls for client groups accessing a specific filesystem through NFS.
weka nfs permission --filesystem <fs_name>
Modify NFS permissions: Update existing access controls for client groups using an NFS-mounted filesystem.
weka nfs permission update <fs_name> <client-group-name>
Update object store connection: Update details for an existing object store connection.
weka fs tier obs update <obs-name>
View S3 configurations: See a list of connection and status details for all S3 object store buckets.
weka fs tier s3
Create an S3 connection: Establish a new S3 object store bucket connection.
weka fs tier s3 add <obs-name>
View an S3 connection: See a list of connection and status details for a specific S3 object store bucket.
weka fs tier s3 --obs-name <obs-name> --name <bucket-name>
Delete an S3 connection: Remove an existing S3 object store connection.
weka fs tier s3 delete <obs-name>
Check for multiple organizations: Verify if multiple organizations exist within the cluster.
weka org
View organizations: See a list of all organizations defined in the cluster.
weka org
Add organization: Create a new organization within the cluster.
weka org create
View organization details: See information about an existing organization.
weka org <org name or ID>
View all processes' details: See information about all running processes within the cluster.
weka cluster processes
View process details: See information about a specific process based on its ID.
weka cluster processes <process-ids>
View S3 cluster information: See details about the S3 cluster managed by WEKA.
weka s3 cluster
Create an S3 cluster: Establish a new S3 cluster.
weka s3 cluster create
Update an S3 cluster: Modify the configuration of an existing S3 cluster.
weka s3 cluster update
Delete an S3 cluster: Remove an S3 cluster.
weka s3 cluster destroy
View SMB cluster configuration: See details about the existing SMB cluster configuration.
weka smb cluster
Create SMB cluster: Establish a new SMB cluster managed by WEKA.
weka smb cluster create <netbios-name> <domain> <config-fs-name>
Update SMB cluster configuration: Modify the existing configuration of an SMB cluster.
weka smb cluster update
Remove SMB cluster configuration: Disable SMB access to data without affecting the data itself.
weka smb cluster destroy
View token expiry: See the default expiry time for tokens.
N/A
View login banner: See the existing login banner displayed on the sign-in page.
weka security login-banner show
Set login banner: Create or modify the login banner containing a security statement or legal message.
weka security login-banner set <login-banner>
Show login banner: Show the login banner on the sign-in page.
weka security login-banner enable
View cluster servers: See a list of all servers within the cluster.
weka cluster servers list
View server details: See specific information about an individual server based on its UID.
weka cluster servers show
View snapshots: See a list of all snapshots currently available.
weka fs snapshot
Create snapshot: Establish a new snapshot of a filesystem.
weka fs snapshot create <file-system> <snapshot-name>
View snapshot details: See specific information about an existing snapshot.
weka fs snapshot --name <snapshot-name>
Update snapshot: Modify the configuration of an existing snapshot.
weka fs snapshot update <file-system> <snapshot-name>
View stats: See a list of various statistics related to the cluster's performance and resource usage.
weka stats
View stats description: Get detailed explanations of the available statistics.
weka stats list-types
View real-time stats: Monitor live statistics for the cluster.
weka stats realtime
View stats retention and disk usage: See how long statistics are retained and estimate disk space used for storage.
weka stats retention status
Start cluster IO services: Enable the cluster-wide IO services.
weka cluster start-io
Stop cluster IO services: Disable the cluster-wide IO services.
weka cluster stop-io
View background tasks: See a list of all currently running background tasks within the cluster.
weka cluster task
Resume a background task: Re-initiate a paused background task, allowing execution to continue.
weka cluster task resume <task-id>
Pause a background task: Temporarily halt the execution of a running background task. The task can be resumed later.
weka cluster task pause <task-id>
Abort a background task: Terminate a running background task, permanently stopping its execution. Any unfinished work associated with the task will be discarded.
weka cluster task abort <task-id>
View cluster TLS status: Check the status and details of the cluster's TLS certificate.
weka security tls status
Configure Nginx with TLS: Enable TLS for the UI and set or update the private key and certificate.
weka security tls set
Configure Nginx without TLS: Disable TLS for the UI.
weka security tls unset
Download TLS certificate: Download the cluster's TLS certificate.
weka security tls download
View traces configuration: See the current configuration settings for trace collection.
weka debug traces status
Start trace collection: Initiate the collection of trace data.
weka debug traces start
Stop trace collection: Stop the collection of trace data.
weka debug traces stop
View trace freeze period: See the duration for which trace data is preserved for investigation.
weka debug traces freeze show
View local users: See a list of all local users on the system.
weka user
Create a local user: Add a new local user account.
weka user add <username> <role> <password>
Update a local user: Modify the details of an existing local user.
weka user update <username>
Delete a local user: Remove a local user account from the system.
weka user delete <username>
This CLI reference guide is generated from the output of running the weka command with the help option. It provides detailed descriptions of available commands, arguments, and options.
The base command for all weka-related CLIs
--agent
Start the agent service
Commands that control the weka agent (outside the weka containers)
Installs Weka agent on the machine the command is executed from
Update the currently available containers and version specs to the current agent version. This command does not update weka, only the container's representation on the local machine.
List the Weka spec versions that are supported by this agent version
Deletes all Weka files, drivers, shared memory and any other remainder from the machine this command is executed from. WARNING - This action is destructive and might cause a loss of data!
Bash autocompletion utilities
weka agent autocomplete install
Locally install bash autocompletion utility
weka agent autocomplete uninstall
Locally uninstall bash autocompletion utility
weka agent autocomplete export
Export bash autocompletion script
List alerts in the Weka cluster
List all alert types that can be returned from the Weka cluster
Mute an alert-type. Muted alerts are not shown when listing active alerts. Alerts cannot be suppressed indefinitely, so a duration must be supplied. Once the supplied duration has passed, the alert-type will be automatically unmuted
Unmute an alert-type which was previously muted.
Describe all the alert types that might be returned from the weka cluster (including explanations and how to handle them)
Cloud commands. List the cluster's cloud status, if no subcommand supplied.
Show cloud connectivity status
Turn cloud features on
Turn cloud features off
Get or set the HTTP proxy used to connect to cloud services
Update cloud settings
Get the cloud upload rate
weka cloud upload-rate set
Set the cloud upload rate
Commands that manage the cluster
Form a Weka cluster from hosts that just has Weka installed on them
Update cluster configuration
List the cluster processes
List the cluster buckets, logical compute units used to divide the workload in the cluster
List the Weka cluster failure domains
Get or set the number of hot-spare failure-domains in the cluster. If param is not given, the current number of hot-spare FDs will be listed
Start IO services
Stop IO services
List the cluster's drives
weka cluster drive scan
Scan for provisioned drives on the cluster's containers
weka cluster drive activate
Activate the supplied drive, or all drives (if none supplied)
weka cluster drive deactivate
Deactivate the supplied drive(s)
weka cluster drive add
Add the given drive
weka cluster drive remove
Remove the supplied drive(s)
Commands for editing default mount options
weka cluster mount-defaults set
Set default mount options.
weka cluster mount-defaults show
View default mount options
weka cluster mount-defaults reset
Reset default mount options
Commands for physical servers
weka cluster servers list
List the cluster servers
weka cluster servers show
Show a single server overview according to given server uid
List the cluster containers
weka cluster container info-hw
Show hardware information about one or more containers
weka cluster container failure-domain
Set the container failure-domain
weka cluster container dedicate
Set the container as dedicated to weka. For example, it can be rebooted whenever needed and configured by weka for optimal performance and stability
weka cluster container bandwidth
Limit weka's bandwidth for the container
weka cluster container cores
Dedicate container's cores to weka
weka cluster container memory
Dedicate a set amount of RAM to weka
weka cluster container auto-remove-timeout
Set how long to wait before removing this container if it disconnects from the cluster (for clients only)
weka cluster container management-ips
Set the container's management process IPs. Setting 2 IPs will turn this container's networking into highly-available mode
weka cluster container resources
Get the resources of the supplied container
weka cluster container restore
Restore staged resources of the supplied containers, or all containers, to their stable state
weka cluster container apply
Apply the staged resources of the supplied containers, or all containers
weka cluster container activate
Activate the supplied containers, or all containers (if none supplied)
weka cluster container deactivate
Deactivate the supplied container(s)
weka cluster container clear-failure
Clear the last failure fields for all supplied containers
weka cluster container add
Add a container to the cluster
weka cluster container remove
Remove a container from the cluster
weka cluster container factory-reset
Factory resets the containers. NOTE! This can't be undone!
weka cluster container net
List Weka dedicated networking devices in a container
weka cluster container net add
Allocate a dedicated networking device on a container (to the cluster).
weka cluster container net remove
Undedicate a networking device in a container.
List the default data networking configuration
weka cluster default-net set
Set the default data networking configuration
weka cluster default-net update
Update the default data networking configuration
weka cluster default-net reset
Reset the default data networking configuration
Get information about the current license status, how much resources are being used in the cluster and whether or not your current license is valid.
weka cluster license payg
Enable pay-as-you-go for the cluster
weka cluster license reset
Removes existing license information, returning the cluster to an unlicensed mode
weka cluster license set
Set the cluster license
List the currently running background tasks and their status
weka cluster task pause
Pause a currently running background task
weka cluster task resume
Resume a currently paused background task
weka cluster task abort
Abort a currently running background task
weka cluster task limits
List the current limits for background tasks
weka cluster task limits set
Set the limits for background tasks
Commands that manage the clients target version
weka cluster client-target-version show
Show the clients' target version to be used in case of an upgrade or a new mount (stateless client).
weka cluster client-target-version set
Set the clients' target version to be used in case of an upgrade or a new mount (stateless client).
weka cluster client-target-version reset
Clear cluster's client target version value
Diagnostics commands to help understand the status of the cluster and its environment
Collect diags from all cluster hosts to a directory on the host running this command
Prints results of a previously collected diags report
Stop a running instance of diags, and cancel its uploads.
Collect and upload diags from all cluster hosts to Weka's support cloud
List all events that conform to the filter criteria
List recent events that happened on the machine running this command
Show the event type definition information
Trigger a custom event with a user defined parameter
List filesystems defined in this Weka cluster
Create a filesystem
Download a filesystem from object store
Update a filesystem
Delete a filesystem
Restore filesystem content from a snapshot
Commands used to control directory quotas
weka fs quota set
Set a directory quota in a filesystem
weka fs quota set-default
Set a default directory quota in a filesystem
weka fs quota unset
Unsets a directory quota in a filesystem
weka fs quota unset-default
Unsets a default directory quota in a filesystem
weka fs quota list
List filesystem quotas (by default, only exceeding ones)
weka fs quota list-default
List filesystem default quotas
List filesystem groups
weka fs group create
Create a filesystem group
weka fs group update
Update a filesystem group
weka fs group delete
Delete a filesystem group
List snapshots
weka fs snapshot create
Create a snapshot
weka fs snapshot copy
Copy one snapshot over another
weka fs snapshot update
Update snapshot parameters
weka fs snapshot access-point-naming-convention
Access point naming convention
weka fs snapshot access-point-naming-convention status
Show access point naming convention
weka fs snapshot access-point-naming-convention update
Update access point naming convention
weka fs snapshot upload
Upload a snapshot to object store
weka fs snapshot download
Download a snapshot into an existing filesystem
weka fs snapshot delete
Delete a snapshot
Show object store connectivity for each node in the cluster
weka fs tier location
Show data storage location for a given path
weka fs tier fetch
Fetch object-stored files to SSD storage
weka fs tier release
Release object-stored files from SSD storage
weka fs tier capacity
List capacities for object store buckets attached to filesystems
weka fs tier s3
List S3 object store buckets configuration and status
weka fs tier s3 add
Create a new S3 object store bucket connection
weka fs tier s3 update
Edit an existing S3 object store bucket connection
weka fs tier s3 delete
Delete an existing S3 object store connection
weka fs tier s3 attach
Attach a filesystem to an existing Object Store
weka fs tier s3 detach
Detach a filesystem from an attached object store
weka fs tier s3 snapshot
Commands used to display info about uploaded snapshots
weka fs tier s3 snapshot list
List and show info about snapshots uploaded to Object Storage
weka fs tier ops
List all the operations currently running on an object store from all the hosts in the cluster
weka fs tier obs
List object stores configuration and status
weka fs tier obs update
Edit an existing object store
Thin provisioning reserve for organizations
weka fs reserve status
Thin provisioning reserve for organizations
weka fs reserve set
Set an organization's thin provisioning SSD reserve
weka fs reserve unset
Unset an organization's thin provisioning SSD's reserve
List interface groups
List the currently assigned interface for each floating-IP address in the given interface-group. If no interface-group is supplied, assignments for all floating-IP addresses will be listed
Create an interface group
Update an interface group
Delete an interface group
Commands that manage interface-groups' ip-ranges
weka interface-group ip-range add
Add an ip range to an interface group
weka interface-group ip-range delete
Delete an ip range from an interface group
Commands that manage interface-groups' ports
weka interface-group port add
Add a server port to an interface group
weka interface-group port delete
Delete a server port from an interface group
Commands that control weka and its containers on the local machine
Installs Weka agent on the machine the command is executed from
Collect diagnostics from the local machine
List the events saved to the local drive. This command does not require authentication and can be used when Weka is turned off.
List the Weka containers running on the machine this command is executed from
Delete a Weka container from the machine this command is executed from (this removes the data associated with the container, but retains the downloaded software)
Start a Weka container
Stop a Weka container
Restart a Weka container
Show the status of a Weka container
Enable monitoring for the requested containers so they automatically start on machine boot. This does not affect the current running status of the container. In order to change the current status, use the "weka local start/stop" commands. If no container names are specified, this command runs on all containers.
Disable containers by not launching them on machine boot. This does not affect the current running status of the container. In order to change the current status, use the "weka local start/stop" commands. If no container names are specified, this command runs on all containers.
Turn monitoring on/off for the given containers, or all containers if none are specified. When a container is started, it's always monitored. When a container is monitored, it will be restarted if it exits without being stopped through the CLI.
Execute a command inside a new container that has the same mounts as the given container. If no container is specified, either "default" or the only defined container is selected. If no command is specified, opens an interactive shell.
Resets the data directory for a given container, making the host no longer aware of the rest of the cluster
List and control container resources
weka local resources import
Import resources from file
weka local resources export
Export stable resources to file
weka local resources restore
Restore resources from Stable resources
weka local resources apply
Apply changes to resources locally
weka local resources cores
Change the core configuration of the host
weka local resources base-port
Change the port-range used by the container. Weka containers require 100 ports to operate.
weka local resources memory
Dedicate a set amount of RAM to weka
weka local resources dedicate
Set the host as dedicated to weka, meaning it can be rebooted whenever needed and configured by weka for optimal performance and stability
weka local resources bandwidth
Limit weka's bandwidth for the host
weka local resources management-ips
Set the host's management node IPs. Setting 2 IPs will turn this host's networking into highly-available mode
weka local resources join-ips
Set the IPs and ports of all hosts in the cluster. This will enable the host to join the cluster using these IPs.
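The resource subcommands above stage changes that only take effect on apply. A minimal bring-up might look like the following sketch; the IP addresses are hypothetical placeholders and the exact argument forms may vary by version:

```shell
# Hypothetical sketch: stage the cluster join IPs locally, then apply them
weka local resources join-ips 10.0.0.1 10.0.0.2 10.0.0.3
weka local resources apply
```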
weka local resources failure-domain
Set the host failure-domain
weka local resources net
List and control container resources
weka local resources net add
Allocate a dedicated networking device on a host (to the cluster).
weka local resources net remove
Undedicate a networking device in a host.
Container setup commands
weka local setup weka
Setup a local weka container
weka local setup container
Setup a local weka container
Upgrade a Weka Host Container to its cluster version
Mounts a wekafs filesystem. This is the helper utility installed at /sbin/mount.wekafs.
Commands that manage client-groups, permissions and interface-groups
Commands that manage NFS-rules
weka nfs rules add
Commands that add NFS-rules
weka nfs rules add dns
Add a DNS rule to an NFS client group
weka nfs rules add ip
Add an IP rule to an NFS client group
weka nfs rules delete
Commands for deleting NFS-rules
weka nfs rules delete dns
Delete a DNS rule from an NFS client group
weka nfs rules delete ip
Delete an IP rule from an NFS client group
Lists NFS client groups
weka nfs client-group add
Create an NFS client group
weka nfs client-group delete
Delete an NFS client group
List NFS permissions for a filesystem
weka nfs permission add
Allow a client group to access a file system
weka nfs permission update
Edit a file system permission
weka nfs permission delete
Delete a file system permission
List interface groups
weka nfs interface-group assignment
List the currently assigned interface for each floating-IP address in the given interface-group. If is not supplied, assignments for all floating-IP addresses will be listed
weka nfs interface-group add
Create an interface group
weka nfs interface-group update
Update an interface group
weka nfs interface-group delete
Delete an interface group
weka nfs interface-group ip-range
Commands that manage nfs interface-groups' ip-ranges
weka nfs interface-group ip-range add
Add an ip range to an interface group
weka nfs interface-group ip-range delete
Delete an ip range from an interface group
weka nfs interface-group port
Commands that manage nfs interface-groups' ports
weka nfs interface-group port add
Add a server port to an interface group
weka nfs interface-group port delete
Delete a server port from an interface group
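Taken together, a new NFS interface group is typically assembled from these subcommands in sequence. The sketch below uses a hypothetical group name, type, IP range, server, and port identifier, not confirmed syntax:

```shell
# Hypothetical sketch of assembling an NFS interface group
weka nfs interface-group add ig0 NFS
weka nfs interface-group ip-range add ig0 10.0.0.240-10.0.0.250
weka nfs interface-group port add ig0 server-1 eth1
```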
Manage debug level for nfs servers.
weka nfs debug-level show
Get debug level for nfs servers.
weka nfs debug-level set
Set debug level for nfs servers. Return to the default (EVENT) when finished debugging.
NFS Global Configuration
weka nfs global-config set
Set NFS global configuration options
weka nfs global-config show
Show the NFS global configuration
NFS Clients usage information
weka nfs clients show
Show NFS Clients usage information. If no options are given, all NFS Ganesha containers will be selected.
List organizations defined in the Weka cluster
Create a new organization in the Weka cluster
Change an organization name
Set an organization's SSD and/or total quotas
Delete an organization
Security commands.
List the currently configured key management service settings
weka security kms set
Configure the active KMS
weka security kms unset
Remove external KMS configurations. This will fail if there are any encrypted filesystems that rely on the KMS.
weka security kms rewrap
Rewraps all the master filesystem keys using the configured KMS. This can be used to rewrap with a rotated KMS key, or to change wrapping to the newly-configured KMS.
TLS commands.
weka security tls status
Show the Weka cluster TLS status and certificate
weka security tls download
Download the Weka cluster TLS certificate
weka security tls set
Make Nginx use TLS when accessing the UI. If TLS is already set, this command updates the key and certificate.
weka security tls unset
Make Nginx not use TLS when accessing the UI
Commands used to interact with the account lockout config parameters
weka security lockout-config set
Configure the number of failed attempts before lockout and the duration of lock
weka security lockout-config reset
Reset the number of failed attempts before lockout and the duration of lock to their defaults
weka security lockout-config show
Show the current number of failed attempts that triggers a lockout and the lockout duration
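As an illustration, a lockout policy might be configured and inspected as follows; the threshold, duration value, and argument order are assumptions made for the example:

```shell
# Hypothetical: lock accounts after 5 failed attempts, for 10 minutes
weka security lockout-config set 5 10m
weka security lockout-config show
```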
Commands used to view and edit the login banner
weka security login-banner set
Set the login banner
weka security login-banner reset
Resets the login banner back to the default state (empty)
weka security login-banner enable
Enable the login banner
weka security login-banner disable
Disable the login banner
weka security login-banner show
Show the current login banner
Commands handling custom CA signed certificate
weka security ca-cert set
Add a custom certificate to the certificates list. If a custom certificate is already set, this command updates it.
weka security ca-cert status
Show the Weka cluster CA-cert status and certificate
weka security ca-cert download
Download the Weka cluster custom certificate, if such certificate was set
weka security ca-cert unset
Unset the custom CA-signed certificate from the cluster
Commands that manage Weka's SMB container
View info about the SMB cluster managed by weka
weka smb cluster containers
Commands that manage SMB cluster containers
weka smb cluster containers add
Add containers to the SMB cluster
weka smb cluster containers remove
Remove containers from the SMB cluster
weka smb cluster wait
Wait for SMB cluster to become ready
weka smb cluster update
Update an SMB cluster
weka smb cluster create
Create an SMB cluster managed by weka
weka smb cluster debug
Set debug level in an SMB container
weka smb cluster destroy
Destroy the SMB cluster managed by weka. This will not delete the data, just stop exposing it via SMB
weka smb cluster trusted-domains
List all trusted domains
weka smb cluster trusted-domains add
Add a new trusted domain
weka smb cluster trusted-domains remove
Remove a trusted domain
weka smb cluster status
Show which of the containers are ready.
weka smb cluster host-access
Show host access help
weka smb cluster host-access list
Show host access list
weka smb cluster host-access reset
Reset host access lists
weka smb cluster host-access add
Add hosts to host access lists
weka smb cluster host-access remove
Remove hosts from host access lists
List all shares exposed via SMB
weka smb share update
Update an SMB share
weka smb share lists
Show lists help
weka smb share lists show
Show user lists
weka smb share lists reset
Reset a user list
weka smb share lists add
Add users to a user list
weka smb share lists remove
Remove users from a user list
weka smb share add
Add a new share to be exposed by SMB
weka smb share remove
Remove a share exposed by SMB
weka smb share host-access
Show host access help
weka smb share host-access list
Show host access list
weka smb share host-access reset
Reset host access lists
weka smb share host-access add
Add hosts to host access lists
weka smb share host-access remove
Remove hosts from host access lists
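Putting the share commands together, exposing a directory over SMB might look like the following sketch; the share name, filesystem name, positional argument order, and host specification are assumptions for illustration:

```shell
# Hypothetical: expose filesystem "fs1" as SMB share "proj", then restrict hosts
weka smb share add proj fs1
weka smb share host-access add proj 10.0.0.0/24
```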
View info about the domain
weka smb domain join
Join cluster to Active Directory domain
weka smb domain leave
Leave Active Directory domain
List all statistics that conform to the filter criteria
Get performance-related stats, which are updated at a one-second interval.
Show the statistics definition information
Configure retention for statistics
weka stats retention set
Choose how long to keep statistics for
weka stats retention status
Show configured statistics retention
weka stats retention restore-default
Restore default retention for statistics
Get an overall status of the Weka cluster
Show the cluster phasing in/out progress, and protection per fault-level
Unmounts wekafs filesystems. This is the helper utility installed at /sbin/umount.wekafs.
Commands that control the upgrade procedure of Weka
List upgrade features supported by the running cluster
List users defined in the Weka cluster
Logs a user into the Weka cluster. If login is successful, the user credentials are saved to the user homedir.
Logs the current user out of the Weka cluster by removing the user credentials from WEKA_TOKEN if it exists, or otherwise from the user homedir
Get information about currently logged-in user
Set a user's password. If the currently logged-in user is an admin, it can change the password for all other users in the organization.
Change the role of an existing user.
Change parameters of an existing user.
Create a new user in the Weka cluster
Delete user from the Weka cluster
Revoke all existing login tokens of an internal user
Generate an access token for the currently logged-in user for use with the REST API
Show current LDAP configuration used for authenticating users
weka user ldap setup
Setup an LDAP server for user authentication
weka user ldap setup-ad
Setup an Active Directory server for user authentication
weka user ldap update
Edit LDAP server configuration
weka user ldap enable
Enable authentication through the configured LDAP server (has no effect if LDAP server is already enabled)
weka user ldap disable
Disable authentication through the configured LDAP server
weka user ldap reset
Delete all LDAP settings from the cluster
When run without arguments, lists the versions available on this machine. Subcommands allow downloading versions, setting the current version, and other version-management actions.
List the Weka spec versions that are supported by this agent version
Download a Weka version to the machine this command is executed from
Set the current version. Containers must be stopped before setting the current version and the new version must have already been downloaded.
Unset the current version. Containers must be stopped before unsetting the current version.
Prints the current version. If no version is set, a failure exit status is returned.
Delete a version from the machine this command is executed from
Prepare the version for use. This includes things like compiling the version drivers for the local machine.
Commands that manage Weka's S3 container
View info about the S3 cluster managed by weka
weka s3 cluster create
Create an S3 cluster managed by weka
weka s3 cluster update
Update an S3 cluster
weka s3 cluster destroy
Destroy the S3 cluster managed by weka. This will not delete the data, just stop exposing it via S3
weka s3 cluster status
Show which of the containers are ready.
weka s3 cluster audit-webhook
S3 Cluster Audit Webhook Commands
weka s3 cluster audit-webhook enable
Enable the S3 audit webhook on the S3 cluster
weka s3 cluster audit-webhook disable
Disable the Audit Webhook
weka s3 cluster audit-webhook show
Show the S3 Audit Webhook configuration
weka s3 cluster containers
Commands that manage Weka's S3 cluster's containers
weka s3 cluster containers add
Add S3 containers to S3 cluster
weka s3 cluster containers remove
Remove S3 containers from S3 cluster
weka s3 cluster containers list
Lists containers in S3 cluster
S3 Cluster Bucket Commands
weka s3 bucket create
Create an S3 bucket
weka s3 bucket list
Show all the buckets on the S3 cluster
weka s3 bucket destroy
Destroy an S3 bucket
weka s3 bucket lifecycle-rule
S3 Bucket Lifecycle
weka s3 bucket lifecycle-rule add
Add a lifecycle rule to an S3 Bucket
weka s3 bucket lifecycle-rule remove
Remove a lifecycle rule from an S3 bucket
weka s3 bucket lifecycle-rule reset
Reset all lifecycle rules of an S3 bucket
weka s3 bucket lifecycle-rule list
List all lifecycle rules of an S3 bucket
weka s3 bucket policy
S3 bucket policy commands
weka s3 bucket policy get
Get S3 policy for bucket
weka s3 bucket policy set
Set an existing S3 policy for a bucket. Available predefined options: none|download|upload|public
weka s3 bucket policy unset
Unset the configured S3 policy for bucket
weka s3 bucket policy get-json
Get S3 policy for bucket in JSON format
weka s3 bucket policy set-custom
Set a custom S3 policy for bucket
weka s3 bucket quota
S3 Bucket Quota, configure the hard limit of bucket disk usage
weka s3 bucket quota set
Set the hard limit of bucket's disk usage
weka s3 bucket quota unset
Remove the hard limit on bucket's disk usage
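For instance, a hard cap on a bucket's disk usage could be set and later removed as in this sketch; the bucket name and capacity format are assumptions:

```shell
# Hypothetical: cap bucket "logs" at 100GB, then lift the cap
weka s3 bucket quota set logs 100GB
weka s3 bucket quota unset logs
```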
S3 policy commands
weka s3 policy list
Print a list of the existing S3 IAM policies
weka s3 policy show
Show the details of an S3 IAM policy
weka s3 policy add
Add an S3 IAM policy
weka s3 policy remove
Remove an S3 IAM policy
weka s3 policy attach
Attach an S3 policy to a user
weka s3 policy detach
Detach an S3 policy from a user
S3 service account commands. Should be run only with an S3 user role
weka s3 service-account list
Print a list of the user's S3 service accounts
weka s3 service-account show
Show the details of an S3 service account
weka s3 service-account add
Add an S3 service account
weka s3 service-account remove
Remove an S3 service account
S3 security token commands
weka s3 sts assume-role
Generate a temporary security token with an assumed role using existing user credentials
S3 log-level Commands
weka s3 log-level get
Show current S3 log level on container
weka [--help] [--build] [--version] [--legal]
-o, --output...
Specify which columns to output. May include any of the following: muted,type,count,title,description,action
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--muted
List muted alerts alongside the unmuted ones
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: type,title,action
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-o, --output...
Specify which columns to output. May include any of the following: host,health
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-J, --json
Format output as JSON
-u, --unset
Remove the HTTP proxy setting
--proxy
HTTP(S) proxy to connect to the cloud through
--bytes-per-second
Maximum uploaded bytes per second
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
--host-ips...
Management IP addresses; If empty, the hostnames will be resolved; If hosts are highly-available or mixed-networking, use IP set '++...+';
-h, --help
Show help message
-J, --json
Format output as JSON
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-f, --format
Specify in what format to output the result. Available options are: view
--container...
Only return the processes of these container IDs; if not specified, the weka-processes for all the containers will be returned
-o, --output...
Specify which columns to output. May include any of the following: uid,id,containerId,slot,hostname,container,ips,status,software,release,role,mode,netmode,cpuId,core,socket,numa,cpuModel,memory,uptime,fdName,fdId,traceHistory,fencingReason,joinRejectReason,failureText,failure,failureTime,failureCode
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-b, --backends
Only return backend containers
-c, --clients
Only return client containers
-l, --leadership
Only return containers that are part of the cluster leadership
-L, --leader
Only return the cluster leader
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: id,leader,term,lastActiveTerm,state,council,uptime,leaderVersionSig,electableMode,sourceMembers,nonSourceMembers,fillLevel
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-o, --output...
Specify which columns to output. May include any of the following: uid,fd,active_drives,failed_drives,total_drives,removed_drives,containers,total_containers,drive_proces,total_drive_proces,compute_proces,total_compute_proces,capacity
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--show-removed
Show drives that were removed from the cluster
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--skip-resource-validation
Skip verifying that the cluster has enough RAM and SSD resources allocated for the hot-spare
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--keep-external-containers
Keep external containers (S3, SMB, NFS) running
-f, --force
Force this action without further confirmation. This action will disrupt operation of all connected clients. To restore IO service run 'weka cluster start-io'.
-h, --help
Show help message
-f, --format
Specify in what format to output the result. Available options are: view
--container...
Only return the drives of these container IDs; if not specified, all drives are listed
-o, --output...
Specify which columns to output. May include any of the following: uid,id,uuid,host,hostname,node,path,size,status,stime,fdName,fdId,writable,used,nvkvused,attachment,vendor,firmware,serial,model,added,removed,block,remain,threshold,drive_status_message
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--show-removed
Show drives that were removed from the cluster
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--skip-resource-validation
Skip verifying that the configured hot spare capacity will remain available after deactivating the drives
-f, --force
Force this action without further confirmation. This action may impact performance while the drive is phasing out.
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: path,uuid
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--force
Force formatting the drive for weka, avoiding all safety checks!
--allow-format-non-wekafs-drives
Allow reuse of drives formatted by other versions
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --force
Force this action without further confirmation. To undo the removal, add the drive back and re-scan the drives on the host local to the drive.
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
--qos-preferred-throughput
qos-preferred-throughput is the throughput that gets preferred state (NORMAL instead of LOW) in QoS.
--qos-max-ops
qos-max-ops is the maximum number of operations of any kind for the client
-h, --help
Show help message
--role...
Only list machines with specified roles. Possible roles: (format: 'backend', 'client', 'nfs', 'smb' or 's3')
-o, --output...
Specify which columns to output. May include any of the following: hostname,uid,ip,roles,status,up_since,cores,memory,drives,nodes,load,versions
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-J, --json
Format output as JSON
-h, --help
Show help message
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,id,hostname,container,machineIdentifier,ips,status,software,release,mode,fd,fdName,fdType,fdId,cores,feCores,driveCores,coreIds,memory,bw,scrubberLimit,dedicated,autoRemove,leadership,failureText,failure,failureTime,failureCode,uptime,added,cloudProvider,availabilityZone,instanceType,instanceId,kernelName,kernelRelease,kernelVersion,platform
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-b, --backends
Only return backend containers
-c, --clients
Only return client containers
-l, --leadership
Only return containers that are part of the cluster leadership
-L, --leader
Only return the cluster leader
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--info-type...
Specify what information to query: version
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--profile
Name of the connection and authentication profile to use
--auto
Set this container to be a failure-domain of its own
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--cores-ids...
Specify the ids of weka dedicated cores.
--no-frontends
Do not create any processes with a frontend role
--only-drives-cores
Create only processes with a drives role
--only-compute-cores
Create only processes with a compute role
--only-frontend-cores
Create only processes with a frontend role
--allow-mix-setting
Allow specified cores-ids even if there are running containers with AUTO cores-ids allocation on the same server.
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--stable
List the resources from the last successful container boot
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--all
Apply resources on all the containers in the cluster. This will cause all backend containers in the entire cluster to restart simultaneously!
-h, --help
Show help message
--skip-resource-validation
Skip verifying that the cluster will still have enough RAM and SSD resources after deactivating the containers
--all
Apply resources on all the containers in the cluster. This will cause all backend containers in the entire cluster to restart simultaneously!
-f, --force
Force this action without further confirmation. This action will restart the containers and cannot be undone.
-h, --help
Show help message
--no-wait
--skip-resource-validation
Skip verifying that the cluster will still have enough RAM and SSD resources after deactivating the containers
--skip-activate-drives
Do not activate the drives of the container
-h, --help
Show help message
--no-wait
--skip-resource-validation
Skip verifying that the cluster will still have enough RAM and SSD resources after deactivating the containers
--allow-unavailable
Allow the container to be unavailable while it is deactivated which skips setting its local resources
-h, --help
Show help message
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
--no-wait
Skip waiting for the container to be added to the cluster
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-wait
Don't wait for the container removal to complete, return immediately
--no-unimprint
Don't remotely unimprint the container, just remove it from the cluster configuration
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
--force
When set, perform a brute-force reset
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,name,id,host,hostname,device,ips,netmask,gateway,cores,owner,vlan,netlabel
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
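The `--sort` and `--filter` specs above follow a simple convention: an optional leading '+' or '-' on a sort column selects ascending or descending order, and filters are comma-separated column=value pairs. This is an illustrative sketch (not WEKA code) of how such specs can be interpreted:

```python
# Illustrative sketch (not WEKA code): interpret --sort and --filter
# specs as described in the help text above.
def parse_sort(spec: str) -> list[tuple[str, bool]]:
    """Return (column, descending) pairs; a '-' prefix means descending."""
    keys = []
    for col in spec.split(","):
        descending = col.startswith("-")
        keys.append((col.lstrip("+-"), descending))
    return keys

def parse_filter(spec: str) -> dict[str, str]:
    """Parse column1=val1[,column2=val2[,..]] into a dict."""
    return dict(pair.split("=", 1) for pair in spec.split(","))
```

For example, `parse_sort("+name,-id")` sorts ascending by name, then descending by id, and `parse_filter("hostname=node1,vlan=10")` restricts output to matching rows.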
--label
The name of the switch or network group to which this network device is attached
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
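The timeout format above accepts a sequence of unit-suffixed numbers (3s, 2h, 4m, 1d, 1d5h, 1w) or the words infinite/unlimited. A minimal sketch of parsing it, not WEKA code, with `math.inf` standing in for infinite:

```python
import math
import re

# Illustrative sketch (not WEKA code): parse the CLI's duration format
# (3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited) into seconds.
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_duration(text: str) -> float:
    if text in ("infinite", "unlimited"):
        return math.inf
    parts = re.findall(r"(\d+)([smhdw])", text)
    # Reject strings with leftover characters, e.g. "5x" or "d5".
    if "".join(n + u for n, u in parts) != text:
        raise ValueError(f"bad duration: {text!r}")
    return sum(int(n) * _UNITS[u] for n, u in parts)
```

For example, `parse_duration("1d5h")` is 104400 seconds (24 hours plus 5 hours).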
--profile
Name of the connection and authentication profile to use
--ips...
IPs to be allocated to cores using the device. If not given - IPs may be set automatically according to the interface's IPs, or taken from the default networking IPs pool (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
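In the IP range notation above, the part after '-' may be a full address or just the trailing octet(s), which replace the corresponding octets of the start address. An illustrative sketch (not WEKA code) of expanding that shorthand:

```python
import ipaddress

# Illustrative sketch (not WEKA code): expand the shorthand IP range
# notation A.B.C.D-E.F.G.H / A.B.C.D-G.H / A.B.C.D-H described above.
def expand_ip_range(spec: str) -> list[str]:
    start_text, end_text = spec.split("-")
    start_octets = start_text.split(".")
    end_octets = end_text.split(".")
    # A short end such as "H" or "G.H" reuses the leading octets of the start.
    end_octets = start_octets[: 4 - len(end_octets)] + end_octets
    start = int(ipaddress.IPv4Address(".".join(start_octets)))
    end = int(ipaddress.IPv4Address(".".join(end_octets)))
    return [str(ipaddress.IPv4Address(n)) for n in range(start, end + 1)]
```

For example, `expand_ip_range("10.0.0.1-3")` yields the three addresses 10.0.0.1 through 10.0.0.3.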
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: uid,id,type,state,phase,progress,paused,desc,time
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-h, --help
Show help message
-h, --help
Show help message
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-h, --help
Show help message
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--container-id...
Container IDs to collect diags from, can be used multiple times. This flag causes --clients to be ignored.
--clients
Collect diags from client hosts only (by default diags are only collected from backends)
--backends
Collect diags from backend hosts (to be used in combination with --clients to collect from all hosts)
-t, --tar
Create a TAR of all collected diags
-v, --verbose
Print results of all diags, including successful ones
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-J, --json
Format output as JSON
--all
Delete all.
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--container-id...
Container IDs to collect diags from, can be used multiple times. This flag causes --clients to be ignored.
--clients
Collect diags from client hosts only (by default diags are only collected from backends)
--backends
Collect diags from backend hosts (to be used in combination with --clients to collect from all hosts)
-h, --help
Show help message
-J, --json
Format output as JSON
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-t, --type-list...
Filter events by type, can be used multiple times (use 'weka events list-types' to see available types)
-x, --exclude-type-list...
Exclude events by type, can be used multiple times (use 'weka events list-types' to see available types)
-c, --category-list...
Include only events that match the category list. Category can be Events, Node, Raid, Drive, ObjectStorage, System, Resources, Clustering, Network, Filesystem, Upgrade, NFS, Config, Cloud, InterfaceGroup, Org, User, Alerts, Licensing, Custom, Kms, Smb, Traces, S3, Security, Agent or KDriver
-o, --output...
Specify which columns to output. May include any of the following: time,cloudTime,node,category,severity,type,entity,desc
-i, --show-internal
Show internal events
-l, --cloud-time
Sort by cloud time instead of local timestamp
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: time,category,severity,permission,type,entity,node,hash
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--stem-mode
List stem mode events
--show-internal
Show internal events
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-c, --category...
List only the events that fall under one of the following categories: Events, Node, Raid, Drive, ObjectStorage, System, Resources, Clustering, Network, Filesystem, Upgrade, NFS, Config, Cloud, InterfaceGroup, Org, User, Alerts, Licensing, Custom, Kms, Smb, Traces, S3, Security, Agent or KDriver
-t, --type...
List only events of the specified types
-o, --output...
Specify which columns to output. May include any of the following: type,category,severity,description,format,permission,parameters,dedup,dedupParams
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--show-internal
Show internal events
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-h, --help
Show help message
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,id,name,group,usedSSD,usedSSDD,usedSSDM,freeSSD,availableSSDM,availableSSD,usedTotal,usedTotalD,freeTotal,availableTotal,maxFiles,status,encrypted,stores,auth,thinProvisioned,thinProvisioningMinSSDBudget,thinProvisioningMaxSSDBudget,usedSSDWD,usedSSDRD,reductionRatio,pendingReduction,dataReduction,reducedProcessedSize,reducedSize
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--capacities
Display all capacity columns
--force-fresh
Refresh the capacities to ensure they are up to date
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--thin-provision-max-ssd
Thin provisioned maximum SSD capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
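The capacity format above accepts both decimal units (1KB = 1000 bytes) and binary units (1KiB = 1024 bytes). A minimal sketch of parsing it, not WEKA code:

```python
import re

# Illustrative sketch (not WEKA code): parse the CLI's capacity format,
# accepting decimal (KB, MB, ...) and binary (KiB, MiB, ...) units.
_FACTORS = {
    "B": 1,
    "KB": 10**3, "MB": 10**6, "GB": 10**9,
    "TB": 10**12, "PB": 10**15, "EB": 10**18,
    "KiB": 2**10, "MiB": 2**20, "GiB": 2**30,
    "TiB": 2**40, "PiB": 2**50, "EiB": 2**60,
}

def parse_capacity(text: str) -> int:
    match = re.fullmatch(r"(\d+)([KMGTPE]?i?B)", text)
    if not match or match.group(2) not in _FACTORS:
        raise ValueError(f"bad capacity: {text!r}")
    number, unit = match.groups()
    return int(number) * _FACTORS[unit]
```

Note the difference: `parse_capacity("1KB")` is 1000 bytes while `parse_capacity("1KiB")` is 1024 bytes.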
--auth-required
Require the mounting user to be authenticated for mounting this filesystem. This flag is only effective in the root organization; users in non-root organizations must be authenticated to perform a mount operation. (format: 'yes' or 'no')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--encrypted
Creates an encrypted filesystem
--data-reduction
Enable data reduction
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--auth-required
Require the mounting user to be authenticated for mounting this filesystem. This flag is only effective in the root organization; users in non-root organizations must be authenticated to perform a mount operation. (format: 'yes' or 'no')
--additional-obs-bucket
Additional Object Store bucket
--snapshot-name
Downloaded snapshot name (default: uploaded name)
--access-point
Downloaded snapshot access point (default: uploaded access-point)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--skip-resource-validation
Skip verifying that the cluster has enough RAM and SSD resources allocated for the downloaded filesystem
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--data-reduction
Enable data reduction
--auth-required
Require the mounting user to be authenticated for mounting this filesystem. This flag is only effective in the root organization; users in non-root organizations must be authenticated to perform a mount operation. (format: 'yes' or 'no')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--purge-from-obs
Delete filesystem's objects from the local writable Object Store, making all locally uploaded snapshots unusable
-f, --force
Force this action without further confirmation. This action DELETES ALL DATA in the filesystem and cannot be undone.
-h, --help
Show help message
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action replaces all data in the filesystem with the content of the snapshot and cannot be undone.
-h, --help
Show help message
-J, --json
Format output as JSON
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: quotaId,path,used,dblk,mblk,soft,hard,usage,owner,grace_seconds,time_over_soft_limit,status
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--all
Show all (not only exceeding) quotas
-q, --quick
Skip resolving inodes to paths
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: inodeId,path,soft,hard,owner,grace
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-o, --output...
Specify which columns to output. May include any of the following: uid,group,name,retention,demote
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,id,filesystem,name,access,writeable,created,local_upload_size,remote_upload_size,local_object_status,local_object_progress,local_object_locator,remote_object_status,remote_object_progress,remote_object_locator,removing,prefetched
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: id,name,access,writeable,created
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--is-writable
Writable
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--allow-non-chronological
Allow uploading snapshots to remote object-store in non-chronological order. This is not recommended, as it will incur high data overhead.
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--allow-non-chronological
Allow downloading snapshots in non-chronological order. This is not recommended, as it will incur high data overhead.
--allow-divergence
Allow downloading snapshots which are not descendants of the last downloaded snapshot.
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action deletes all data stored by the snapshot and cannot be undone.
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: obsBucket,statusUpload,statusDownload,statusRemove,nodesDown,errors
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: path,type,size,ssdWrite,ssdRead,obsBytes,remoteBytes
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--profile
Name of the connection and authentication profile to use
-v, --verbose
Verbose output, showing fetch requests as they are submitted
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--profile
Name of the connection and authentication profile to use
-v, --verbose
Verbose output, showing release requests as they are submitted
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: fsUid,fsName,bucketUid,bucketName,totalConsumedCapacity,UsedCapacity,reclaimable,reclaimableThreshold,reclaimableLowThreshold,reclaimableHighThreshold
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--force-fresh
Refresh the capacities to ensure they are up to date
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,obsId,obsName,id,name,site,statusUpload,statusDownload,statusRemove,nodesUp,nodesDown,nodesUnknown,errors,protocol,hostname,port,bucket,auth,region,access,secret,status,up,downloadBandwidth,uploadBandwidth,removeBandwidth,errorsTimeout,prefetch,downloads,uploads,removals,maxUploadExtents,maxUploadSize,enableUploadTags,stsOperationType,stsRoleArn,stsRoleSessionName,stsDuration
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
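The listing options above (-o, -s, -F, --raw-units, --no-header) are designed to combine. A hedged sketch, assuming these flags belong to an object-store listing command such as `weka fs tier s3 list` (the command name does not appear in this reference, and the column values are illustrative):

```shell
# Illustrative only: print selected columns, sorted by name in descending
# order, filtered to one site, with raw byte values and no header row
# (a convenient shape for piping into scripts).
weka fs tier s3 list -o name,hostname,bucket,status \
    -s -name -F site=local --raw-units --no-header
```
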
--auth-method
Authentication method. S3AuthMethod can be None, AWSSignature2 or AWSSignature4
--region
Name of the region we are assigned to work with (usually empty)
--access-key-id
Access Key ID for AWS Signature authentications
--secret-key
Secret Key for AWS Signature authentications
--protocol
One of: HTTP (default), HTTPS, HTTPS_UNVERIFIED
--obs-type
One of: AWS (default), AZURE
--bandwidth
Bandwidth limitation per core (Mbps) (format: 1..4294967295)
--download-bandwidth
Download bandwidth limitation per core (Mbps) (format: 1..4294967295)
--upload-bandwidth
Upload bandwidth limitation per core (Mbps) (format: 1..4294967295)
--remove-bandwidth
Remove bandwidth limitation per core (Mbps) (format: 1..4294967295)
--errors-timeout
If the Object Store bucket link is down for longer than this, all IOs that need data return with an error (format: duration between 1 minute and 15 minutes)
--prefetch-mib
How many MiB of data to prefetch when reading a whole MiB on object store (format: 0..600)
--max-concurrent-downloads
Maximum number of downloads we concurrently perform on this object store in a single IO node (format: 1..64)
--max-concurrent-uploads
Maximum number of uploads we concurrently perform on this object store in a single IO node (format: 1..64)
--max-concurrent-removals
Maximum number of removals we concurrently perform on this object store in a single IO node (format: 1..64)
--max-extents-in-data-blob
Maximum number of extents' data to upload to an object store data blob
--max-data-blob-size
Maximum size to upload to an object store data blob (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--enable-upload-tags
Enable tagging of uploaded objects
--sts-operation-type
AWS STS operation type to use. Default: none (format: 'assume_role' or 'none')
--sts-role-arn
The Amazon Resource Name (ARN) of the role to assume. Mandatory when setting sts-operation to ASSUME_ROLE
--sts-role-session-name
An identifier for the assumed role session. Length constraints: Minimum length of 2, maximum length of 64.
Allowed characters: upper- and lower-case alphanumeric characters with no spaces.
--bucket
Name of the bucket we are assigned to work with
--auth-method
Authentication method. S3AuthMethod can be None, AWSSignature2 or AWSSignature4
--region
Name of the region we are assigned to work with (usually empty)
--access-key-id
Access Key ID for AWS Signature authentications
--secret-key
Secret Key for AWS Signature authentications
--bandwidth
Bandwidth limitation per core (Mbps) (format: 1..4294967295)
--download-bandwidth
Download bandwidth limitation per core (Mbps) (format: 1..4294967295)
--upload-bandwidth
Upload bandwidth limitation per core (Mbps) (format: 1..4294967295)
--remove-bandwidth
Remove bandwidth limitation per core (Mbps) (format: 1..4294967295)
--prefetch-mib
How many MiB of data to prefetch when reading a whole MiB on object store (format: 0..600)
--errors-timeout
If the Object Store bucket link is down for longer than this, all IOs that need data return with an error (format: duration between 1 minute and 15 minutes)
--max-concurrent-downloads
Maximum number of downloads we concurrently perform on this object store in a single IO node (format: 1..64)
--max-concurrent-uploads
Maximum number of uploads we concurrently perform on this object store in a single IO node (format: 1..64)
--max-concurrent-removals
Maximum number of removals we concurrently perform on this object store in a single IO node (format: 1..64)
--max-extents-in-data-blob
Maximum number of extents' data to upload to an object store data blob
--max-data-blob-size
Maximum size to upload to an object store data blob (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--enable-upload-tags
Enable tagging of uploaded objects
--sts-operation-type
AWS STS operation type to use. Default: none (format: 'assume_role' or 'none')
--sts-role-arn
The Amazon Resource Name (ARN) of the role to assume. Mandatory when setting sts-operation to ASSUME_ROLE
--sts-role-session-name
An identifier for the assumed role session. Length constraints: Minimum length of 2, maximum length of 64.
Allowed characters: upper- and lower-case alphanumeric characters with no spaces.
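Taken together, the authentication, bandwidth, and STS options above sketch how an object store configuration might be updated to use AWS assume-role credentials. A hedged example — the `weka fs tier s3 update` command name, the object-store name, and every value below are illustrative assumptions, not taken from this reference:

```shell
# Illustrative only: switch an object store to Signature V4 auth with
# STS assume-role. Replace all placeholder values with real ones.
weka fs tier s3 update my-obs \
    --bucket my-weka-tier \
    --auth-method AWSSignature4 \
    --access-key-id "$AWS_ACCESS_KEY_ID" \
    --secret-key "$AWS_SECRET_ACCESS_KEY" \
    --sts-operation-type assume_role \
    --sts-role-arn arn:aws:iam::123456789012:role/weka-obs-access \
    --sts-role-session-name wekaObsSession
```

Note that per the option descriptions, --sts-role-arn is mandatory once --sts-operation-type is set to assume_role, and the session name must be 2-64 alphanumeric characters.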
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This process might take a while to complete and it cannot be aborted. The data will remain intact on the object store, and you can still use the uploaded snapshots for recovery.
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: guid,fsId,snapId,origFsId,fsName,snapName,accessPoint,totalMetaData,totalSize,ssdCapacity,totalCapacity,maxFiles,numGuids,compatibleVersion
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: node,obsBucket,key,type,execution,phase,previous,start,size,results,errors,lastHTTP,concurrency,inode
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,id,name,site,bucketsCount,uploadBucketsUp,downloadBucketsUp,removeBucketsUp,protocol,hostname,port,auth,region,access,secret,downloadBandwidth,uploadBandwidth,removeBandwidth,downloads,uploads,removals,maxUploadExtents,maxUploadSize,enableUploadTags,maxUploadRam,stsOperationType,stsRoleArn,stsRoleSessionName,stsDuration
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--region
Name of the region we are assigned to work with (usually empty)
--access-key-id
Access Key ID for AWS Signature authentications
--secret-key
Secret Key for AWS Signature authentications
--bandwidth
Bandwidth limitation per core (Mbps) (format: 1..4294967295)
--download-bandwidth
Download bandwidth limitation per core (Mbps) (format: 1..4294967295)
--upload-bandwidth
Upload bandwidth limitation per core (Mbps) (format: 1..4294967295)
--remove-bandwidth
Remove bandwidth limitation per core (Mbps) (format: 1..4294967295)
--max-concurrent-downloads
Maximum number of downloads we concurrently perform on this object store in a single IO node (format: 1..64)
--max-concurrent-uploads
Maximum number of uploads we concurrently perform on this object store in a single IO node (format: 1..64)
--max-concurrent-removals
Maximum number of removals we concurrently perform on this object store in a single IO node (format: 1..64)
--max-extents-in-data-blob
Maximum number of extents' data to upload to an object store data blob
--max-data-blob-size
Maximum size to upload to an object store data blob (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--upload-memory-limit
Maximum RAM to allocate for concurrent uploads to this object store (per node) (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--enable-upload-tags
Enable tagging of uploaded objects
--sts-operation-type
AWS STS operation type to use. Default: none (format: 'assume_role' or 'none')
--sts-role-arn
The Amazon Resource Name (ARN) of the role to assume. Mandatory when setting sts-operation to ASSUME_ROLE
--sts-role-session-name
An identifier for the assumed role session. Length constraints: Minimum length of 2, maximum length of 64.
Allowed characters: upper- and lower-case alphanumeric characters with no spaces.
-o, --output...
Specify which columns to output. May include any of the following: id,name,ssdReserve
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,name,mask,gateway,type,status,ips,ports,allowManageGids
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: ip,host,port,group
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: name,mask,gateway,type,status,ips,ports,allowManageGids
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected clients and can be undone by re-creating the interface group.
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected clients and can be undone by re-creating the IP range.
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected clients and can be undone by re-adding the port.
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-s, --collect-cluster-info
Collect cluster-related information. Warning: Use this flag on one host at a time to avoid straining the cluster.
-t, --tar
Create a TAR of all collected diags
-v, --verbose
Print results of all diags, including successful ones
-h, --help
Show help message
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--no-frontends
Don't allocate frontend nodes
--only-drives-cores
Create only nodes with a drives role
--only-compute-cores
Create only nodes with a compute role
--only-frontend-cores
Create only nodes with a frontend role
--allow-mix-setting
Allow specified cores-ids even if there are running containers with AUTO cores-ids allocation on the same server.
-h, --help
Show help message
--vfs
The number of VFs to preallocate (default is all supported by NIC)
--ips...
IPs to be allocated to cores using the device. If not given, IPs may be set automatically according to the interface's IPs, or taken from the default networking IPs pool (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
-h, --help
Show help message
--bandwidth
bandwidth limitation per second (format: either "unlimited" or bandwidth per second in binary or decimal values: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--failure-domain
Add this container to a named failure-domain. A failure-domain will be created if it doesn't exist yet. If not specified, an automatic failure domain will be assigned.
-t, --timeout
Join command timeout in seconds (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--container-id
Specify the container ID for the container to use when joining
--base-port
The first port that will be used by the Weka container, out of a total of 100 ports.
--resources-path
Import the container's resources from a file (additional command-line flags specified will override the resources in the file)
--weka-version
Start the container using the specified Weka version
-cores-ids, --core-ids...
Specify the ids of weka dedicated cores
--management-ips...
New IPs for the management nodes
--join-ips...
New IP:port pairs for the management processes. If no port is used the command will use the default Weka port
--net...
Network specification - /[ip]/[bits]/[gateway]. Or: 'udp' to enforce UDP and avoid attempting auto-deduction
--disable
Create the container in a disabled state
--no-start
Do not start the container after its creation
--no-frontends
Don't allocate frontend nodes
--only-drives-cores
Create only nodes with a drives role
--only-compute-cores
Create only nodes with a compute role
--only-frontend-cores
Create only nodes with a frontend role
--allow-mix-setting
Allow specified cores-ids even if there are running containers with AUTO cores-ids allocation on the same server.
--dedicate
Set the host as weka dedicated
--force
Create a new container even if a container with the same name exists, disregarding all safety checks!
--ignore-used-ports
Allow container to start even if the required ports are used by other processes
-h, --help
Show help message
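The container-creation flags above can be combined into a single invocation. A minimal hedged sketch, assuming they belong to a command such as `weka local setup container` (the command name and the --name flag are assumptions, and all names, core IDs, IPs, and ports below are invented placeholders):

```shell
# Illustrative only: create a compute-only container that joins an
# existing cluster but is not started immediately.
weka local setup container --name compute0 \
    --only-compute-cores --core-ids 2,3 \
    --base-port 14100 \
    --join-ips 10.0.0.11:14000,10.0.0.12:14000 \
    --failure-domain fd-rack1 \
    --no-start
```

Per the --base-port description, the container would then occupy 100 ports starting at 14100, so base ports of co-located containers should be spaced at least 100 apart.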
--allow-not-ready
Allow starting local upgrade while the container is not fully up
--dont-upgrade-agent
Don't upgrade the weka agent
--upgrade-dependents
Upgrade dependent containers
--all
Upgrade all containers
-h, --help
Show help message
-f, --fake
Causes everything to be done except for the actual system call
-v, --verbose
Verbose mode
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,name,rules
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: name,rules
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected NFS clients and can be undone by re-creating the client group.
-h, --help
Show help message
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,filesystem,group,path,type,squash,auid,agid,obsdirect,manageGids,options,customOptions,privilegedPort,priority,supportedVersions
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--anon-uid
Anonymous UID to be used instead of root when root squashing is enabled
--anon-gid
Anonymous GID to be used instead of root when root squashing is enabled
--obs-direct
Obs direct (format: 'on' or 'off')
--manage-gids
The list of group IDs received from the client will be replaced by a list of group IDs determined by an appropriate lookup on the server. NOTE: this only works with an interface group that allows manage-gids (format: 'on' or 'off')
--privileged-port
Privileged port (format: 'on' or 'off')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--supported-versions...
A comma-separated list of supported NFS versions (format: 'v3' or 'v4')
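As a hedged usage sketch of the permission options above — assuming they belong to a command shaped like `weka nfs permission add <filesystem> <client-group>`; the command name, positional arguments, and all values are assumptions:

```shell
# Illustrative only: export filesystem fs01 to client group cg-hpc over
# NFSv3 and NFSv4, mapping squashed users to UID/GID 65534 and resolving
# supplementary groups on the server side.
weka nfs permission add fs01 cg-hpc \
    --anon-uid 65534 --anon-gid 65534 \
    --manage-gids on --privileged-port on \
    --supported-versions v3,v4
```

Per the --manage-gids description, this form only takes effect when the serving interface group also allows manage-gids.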
-f, --force
Force this action without further confirmation. This action will affect all NFS users of this permission/export. Use it with caution and consult the Weka Customer Success team if needed.
-h, --help
Show help message
--anon-uid
Anonymous UID to be used instead of root when root squashing is enabled
--anon-gid
Anonymous GID to be used instead of root when root squashing is enabled
--obs-direct
Obs direct (format: 'on' or 'off')
--manage-gids
The list of group IDs received from the client will be replaced by a list of group IDs determined by an appropriate lookup on the server. NOTE: this only works with an interface group that allows manage-gids (format: 'on' or 'off')
--custom-options
Custom export options
--privileged-port
Privileged port (format: 'on' or 'off')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--supported-versions...
A comma-separated list of supported NFS versions (format: 'v3' or 'v4')
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected NFS clients and can be undone by re-creating the filesystem permission.
-h, --help
Show help message
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,name,mask,gateway,type,status,ips,ports,allowManageGids
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: ip,host,port,group
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: name,mask,gateway,type,status,ips,ports,allowManageGids
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected NFS clients and can be undone by re-creating the interface group.
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected NFS clients and can be undone by re-creating the IP range.
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected NFS clients and can be undone by re-adding the port.
-h, --help
Show help message
--nfs-hosts...
Only return these host IDs (pass Weka's host ID as a number). Defaults to all hosts
-o, --output...
Specify which columns to output. May include any of the following: host,debugLevel
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--nfs-hosts...
Hosts on which to set the debug level (pass Weka's host ID as a number). Defaults to all hosts
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
--default-supported-versions...
A comma-separated list of the default supported NFS versions for new permissions (format: 'v3' or 'v4')
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This may cause a temporary disruption in the NFS service.
-J, --json
Format output as JSON
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: hostid,client_ip,idle_time,num_v3_ops,num_v4_ops,num_v4_open_ops,num_v4_close_ops
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-o, --output...
Specify which columns to output. May include any of the following: uid,id,name,allocSSD,quotaSSD,allocTotal,quotaTotal
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-f, --force
Force this action without further confirmation. This action will DELETE ALL DATA stored in this organization's filesystems and cannot be undone.
-h, --help
Show help message
-J, --json
Format output as JSON
-J, --json
Format output as JSON
--client-key
auth: Path to the client key PEM file
--ca-cert
auth: Path to the CA certificate PEM file
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-J, --json
Format output as JSON
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients.
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients.
-h, --help
Show help message
--smb-ips-pool...
IPs used as floating IPs for SMB to serve in an HA manner. They should not be assigned to any host on the network
--smb-ips-range...
IPs used as floating IPs for SMB to serve in an HA manner. They should not be assigned to any host on the network (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
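In the shorthand range formats above, the right-hand side may omit leading octets, which are then taken from the left-hand address (so `10.0.0.1-5` spans 10.0.0.1 through 10.0.0.5). A hypothetical sketch of expanding such a range, under that reading of the format:

```python
import ipaddress

# Expand "A.B.C.D-E.F.G.H" / "A.B.C.D-H" style ranges into individual IPs.
def expand_ip_range(spec):
    start_s, end_s = spec.split("-")
    start_octets = start_s.split(".")
    end_octets = end_s.split(".")
    # Fill in the omitted leading octets of the end from the start address.
    end_octets = start_octets[: 4 - len(end_octets)] + end_octets
    start = int(ipaddress.IPv4Address(start_s))
    end = int(ipaddress.IPv4Address(".".join(end_octets)))
    return [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]

# expand_ip_range("10.0.0.1-3") -> ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
```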
-h, --help
Show help message
--default-domain-mapping-to-id
The SMB default domain last id
--joined-domain-mapping-from-id
The joined domain first id
--joined-domain-mapping-to-id
The joined domain last id
--encryption
Encryption (format: 'enabled', 'disabled', 'desired' or 'required')
--smb-conf-extra
Extra smb configuration options
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--container-ids...
The containers that will serve via the SMB protocol (pass WEKA's container ID as a number)
--smb-ips-pool...
IPs used as floating IPs for Samba to serve SMB in an HA manner. They should not be assigned to any container on the network
--smb-ips-range...
IPs used as floating IPs for Samba to serve SMB in an HA manner. They should not be assigned to any container on the network (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
--smb
SMB Legacy cluster type
-h, --help
Show help message
--container-ids...
Hosts to set the debug level (pass WEKA's host ID as a number). All hosts by default
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: id,domain,idmap,from,to
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients and modify existing uids/gids.
-h, --help
Show help message
-J, --json
Format output as JSON
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients and modify existing uids/gids.
-h, --help
Show help message
-J, --json
Format output as JSON
-o, --output...
Specify which columns to output. May include any of the following: id,mode,hostname
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --force
Force this action without further confirmation. This action will delete all host access ips.
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
--ips...
IPs to add
--hosts...
Hosts to add
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients.
-h, --help
Show help message
-J, --json
Format output as JSON
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients.
-h, --help
Show help message
-J, --json
Format output as JSON
-o, --output...
Specify which columns to output. May include any of the following: id,share,filesystem,description,path,fmask,dmask,acls,options,additional,direct,encryption,validUsers,invalidUsers,readonlyUsers,readwriteUsers,readonlyShare,allowGuestAccess,hidden
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--encryption
Encryption (format: 'cluster_default', 'desired' or 'required')
--read-only
Mount as read-only (format: 'on' or 'off')
--allow-guest-access
Allow Guest Access (format: 'on' or 'off')
--hidden
Hidden (format: 'on' or 'off')
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: uid,id,share,readonly,validusers,invalidusers,readonlyusers,readwriteusers
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
--users...
Users to add
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
--users...
Users to remove
-h, --help
Show help message
-J, --json
Format output as JSON
-o, --mount-option
Option to pass to the mount command when mounting weka. NOTE - This parameter is DANGEROUS, use with caution. Incorrect usage may lead to DATA LOSS.
--acl
Enable Windows ACLs on the share. Will also be translated (as possible) to POSIX ACLs. (format: 'on' or 'off')
--obs-direct
Mount share in obs-direct mode (format: 'on' or 'off')
--encryption
Encryption (format: 'cluster_default', 'desired' or 'required')
--read-only
Mount share as read-only (format: 'on' or 'off')
--user-list-type
The list type to which users are added (format: 'read_only', 'read_write', 'valid' or 'invalid')
--allow-guest-access
Allow guests to access the share (format: 'on' or 'off')
--hidden
Hidden (format: 'on' or 'off')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--share-option...
Additional options to pass on to SMB. NOTE - This parameter is DANGEROUS, use with caution. Incorrect usage may lead to DATA LOSS.
--users...
Users to add
-f, --force
Force this action without further confirmation. This action will affect all SMB users of this share. Use it with caution and consult the WEKA Customer Success team if needed.
-h, --help
Show help message
-J, --json
Format output as JSON
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients.
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: uid,id,share,mode,hostname
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action will delete all host access ips.
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
--ips...
IPs to add
--hosts...
Hosts to add
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-J, --json
Format output as JSON
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--debug
Run the command in debug mode
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
--debug
Run the command in debug mode
-f, --force
Force leaving the domain. Use this when Active Directory is unresponsive
-h, --help
Show help message
-J, --json
Format output as JSON
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
--category...
Retrieve only statistics of the specified categories
--stat...
Retrieve only the specified statistics
--process-ids...
Limit the report to the specified processes
--param...
For parameterized statistics, retrieve only the instantiations where the specified parameter is of the specified value. Multiple values can be supplied for the same key, e.g. '--param method:putBlocks --param method:initBlock'. (format: key:value)
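As the `--param` description notes, the same key may be supplied multiple times (e.g. `--param method:putBlocks --param method:initBlock`). A sketch of collecting such repeated `key:value` options into a key-to-values mapping; the helper is hypothetical:

```python
from collections import defaultdict

# Gather repeated "key:value" options so each key maps to all its values.
def collect_params(options):
    params = defaultdict(list)
    for option in options:
        key, _, value = option.partition(":")
        params[key].append(value)
    return dict(params)

# collect_params(["method:putBlocks", "method:initBlock"])
# -> {"method": ["putBlocks", "initBlock"]}
```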
-o, --output...
Specify which columns to output. May include any of the following: node,category,timestamp,stat,unit,value,containerId,container,hostname,roles
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--accumulated
Show accumulated statistics, not rate statistics
--per-process
Do not aggregate statistics across processes
-Z, --no-zeros
Do not retrieve results where the value is 0
--show-internal
Show internal statistics
--skip-validations
Skip category/stat name validations
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: node,hostname,role,mode,writeps,writebps,wlatency,readps,readbps,rlatency,ops,cpu,l6recv,l6send,upload,download,rdmarecv,rdmasend
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--show-total
Show each column's sum of values in the real-time statistics output
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: category,clabel,identifier,description,label,type,unit,params,related,permission,ntype,accumulate,histogram,histogramUnit
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
--show-internal
Show internal statistics
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--dry-run
Only test the command, don't affect the system
-h, --help
Show help message
-J, --json
Format output as JSON
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
-r, --readonly
In case unmounting fails, try to remount read-only
-h, --help
Show help message
-J, --json
Format output as JSON
-o, --output...
Specify which columns to output. May include any of the following: uid,user,source,role,s3Policy,posix_uid,posix_gid
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: orgId,orgName,user,source,role
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-J, --json
Format output as JSON
group-id-attribute*
Group ID attribute
reader-username*
Reader username
reader-password*
Reader password
--cluster-admin-group
LDAP group of users that should get ClusterAdmin role (this role is only available for the root tenant to configure)
--org-admin-group
LDAP group of users that should get OrgAdmin role
--regular-group
LDAP group of users that should get Regular role
--readonly-group
LDAP group of users that should get ReadOnly role
--start-tls
Issue StartTLS after connecting (should not be used with ldaps://) (format: 'yes' or 'no')
--ignore-start-tls-failure
Ignore start TLS failure (format: 'yes' or 'no')
--server-timeout-secs
LDAP connection timeout in seconds
--protocol-version
LDAP protocol version
--user-revocation-attribute
User revocation attribute: If provided, updating this attribute in the LDAP server automatically revokes all user tokens.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
--regular-group
LDAP group of users that should get Regular role
--readonly-group
LDAP group of users that should get ReadOnly role
--start-tls
Issue StartTLS after connecting (should not be used with ldaps://) (format: 'yes' or 'no')
--ignore-start-tls-failure
Ignore start TLS failure (format: 'yes' or 'no')
--server-timeout-secs
LDAP connection timeout in seconds
--user-revocation-attribute
User revocation attribute: If provided, updating this attribute in the LDAP server automatically revokes all user tokens.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
--group-id-attribute
Group ID attribute
--reader-username
Reader username
--reader-password
Reader password
--cluster-admin-group
LDAP group of users that should get ClusterAdmin role (this role is only available for the root tenant to configure)
--org-admin-group
LDAP group of users that should get OrgAdmin role
--regular-group
LDAP group of users that should get Regular role
--readonly-group
LDAP group of users that should get ReadOnly role
--start-tls
Issue StartTLS after connecting (should not be used with ldaps://) (format: 'yes' or 'no')
--certificate
Certificate or certificate chain for the LDAP server
--ignore-start-tls-failure
Ignore certificate verification errors (format: 'yes' or 'no')
--server-timeout-secs
LDAP connection timeout in seconds
--protocol-version
LDAP protocol version
--user-revocation-attribute
User revocation attribute: If provided, updating this attribute in the LDAP server automatically revokes all user tokens.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-h, --help
Show help message
-J, --json
Format output as JSON
--anonymous-posix-uid
POSIX UID for anonymous users
--anonymous-posix-gid
POSIX GID for anonymous users
--domain
Virtual host-style comma-separated domains
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--container...
The containers that will serve via the S3 protocol (pass WEKA's container ID as a number)
--all-servers
Install S3 on all servers
-f, --force
Force this action without further confirmation. Be aware that this will impact all S3 buckets within the S3 service. Exercise caution and consult the WEKA Customer Success team if assistance is required.
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--container...
The containers that will serve via the S3 protocol
--all-servers
Install S3 on all servers
-f, --force
Force this action without further confirmation. Be aware that this will impact all S3 buckets within the S3 service. Exercise caution and consult the WEKA Customer Success team if assistance is required.
-h, --help
Show help message
Destroying the S3 cluster removes the S3 service and its associated configuration, including IAM policies, buckets, and ILM rules. S3 access will no longer be available for clients.
This operation does not automatically delete the data stored within the buckets.
However, internal users with S3 roles will be permanently removed from the system.
-h, --help
Show help message
-J, --json
Format output as JSON
--auth-token
The webhook authentication token
-h, --help
Show help message
--verify
Verify the configuration before applying it
-J, --json
Format output as JSON
-h, --help
Show help message
-h, --help
Show help message
-J, --json
Format output as JSON
--fs-id
File system ID
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force when existing-path has quota
-h, --help
Show help message
-J, --json
Format output as JSON
-o, --output...
Specify which columns to output. May include any of the following: name,hard,used,path,fs
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g. 1KiB 234MiB 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
--unlink
Unlink the bucket, but leave the data directory in place
-h, --help
Show help message
-J, --json
Format output as JSON
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected S3 clients.
--profile
Name of the connection and authentication profile to use
--prefix
Prefix
--tags
Object tags
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-f, --force
Force this action without further confirmation. This action will delete the existing S3 bucket rules.
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: uid,id,expiry_days,prefix,tags
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-o, --output...
Specify which columns to output. May include any of the following: name
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-o, --output...
Specify which columns to output. May include any of the following: accessKey
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
-J, --json
Format output as JSON
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-J, --json
Format output as JSON
--container...
The containers whose log level severity will be changed
-o, --output...
Specify which columns to output. May include any of the following: host,stdout
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
--no-header
Don't show column headers when printing the output
-v, --verbose
Show all columns in output
-h, --help
Show help message
--build
Prints the CLI build number and exits
-v, --version
Prints the CLI version and exits
--legal
Prints software license information and exits
-h, --help
Show help message
--no-update
Don't update the locally installed containers
-h, --help
Show help message
-h, --help
Show help message
-h, --help
Show help message
--force
Force the action to actually happen
--ignore-wekafs-mounts
Proceed even with active wekafs mounts
--keep-files
Do not remove Weka version images; keep them in the installation directory
-h, --help
Show help message
-h, --help
Show help message
-h, --help
Show help message
-h, --help
Show help message
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
alert-type*
An alert-type to mute, use weka alerts types to list types
duration*
How long to mute this alert type for (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
alert-type*
An alert-type to unmute, use weka alerts types to list types
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
--cloud-url
The base url of the cloud service
--cloud-stats
Enable or disable uploading stats to the cloud (format: 'on' or 'off')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-s, --set
Set a new proxy setting
bucket-name*
AWS bucket name
region*
AWS region
access-key-id*
AWS access key
secret-key*
AWS secret
--session-token
S3 session token
--bucket-prefix
S3 bucket prefix
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--bytes-per-second
Maximum uploaded bytes per second
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
host-hostnames...
A list of hostnames to be included in the new cluster
--admin-password
The password for the cluster admin user; will be set to the default password if not provided
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--cluster-name
Cluster name
--data-drives
Number of RAID data drives
--parity-drives
Number of RAID protection parity drives
--scrubber-bytes-per-sec
Rate of RAID scrubbing in units per second (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
process-ids...
Only return these process IDs.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
bucket-ids...
Only return these bucket IDs.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
count...
The number of failure-domains
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--brutal-no-flush
Force stopping IO services immediately without graceful flushing of ongoing operations. Using this flag may cause data loss if used without explicit guidance from WekaIO customer support.
uuids...
A list of drive IDs or UUIDs to list. If no ID is specified, all drives are listed.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
A list of container ids to scan for drives
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
uuids...
A list of drive IDs or UUIDs to activate. If no ID is supplied, all inactive drives will be activated.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
uuids...
A list of drive IDs or UUIDs to deactivate.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-id*
The container the drive is attached to (given by ID)
device-paths...
Device paths of the drives to add
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
uuids...
A list of drive UUIDs to remove. A UUID is a hex string formatted as 8-4-4-4-12 e.g. 'abcdef12-1234-abcd-1234-1234567890ab'
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--qos-max-throughput
The maximum throughput allowed for the client, for either receive or transmit traffic.
--qos-preferred-throughput
The throughput that receives preferred state (NORMAL instead of LOW) in QoS.
--qos-max-ops
The maximum number of operations of any kind allowed for the client
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-J, --json
Format output as JSON
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--qos-max-throughput
The maximum throughput allowed for the client, for either receive or transmit traffic.
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
uid*
The Server UID
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
Only return these container IDs.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
hostnames...
A list of containers to query (by hostnames or IPs). If no container is supplied, all of the cluster containers will be queried
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-id*
Container ID as shown in weka cluster container
--name
Add this container to a named failure-domain. A failure-domain will be created if it doesn't exist yet.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-id*
Container ID as shown in weka cluster container
on*
'on' sets the container as weka-dedicated; 'off' unsets it (format: 'on' or 'off')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-id*
Container ID as shown in weka cluster container
bandwidth*
New bandwidth limitation per second (format: either "unlimited" or bandwidth per second in binary or decimal values: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-id*
Container ID as shown in weka cluster container
cores*
Number of CPU cores dedicated to weka; if set to 0, no drives can be added to this container
--frontend-dedicated-cores
Number of cores dedicated to weka frontend (out of the total cores)
--drives-dedicated-cores
Number of cores dedicated to weka drives (out of the total cores)
--compute-dedicated-cores
Number of cores dedicated to weka compute (out of the total cores)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
container-id*
Container ID as shown in weka cluster container
memory*
Memory dedicated to weka in bytes, set to 0 to let the system decide (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-id*
Container ID as shown in weka cluster container
auto-remove-timeout*
Minimum value is 60; use 0 to disable automatic removal
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-id*
Container ID as shown in weka cluster container
management-ips...
New IPs for the management processes
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-id*
Container ID as shown in weka cluster container
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
A list of container ids for which to apply resources config
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
A list of container ids for which to apply resources config
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
A list of container ids to activate
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
A list of container ids to deactivate
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
A list of container ids for which to clear the last failure
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
hostname*
Management network hostname
--ip
Management IP; If empty, the hostname is resolved; If container is highly-available or mixed-networking, use IP set '++...+';
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-id*
The container ID of the container to be removed
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
guid*
The cluster GUID
container-names-or-ips...
A list of containers (given by container-name or IP)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
container-ids...
Container IDs to get the network devices of
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-id*
The container's id
device*
Network device pci-slot/mac-address/interface-name(s)
--ips-type
IPs type: POOL: IPs from the default data networking IP pool will be used; USER: configured by the user (format: 'pool' or 'user')
--gateway
Default gateway IP. In AWS this value is auto-detected, otherwise the default data networking gateway will be used.
--netmask
Netmask in bits number. In AWS this value is auto-detected, otherwise the default data networking netmask will be used.
--name
If empty, a name will be auto-generated.
container-id*
The container's id
name*
Net device name, e.g. container0net0
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--range
IP range (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
--gateway
Default gateway IP
--netmask-bits
Subnet mask bits (0..32)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--range
IP range (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
--gateway
Default gateway IP
--netmask-bits
Subnet mask bits (0..32)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
plan-id*
Plan ID connected to a payment method
secret-key*
Secret key of the payment plan
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
license*
The new license to set to the system
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
task-id*
Id of the task to pause
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
task-id*
Id of the task to resume
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
task-id*
Id of the task to abort
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--cpu-limit
Percent of the CPU resources to dedicate to background tasks
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
version-name*
The version to set
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-i, --id
Optional ID for this dump; if not specified, a random ID is generated
-m, --timeout
How long to wait when downloading diags from all hosts. Default is 10 minutes; 0 means indefinite (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-d, --output-dir
Directory to save the diags dump to, default: /opt/weka/diags
-c, --core-limit
Limit to processing this number of core dumps, if found (default: 1)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
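As an illustration, the collection options above can be combined in a single invocation. Only the flags are taken from this reference; the `weka diags collect` subcommand name, timeout value, and output directory shown here are assumptions:

```sh
# Collect a diagnostics dump from all hosts, waiting up to 30 minutes,
# saving the result under the default diags directory (values illustrative).
weka diags collect --timeout 30m --output-dir /opt/weka/diags --core-limit 1
```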
id...
ID of the dump to show or a path to the diags dump. If not specified, a list of all collected diags is shown.
-v, --verbose
Print results of all diags, including successful ones
-h, --help
Show help message
id...
ID of the diags to cancel. Must be specified unless the `all` option is set.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-m, --timeout
How long to wait for diags to upload. Default is 10 minutes; 0 means indefinite (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-c, --core-limit
Limit to processing this number of core dumps, if found (default: 1)
--dump-id
ID of an existing dump to upload. This dump ID has to exist on this local server. If an ID is not specified, a new dump is created.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-n, --num-results
Get up to this number of events, default: 50
--start-time
Include events that occurred at this time point or later (format: 5m, -5m, -1d, -1w, 1:00, 01:00, 18:30, 18:30:07, 2018-12-31 10:00, 2018/12/31 10:00, 2018-12-31T10:00, 2019-Nov-17 11:11:00.309, 9:15Z, 10:00+2:00)
--end-time
Include events that occurred no later than this time point (format: 5m, -5m, -1d, -1w, 1:00, 01:00, 18:30, 18:30:07, 2018-12-31 10:00, 2018/12/31 10:00, 2018-12-31T10:00, 2019-Nov-17 11:11:00.309, 9:15Z, 10:00+2:00)
--severity
Include events with equal or higher severity, default: INFO (format: 'debug', 'info', 'warning', 'minor', 'major' or 'critical')
-d, --direction
Fetch events from the first available event (forward) or the latest created event (backward), default: backward (format: 'forward' or 'backward')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
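The event-listing flags above compose naturally into a single query. The flags come from this reference; the `weka events` subcommand name and the specific values are assumptions:

```sh
# List up to 100 events from the last day, WARNING severity and above,
# from oldest to newest (values illustrative).
weka events --num-results 100 --start-time -1d --severity warning --direction forward
```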
--start-time
Include events that occurred at this time point or later (format: 5m, -5m, -1d, -1w, 1:00, 01:00, 18:30, 18:30:07, 2018-12-31 10:00, 2018/12/31 10:00, 2018-12-31T10:00, 2019-Nov-17 11:11:00.309, 9:15Z, 10:00+2:00)
--end-time
Include events that occurred no later than this time point (format: 5m, -5m, -1d, -1w, 1:00, 01:00, 18:30, 18:30:07, 2018-12-31 10:00, 2018/12/31 10:00, 2018-12-31T10:00, 2019-Nov-17 11:11:00.309, 9:15Z, 10:00+2:00)
--next
Token for the next page of events
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
message*
User-defined text to attach as the triggered event's parameter
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--name
Filesystem name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
name*
Filesystem name
group-name*
Group name
total-capacity*
Total capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--obs-name
Object Store bucket name. Mandatory for tiered filesystems
--ssd-capacity
SSD capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--thin-provision-min-ssd
Thin provisioned minimum SSD capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
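The positional arguments and flags above describe filesystem creation. A sketch of one possible invocation, assuming the subcommand is `weka fs create` and using illustrative names and sizes (only the flags themselves are taken from this reference):

```sh
# Create a 100 TiB tiered filesystem in group "default", backed by an
# object store bucket, with 10 TiB of SSD capacity (names/sizes illustrative).
weka fs create fs01 default 100TiB --obs-name obs-bucket1 --ssd-capacity 10TiB
```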
name*
Filesystem name
group-name*
Group name
total-capacity*
Total capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
ssd-capacity*
SSD capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
obs-bucket*
Object Store bucket
locator*
Locator
name*
Filesystem name
--new-name
New name
--total-capacity
Total capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--ssd-capacity
SSD capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--thin-provision-min-ssd
Thin provisioned minimum SSD capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--thin-provision-max-ssd
Thin provisioned maximum SSD capacity (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
name*
Filesystem name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
file-system*
The name of the Filesystem to be restored
source-name*
The name of the source snapshot
--preserved-overwritten-snapshot-name
Name of a snapshot to create with the old content of the filesystem
--preserved-overwritten-snapshot-access-point
Access point of the preserved overwritten snapshot
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-h, --help
Show help message
path*
Path in the filesystem
--soft
Soft limit for the directory, or 0 for unlimited (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--hard
Hard limit for the directory, or 0 for unlimited (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--grace
Soft limit grace period (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--owner
Quota owner (e.g., email)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
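The quota parameters above can be applied to a directory in one command. Assuming the subcommand is `weka fs quota set`, with an illustrative path and owner (the flags are from this reference):

```sh
# Warn at 1 TiB, block at 2 TiB after a one-week grace period
# (path and owner illustrative).
weka fs quota set /mnt/weka/projects/teamA --soft 1TiB --hard 2TiB --grace 1w --owner team-a@example.com
```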
path*
Path in the filesystem
--soft
Soft limit for the directory (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--hard
Hard limit for the directory (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--grace
Soft limit grace period (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--owner
Quota owner (e.g., email)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
path*
Path in the filesystem
--generation
Remove a specific generation of quota
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
path*
Path in the filesystem
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
fs-name
Filesystem name
--snap-name
Optional snapshot name
-p, --path
Show this path only
-u, --under
List under (and including) this path only
--over
Show only quotas over this percentage of usage (format: 0..100)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
fs-name
Filesystem name
--snap-name
Optional snapshot name
-p, --path
Show this path only
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
name*
The filesystem group name to be created
--target-ssd-retention
Period of time to keep an SSD copy of the data (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--start-demote
Period of time to wait before copying data to the Object Store (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
The name of the filesystem group to be updated
--new-name
Updated name of the specified filesystem group
--target-ssd-retention
Period of time to keep an SSD copy of the data (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--start-demote
Period of time to wait before copying data to the Object Store (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
name*
The name of the filesystem group to be deleted
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--file-system
Filesystem name
--name
Snapshot name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
file-system*
Source Filesystem name
name*
Target Snapshot name
--access-point
Access point
--source-snapshot
Source snapshot
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
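The snapshot-creation arguments above might be used as follows. The `weka fs snapshot create` subcommand name and the names shown are assumptions; the `--access-point` flag is from this reference:

```sh
# Snapshot filesystem "fs01" with an explicit access-point directory name
# (names illustrative).
weka fs snapshot create fs01 pre-upgrade --access-point pre-upgrade
```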
file-system*
Source Filesystem name
source-name*
Source snapshot name
destination-name*
Destination snapshot name
--preserved-overwritten-snapshot-name
Name of a snapshot to create with the old content of the destination
--preserved-overwritten-snapshot-access-point
Access point of the preserved overwritten snapshot
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
file-system*
Source Filesystem name
name*
Snapshot name
--new-name
Updated snapshot name
--access-point
Access point
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
access-point-naming-convention*
Access point naming convention (format: 'date' or 'name')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
file-system*
Filesystem name
snapshot*
Snapshot name
--site
The site of the Object Store to upload to (format: 'local' or 'remote')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
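The upload arguments above take a filesystem, a snapshot, and an optional site. A sketch, assuming the subcommand is `weka fs snapshot upload` and using illustrative names:

```sh
# Upload the "pre-upgrade" snapshot of fs01 to the locally attached
# object store (names illustrative).
weka fs snapshot upload fs01 pre-upgrade --site local
```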
file-system*
Filesystem name
locator*
Locator
--name
Snapshot name (default: uploaded name)
--access-point
Access point (default: uploaded access point)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
file-system*
Source Filesystem name
name*
Snapshot name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
path*
Path to get information about
paths...
Extra paths to get information about
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
path...
A file path to fetch to SSD storage. Multiple paths can be passed, e.g. `find ...
--non-existing
Behavior for non-existing files (default: error) (format: 'error', 'warn' or 'ignore')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
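Since multiple paths can be passed (the description above itself suggests `find`), a prefetch over a whole directory tree might look like this. The `weka fs tier fetch` subcommand name and the path are assumptions; `--non-existing` is from this reference:

```sh
# Prefetch all files under a dataset directory back to SSD, ignoring
# paths that have since been removed (path illustrative).
find /mnt/weka/dataset -type f | xargs weka fs tier fetch --non-existing ignore
```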
path...
A file path to release from SSD storage. Multiple paths can be passed, e.g. `find ...
--non-existing
Behavior for non-existing files (default: error) (format: 'error', 'warn' or 'ignore')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--filesystem
Filesystem name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--obs-name
Name of the Object Store
--name
Name of the Object Store bucket
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Name of the Object Store bucket
--site
The site of the Object Store, default: local (format: 'local' or 'remote')
--obs-name
Name of the Object Store to associate this new bucket to
--hostname
Hostname (or IP) of the entrypoint to the storage
--port
Port of the entrypoint to S3 (single Accesser or Load-Balancer)
--bucket
Name of the bucket to work with
name*
Name of the Object Store bucket
--new-name
New name
--new-obs-name
New Object Store name
--hostname
Hostname (or IP) of the entrypoint to the storage
--port
Port of the entrypoint to S3 (single Accesser or Load-Balancer)
--protocol
One of: HTTP (default), HTTPS, HTTPS_UNVERIFIED
name*
Name of the Object Store bucket
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
fs-name*
Name of the Filesystem
obs-name*
Name of the Object Store bucket to attach
--mode
The operation mode for the Object Store bucket (format: 'writable' or 'remote')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
fs-name*
Name of the Filesystem
obs-name*
Name of the Object Store bucket to detach
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-h, --help
Show help message
name*
Name of the Object Store bucket
--locator
Locator
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name
Name of the Object Store bucket
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--name
Name of the Object Store
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
name*
Name of the Object Store
--new-name
New name
--hostname
Hostname (or IP) of the entrypoint to the bucket
--port
Port of the entrypoint to S3 (single Accesser or Load-Balancer)
--protocol
One of: HTTP (default), HTTPS, HTTPS_UNVERIFIED
--auth-method
Authentication method. S3AuthMethod can be None, AWSSignature2 or AWSSignature4
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
ssd-capacity*
SSD capacity to reserve (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--org
Organization name or ID
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--org
Organization name or ID
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--name
Group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--name
Group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
name*
Interface group name
type*
Group type
--subnet
Subnet mask in the 255.255.0.0 format
--gateway
Gateway IP
--allow-manage-gids
Allow the use of manage-gids in exports. With manage-gids, the list of group IDs received from the client is replaced by a list of group IDs determined by an appropriate lookup on the server (format: 'on' or 'off')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
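The interface-group creation arguments above might be combined as follows. The `weka nfs interface-group add` subcommand name, the group type value, and the addresses are assumptions; the flags are from this reference:

```sh
# Create an NFS interface group with a subnet mask, default gateway,
# and server-side group-ID resolution enabled (names/addresses illustrative).
weka nfs interface-group add nfsgroup1 NFS --subnet 255.255.255.0 --gateway 10.0.0.254 --allow-manage-gids on
```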
name*
Interface group name
--subnet
Subnet mask in the 255.255.0.0 format
--gateway
Gateway IP
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Interface group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
name*
Interface group name
ips*
IP range (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
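The abbreviated range forms reuse the leading octets of the start address: A.B.C.D-H, for example, means A.B.C.D through A.B.C.H. A sketch of how such a range spec might be expanded:

```python
def expand_ip_range(spec):
    """Expand 'A.B.C.D-E.F.G.H' or abbreviated forms like 'A.B.C.D-H'
    into (start, end) address strings. The end address inherits any
    octets it does not supply from the start address."""
    start, end = spec.split("-")
    start_octets = start.split(".")
    end_octets = end.split(".")
    # Prepend the missing leading octets from the start address.
    end_octets = start_octets[: 4 - len(end_octets)] + end_octets
    return ".".join(start_octets), ".".join(end_octets)
```

So "10.0.0.1-9" expands to the range 10.0.0.1 through 10.0.0.9.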
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Interface group name
ips*
IP range (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-h, --help
Show help message
name*
Interface group name
server-id*
Server ID on which the port resides
port*
Port's device (e.g., eth1)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Interface group name
server-id*
Server ID on which the port resides
port*
Port's device (e.g., eth1)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-h, --help
Show help message
--no-update
Don't update the locally installed containers
-h, --help
Show help message
-i, --id
A unique identifier for this dump
-d, --output-dir
Directory to save the diags dump to, default: /opt/weka/diags
-c, --core-dump-limit
Limit to processing this number of core dumps, if found (default: 1)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--path
Path to where local events are stored
--container-name
Name of the container whose events will be collected (default: default)
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: time,uuid,category,severity,permission,type,entity,node,parameters,hash
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
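The -s and -F specifications above are comma-separated lists; sort columns may carry a leading '+' (ascending) or '-' (descending). A small parser for both formats, as a sketch:

```python
def parse_sort(spec):
    """Parse a sort spec like '+time,-severity' into (column, ascending) pairs."""
    result = []
    for item in spec.split(","):
        if item.startswith(("+", "-")):
            result.append((item[1:], item[0] == "+"))
        else:
            result.append((item, True))  # ascending by default
    return result

def parse_filter(spec):
    """Parse a filter spec like 'column1=val1,column2=val2' into a dict."""
    return dict(item.split("=", 1) for item in spec.split(","))
```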
-f, --format
Specify in what format to output the result. Available options are: view
-o, --output...
Specify which columns to output. May include any of the following: name,state,running,disabled,uptime,monitoring,persistent,port,pid,status,versionName,failureText,failure,failureTime,upgradeState
-s, --sort...
Specify which column(s) to take into account when sorting the output. May include a '+' or '-' before the column name to sort in ascending or descending order respectively. Usage: [+|-]column1[,column2[,..]]
-F, --filter...
Specify what values to filter by in a specific column. Usage: column1=val1[,column2=val2[,..]]
-h, --help
Show help message
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
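Human-readable sizes here use binary units, where 1KiB is 1024 bytes. A sketch of the kind of formatting applied when --raw-units is not set:

```python
def human_size(nbytes):
    """Format a byte count using binary (1024-based) units."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]
    value = float(nbytes)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            return f"{value:.1f}{unit}"
        value /= 1024  # step up to the next unit
```

For example, human_size(2 * 1024**3) returns "2.0GiB".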
containers...
The containers to remove
--all
Remove all containers
-f, --force
Force this action without further confirmation. This deletes all data associated with the container(s) and may result in losing all data in the cluster.
-h, --help
Show help message
container...
The container to start
-w, --wait-time
How long to wait for the container to start (default: 15m) (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-t, --type...
The container types to start
-d, --start-and-enable-dependent
Start and enable dependent containers even when a container is specified by name
-h, --help
Show help message
container...
The container to stop
--reason
The reason Weka was stopped; it is displayed to the user in 'weka status'
-t, --type...
The container types to stop
-f, --force
Stop containers even if there are active mounts
-d, --stop-and-disable-dependent
Implicitly stop and disable dependent containers even when a container is specified by name
-h, --help
Show help message
container...
The container to restart
-w, --wait-time
How long to wait for the container to start (default: 15m) (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-t, --type...
The container types to restart
-h, --help
Show help message
container...
The container whose status to display
-t, --type...
The container types to show
-v, --verbose
Verbose mode
-h, --help
Show help message
-J, --json
Format output as JSON
container...
The container to enable
-t, --type...
The container types to enable
-h, --help
Show help message
container...
The container to disable
-t, --type...
The container types to disable
-h, --help
Show help message
enabled*
Whether monitoring should be on or off (format: 'on' or 'off')
container...
The container to apply the monitoring setting to
-t, --type...
The container types to apply the monitoring setting to
-h, --help
Show help message
-C, --container
The container to run in
--in
The container version to run the command in
-e, --environment...
Environment variable to add
-h, --help
Show help message
version-name...
The versions to remove
-C, --container
The container to run in
--clean-unused
Delete all container data directories for versions other than the currently set version
-f, --force
Force this action without further confirmation. This action is destructive and may result in losing all data in the cluster.
-h, --help
Show help message
-C, --container
The container name
--stable
List the resources from the last successful container boot
-h, --help
Show help message
-J, --json
Format output as JSON
-R, --raw-units
Print values in raw units (bytes, seconds, etc.). When not set, sizes are printed in human-readable format, e.g., 1KiB, 234MiB, 2GiB.
-U, --UTC
Print times in UTC. When not set, times are converted to the local time of this host.
path*
Path of file to import resources from
-C, --container
The container name
--with-identifiers
Import net device unique identifiers
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This action will override any resource changes that have not been applied, and cannot be undone.
path*
Path to export resources
-C, --container
The container name
--staging
List the currently staged resources that have not yet been applied
--stable
List the resources from the currently stable resources, which are the last known good resources
-h, --help
Show help message
-C, --container
The container name
-h, --help
Show help message
-C, --container
The container name
-h, --help
Show help message
-f, --force
Force this action without further confirmation. This action will restart the container on this host and cannot be undone.
cores*
Number of CPU cores dedicated to Weka. If set to 0, no drives can be added to this host
-C, --container
The container name
--frontend-dedicated-cores
Number of cores dedicated to the Weka frontend (out of the total cores)
--drives-dedicated-cores
Number of cores dedicated to Weka drives (out of the total cores)
--compute-dedicated-cores
Number of cores dedicated to Weka compute (out of the total cores)
-cores-ids, --core-ids...
Specify the ids of weka dedicated cores
base-port*
The first port that will be used by the Weka container, out of a total of 100 ports.
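Since a container claims 100 consecutive ports starting at the base port, base ports of co-located containers should be at least 100 apart. A quick sketch of that arithmetic (the helper names are illustrative):

```python
def port_range(base_port, count=100):
    """Ports claimed by a container: [base_port, base_port + count)."""
    return range(base_port, base_port + count)

def ranges_collide(base_a, base_b, count=100):
    """True if two containers' port ranges would overlap."""
    return abs(base_a - base_b) < count
```

For example, a container with base port 14000 uses ports 14000 through 14099, so a second container on the same server needs a base port of 14100 or higher.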
-C, --container
The container name
-h, --help
Show help message
memory*
Memory dedicated to weka in bytes, set to 0 to let the system decide (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
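In the capacity format, decimal units are powers of 1000 and binary units are powers of 1024: 1GB is 10^9 bytes while 1GiB is 2^30 bytes. A parser under that standard interpretation:

```python
import re

# Byte multipliers for decimal (powers of 1000) and binary (powers of 1024) units.
_DECIMAL = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9,
            "TB": 10**12, "PB": 10**15, "EB": 10**18}
_BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30,
           "TiB": 2**40, "PiB": 2**50, "EiB": 2**60}

def parse_capacity(text):
    """Parse a capacity string like '1GiB' or '1GB' into a byte count."""
    match = re.fullmatch(r"(\d+)([KMGTPE]?i?B)", text)
    value, unit = int(match.group(1)), match.group(2)
    return value * {**_DECIMAL, **_BINARY}[unit]
```

Note the gap between the two systems grows with scale: 1TB is about 9% smaller than 1TiB.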
-C, --container
The container name
-h, --help
Show help message
on*
'on' sets the host as Weka-dedicated; 'off' unsets it (format: 'on' or 'off')
-C, --container
The container name
-h, --help
Show help message
bandwidth*
New bandwidth limitation per second (format: either "unlimited" or bandwidth per second in binary or decimal values: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-C, --container
The container name
-h, --help
Show help message
management-ips...
New IPs for the management nodes
-C, --container
The container name
-h, --help
Show help message
management-ips...
New IP:port pairs for the management processes. If no port is specified, the default Weka port is used
-C, --container
The container name
-h, --help
Show help message
-C, --container
The container name
--name
Add this host to a named failure-domain. A failure-domain will be created if it doesn't exist yet.
--auto
Set this host to be a failure-domain of its own
-h, --help
Show help message
-C, --container
The container name
--stable
List the resources from the last successful container boot
-h, --help
Show help message
-J, --json
Format output as JSON
device*
Network device pci-slot/mac-address/interface-name(s)
-C, --container
The container name
--gateway
Default gateway IP. In AWS this value is auto-detected, otherwise the default data networking gateway will be used.
--netmask
Netmask in bits number. In AWS this value is auto-detected, otherwise the default data networking netmask will be used.
--name
If empty, a name will be auto-generated.
--label
The name of the switch or network group to which this network device is attached
name*
Net device name or identifier as appears in weka local resources net
-C, --container
The container name
-h, --help
Show help message
-h, --help
Show help message
-n, --name
The name to give the container
--disable
Should the container be created as disabled
--no-start
Do not start the container after its creation
-h, --help
Show help message
-n, --name
The name to give the container
--cores
Number of CPU cores dedicated to Weka. If set to 0, no drives can be added to this container
--frontend-dedicated-cores
Number of cores dedicated to the Weka frontend (out of the total cores)
--drives-dedicated-cores
Number of cores dedicated to Weka drives (out of the total cores)
--compute-dedicated-cores
Number of cores dedicated to Weka compute (out of the total cores)
--memory
Memory dedicated to weka in bytes, set to 0 to let the system decide (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-C, --container
The container name
-t, --target-version
Specify a target version for the upgrade instead of upgrading to the backend's version. NOTE: this parameter is DANGEROUS; use with caution. Incorrect usage may cause upgrade failure.
--upgrade-container-timeout
How long to wait for the container to upgrade; default: 120s (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--prepare-container-timeout
How long to wait for the container to prepare for the upgrade; default: 120s (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--container-action-timeout
How long to wait for the container action to run before timing out and retrying; default: 30s (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
source*
Source filesystem to mount
target*
Location to mount the source filesystem on
-o, --option
Mount options
-t, --type
The filesystem type
-n, --no-mtab
Mount without writing in /etc/mtab. This is necessary for example when /etc is on a read-only filesystem
-s, --sloppy
Tolerate sloppy mount options rather than failing
-h, --help
Show help message
-h, --help
Show help message
-h, --help
Show help message
name*
Group name
dns*
DNS rule; supports the wildcards *, ?, and []
--ip
IP with mask or CIDR rule, in the 1.1.1.1/255.255.0.0 or 1.1.1.1/16 format
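The mask part of the rule may be dotted (255.255.0.0) or a CIDR prefix (/16); the two notations are equivalent. A conversion sketch:

```python
def mask_to_prefix(mask):
    """Convert a dotted mask like '255.255.0.0' to its CIDR prefix length (16)."""
    bits = "".join(f"{int(octet):08b}" for octet in mask.split("."))
    return bits.count("1")

def prefix_to_mask(prefix):
    """Convert a CIDR prefix length like 16 to a dotted mask ('255.255.0.0')."""
    value = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

So 1.1.1.1/255.255.0.0 and 1.1.1.1/16 describe the same rule.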
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Group name
ip*
IP with mask or CIDR rule, in the 1.1.1.1/255.255.0.0 or 1.1.1.1/16 format
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-h, --help
Show help message
name*
Group name
dns*
DNS rule; supports the wildcards *, ?, and []
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Group name
ip*
IP with mask or CIDR rule, in the 1.1.1.1/255.255.0.0 or 1.1.1.1/16 format
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--name
Group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
name*
Group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
name*
Group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--filesystem
File system name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
filesystem*
File system name
group*
Client group name
--path
Path (default: /)
--permission-type
Permission type (format: 'ro' or 'rw')
--root-squashing
Root squashing (format: 'on' or 'off')
--squash
Permission squashing. NOTE - The option 'all' can be used only on interface groups with --allow-manage-gids=on (format: 'none', 'root' or 'all')
filesystem*
File system name
group*
Client group name
--path
Path (default: /)
--permission-type
Permission type (format: 'ro' or 'rw')
--root-squashing
Root squashing (format: 'on' or 'off')
--squash
Permission squashing. NOTE - The option 'all' can be used only on interface groups with --allow-manage-gids=on (format: 'none', 'root' or 'all')
filesystem*
File system name
group*
Client group name
--path
Path (default: /)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--name
Group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--name
Group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
name*
Interface group name
type*
Group type (for example, NFS)
--subnet
Subnet mask in the 255.255.0.0 format
--gateway
Gateway IP
--allow-manage-gids
Allow the use of manage-gids in exports. With manage-gids, the list of group IDs received from the client is replaced by a list of group IDs determined by an appropriate lookup on the server (format: 'on' or 'off')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
name*
Interface group name
--subnet
Subnet mask in the 255.255.0.0 format
--gateway
Gateway IP
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Interface group name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
name*
Interface group name
ips*
IP range (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Interface group name
ips*
IP range (format: A.B.C.D-E.F.G.H or A.B.C.D-F.G.H or A.B.C.D-G.H or A.B.C.D-H)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-h, --help
Show help message
name*
Interface group name
server-id*
Server ID on which the port resides
port*
Port's device (e.g., eth1)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Interface group name
server-id*
Server ID on which the port resides
port*
Port's device (e.g., eth1)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
level*
The debug level; one of: EVENT, INFO, DEBUG, MID_DEBUG, FULL_DEBUG
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--mountd-port
Configure the port number of the mountd service
--config-fs
NFSv4 config filesystem name, use "" to invalidate
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
--interface-group
interface-group-name
--container-id
container-id
--fip
floating-ip
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
name*
Organization name
username*
Username of organization admin
password
Password of organization admin
--ssd-quota
SSD quota (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--total-quota
Total quota (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
org*
Current organization name or ID
new-name*
New organization name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
org*
Organization name or ID
--ssd-quota
SSD quota (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--total-quota
Total quota (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
org*
Organization name or ID
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
type*
KMS type, one of ["vault", "kmip"]
address*
Server address, usually a hostname:port or a URL
key-identifier*
Key to secure the filesystem keys with, e.g., a key name (for Vault) or a key UID (for KMIP)
--token
auth: API token to access the KMS
--namespace
Namespace (Vault, optional)
--client-cert
auth: Path to the client certificate PEM file
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--allow-downgrade
Allows downgrading existing encrypted filesystems to local encryption instead of a KMS
--new-key-uid
(KMIP-only) Unique identifier for the new key to be used to wrap filesystem keys
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
path*
Path to output file
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--private-key
Path to TLS private key pem file
--certificate
Path to TLS certificate pem file
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
--failed-attempts
Number of consecutive failed logins before the user account is locked out
--lockout-duration
How long the account remains locked out after failed logins (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
login-banner*
Text banner to be displayed before the user logs into the web UI
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
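A hedged example of setting the pre-login banner (the subcommand name `weka security login-banner set` is an assumption; the banner text is the required `login-banner` argument):

```shell
# Assumed subcommand; the quoted string becomes the pre-login banner text.
weka security login-banner set "Authorized personnel only. Activity may be monitored."
```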
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
--cert-file
Path to certificate file
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
path*
Path to output file
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--container-ids...
The SMB containers being added (pass weka's host id as a number)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--container-ids...
The SMB containers being removed (pass weka's container id as a number)
-t, --timeout
Timeout (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--encryption
Encryption (format: 'enabled', 'disabled', 'desired' or 'required')
netbios-name*
The netbios name to give to the SMB cluster
domain*
The domain to join the SMB cluster to
config-fs-name*
SMB config filesystem name
--domain-netbios-name
The domain NetBIOS name; if not given, it defaults to the first part of the given domain name
--idmap-backend
The SMB domain ID-mapping backend type (rid, rfc2307, etc.). Note that rfc2307 requires UID/GID configuration on the Active Directory and is persistent, while rid requires no Active Directory configuration, but UIDs/GIDs can break if the ID ranges change.
--default-domain-mapping-from-id
The SMB default domain first id
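A sketch of creating an SMB cluster with these options (the subcommand name `weka smb cluster create` and all hostnames and values are assumptions for illustration; `rid` is chosen here because it needs no Active Directory schema changes):

```shell
# Assumed subcommand: weka smb cluster create <netbios-name> <domain> <config-fs-name>
weka smb cluster create WEKASMB corp.example.com .config_fs \
  --domain-netbios-name CORP \
  --idmap-backend rid
```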
level*
The debug level
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This action may disrupt IO service for connected SMB clients.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
domain-name*
The name of the domain being added
from-id*
The first id
to-id*
The last id
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
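A hedged example of adding a trusted domain with an ID range (the subcommand path `weka smb cluster trusted-domains add` and the domain/range values are assumptions for illustration):

```shell
# Assumed subcommand; maps the trusted domain's users/groups
# into the given id range.
weka smb cluster trusted-domains add PARTNERDOM 200000 299999
```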
trusteddomain-id*
The id of the domain to remove
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
mode*
Allow or deny host access (format: 'allow' or 'deny')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
mode*
Allow or deny host access (format: 'allow' or 'deny')
-t, --timeout
Timeout (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
hosts...
hosts to remove
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
share-id*
The id of the share to update
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
share-id*
The id of the share
user-list-type*
The list type (format: 'read_only', 'read_write', 'valid' or 'invalid')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
share-id*
The id of the share
user-list-type*
The list type (format: 'read_only', 'read_write', 'valid' or 'invalid')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
share_id*
The id of the share from which users are removed
user-list-type*
The list type from which users are removed (format: 'read_only', 'read_write', 'valid' or 'invalid')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
share-name*
The name of the share being added
fs-name*
Filesystem name to share
--description
A description for SMB to show regarding the share
--internal-path
The path inside the filesystem to share
--file-create-mask
POSIX mode mask files will be created with. E.g. "0744"
--directory-create-mask
POSIX mode mask directories will be created with. E.g. "0755"
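A sketch of adding a share with these options (the subcommand name `weka smb share add` is assumed from the parameters above; the share name, filesystem name, path, and masks are hypothetical):

```shell
# Assumed subcommand: weka smb share add <share-name> <fs-name>
weka smb share add projects fs01 \
  --description "Team project data" \
  --internal-path /projects \
  --file-create-mask 0644 \
  --directory-create-mask 0755
```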
share-id*
The id of the share to remove
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
share-id*
The id of the share
mode*
Allow or deny host access (format: 'allow' or 'deny')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
share-id*
The id of the share
mode*
Allow or deny host access (format: 'allow' or 'deny')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
share_id*
The id of the share from which hosts are removed
hosts...
Hosts to add
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
username*
The name of the administrator user whose credentials are used to join the domain
password
The administrator user password
--server
The domain controller server
--create-computer
Precreate the computer account in a specific OU
--extra-options
Consult the Samba 'net ads join' manual for extra options
-t, --timeout
Join command timeout (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
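A hedged example of joining an Active Directory domain (the subcommand name `weka smb domain join` and the account, server, and OU values are assumptions for illustration; the password may be prompted for interactively if omitted):

```shell
# Assumed subcommand; quote the password to avoid shell escaping issues.
weka smb domain join Administrator 'AdminP@ss1' \
  --server dc1.corp.example.com \
  --create-computer "OU=Storage,DC=corp,DC=example,DC=com" \
  -t 5m
```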
username*
The name of the administrator user whose credentials are used to leave the domain
password
The administrator user password
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--start-time
Query for stats starting at this time (format: 5m, -5m, -1d, -1w, 1:00, 01:00, 18:30, 18:30:07, 2018-12-31 10:00, 2018/12/31 10:00, 2018-12-31T10:00, 2019-Nov-17 11:11:00.309, 9:15Z, 10:00+2:00)
--end-time
Query for stats up to this time point (format: 5m, -5m, -1d, -1w, 1:00, 01:00, 18:30, 18:30:07, 2018-12-31 10:00, 2018/12/31 10:00, 2018-12-31T10:00, 2019-Nov-17 11:11:00.309, 9:15Z, 10:00+2:00)
--interval
The time period (in seconds) covered by the report
--resolution-secs
Length of each interval in the report period
--role
Limit the report to processes with the specified role
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
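A sketch of a statistics query using the time-window options above (the top-level command name `weka stats` is assumed; the window and resolution values are hypothetical):

```shell
# Assumed command; report the last day (86400 seconds)
# in 60-second buckets.
weka stats --start-time -1d --interval 86400 --resolution-secs 60
```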
process-ids...
Only show realtime stats of these processes
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
name-or-category...
Filter by these names or categories
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--days
Number of days to keep the statistics
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
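A hedged example of setting the statistics retention period (the subcommand name `weka stats retention set` is an assumption; 30 days is an arbitrary illustrative value):

```shell
# Assumed subcommand; keep collected statistics for 30 days.
weka stats retention set --days 30
```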
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--dry-run
Only test the command, don't affect the system
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
target*
The target mount point to unmount
-t, --type
Indicate that the actions should only be taken on file systems of the specified type
-v, --verbose
Verbose mode
-n, --no-mtab
Unmount without writing in /etc/mtab
-l, --lazy-unmount
Detach the filesystem from the filesystem hierarchy now, and clean up all references to it as soon as it is no longer busy
-f, --force
Force unmount
-h, --help
Show help message
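The unmount flags above mirror standard `umount(8)` semantics, so a lazy unmount can be sketched as follows (the mount point is hypothetical):

```shell
# Detach the mount point immediately; references are cleaned up
# once the filesystem is no longer busy.
umount -l /mnt/weka
```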
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
username
User's username
password
User's password
-g, --org
Organization name or ID
-p, --path
The path where the login token will be saved (default: ~/.weka/auth-token.json). This path can also be specified using the WEKA_TOKEN environment variable. After logging in, use the WEKA_TOKEN environment variable to specify where the login token is located.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
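A sketch of logging in and saving the token to a non-default path (the subcommand name `weka user login` is assumed from the parameters above; the username, password, organization, and path are hypothetical):

```shell
# Assumed subcommand; save the token outside the default location,
# then point WEKA_TOKEN at it for subsequent commands.
weka user login alice 'S3cret!pw1' --org research -p ~/.weka/research-token.json
export WEKA_TOKEN=~/.weka/research-token.json
```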
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
password
New password: must contain at least 8 characters, and have at least one uppercase letter, one lowercase letter, and one number or special character. Typing special characters as arguments to this command might require escaping
--username
Username to change the password for, by default password is changed for the current user
--current-password
User's current password. Only necessary if changing current user's password
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
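A hedged example of changing one's own password (the subcommand name `weka user passwd` is assumed; both passwords are hypothetical, and single quotes avoid shell escaping of special characters):

```shell
# Assumed subcommand; --current-password is required when
# changing the current user's own password.
weka user passwd 'N3wP@ssw0rd' --current-password 'OldP@ssw0rd'
```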
username*
Username of user to change the role of
role*
New role to set for the user (format: 'clusteradmin', 'orgadmin', 'regular', 'readonly' or 's3')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
username*
Username of user to update
--posix-uid
POSIX UID for user (S3 Only)
--posix-gid
POSIX GID for user (S3 Only)
--role
New role to set for the user (format: 'clusteradmin', 'orgadmin', 'regular', 'readonly' or 's3')
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
username*
Username of the new user to create
role*
The role of the new user (format: 'clusteradmin', 'orgadmin', 'regular', 'readonly' or 's3')
password
Password for the new user: must contain at least 8 characters, and have at least one uppercase letter, one lowercase letter, and one number or special character. Typing special characters as arguments to this command might require escaping
--posix-uid
POSIX UID for user (S3 Only)
--posix-gid
POSIX GID for user (S3 Only)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
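A sketch of creating an S3-role user with POSIX identity mapping (the subcommand name `weka user add` is assumed from the parameters above; the username, password, and IDs are hypothetical):

```shell
# Assumed subcommand: weka user add <username> <role> [password]
weka user add s3svc s3 'S3P@ssw0rd1' --posix-uid 2001 --posix-gid 2001
```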
username*
User's name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
username*
Username of user to revoke the tokens for
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--access-token-timeout
How long until the access token expires (format: 3s, 2h, 4m, 1d, 1d5h, 1w)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
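A hedged example of generating a long-lived token (the subcommand name `weka user generate-token` is an assumption; 30 days is an arbitrary illustrative lifetime):

```shell
# Assumed subcommand; the access token expires 30 days from issuance.
weka user generate-token --access-token-timeout 30d
```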
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
server-uri*
LDAP server URI ([ldap://]hostname[:port] or ldaps://hostname[:port])
base-dn*
Base DN
user-object-class*
User object class
user-id-attribute*
User ID attribute
group-object-class*
Group object class
group-membership-attribute*
Group membership attribute
server-uri*
LDAP server URI ([ldap://]hostname[:port] or ldaps://hostname[:port])
domain*
Domain
reader-username*
Reader username
reader-password*
Reader password
--cluster-admin-group
LDAP group of users that should get ClusterAdmin role (this role is only available for the root tenant to configure)
--org-admin-group
LDAP group of users that should get OrgAdmin role
--server-uri
LDAP server URI ([ldap://]hostname[:port] or ldaps://hostname[:port])
--base-dn
Base DN
--user-object-class
User object class
--user-id-attribute
User ID attribute
--group-object-class
Group object class
--group-membership-attribute
Group membership attribute
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This prevents all LDAP users from logging in until LDAP is enabled again.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation. This prevents all LDAP users from logging in until LDAP is configured again.
-h, --help
Show help message
-J, --json
Format output as JSON
-h, --help
Show help message
version*
Version to download
--from...
Download from this distribution server (can be given multiple times). Otherwise distribution servers are taken from the $WEKA_DIST_SERVERS environment variable, the /etc/wekaio/dist-servers file, or /etc/wekaio/service.conf in that order of precedence
--set-current
Set the downloaded version as the current version. Will fail if any containers are currently running.
--no-progress-bar
Don't render download progress bar
--set-dist-servers
Override the default distribution servers upon successful download
-h, --help
Show help message
version*
The version name to use
-C, --container
The container to set the version for
--allow-running-containers
Do not verify that all containers are stopped
--default-only
Only set the default version used for creating containers
--agent-only
Only set the agent version
--set-dependent
Set the version for all containers depending on the specified container
-h, --help
Show help message
-C, --container
Get the version for a specific container
-h, --help
Show help message
version-name...
The versions to remove
--clean-unused
Delete all versions that are neither the currently set version nor the version of any container
-f, --force
Force this action without further confirmation. This action may be undone by re-downloading the version.
-h, --help
Show help message
version-name*
The version to prepare
containers...
The containers to prepare the version for
-h, --help
Show help message
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-v, --verbose
Verbose mode
default-fs-name*
S3 default filesystem name
config-fs-name*
S3 config filesystem name
--port
S3 service port
--key
S3 service key
--secret
S3 service secret
--max-buckets-limit
Limit the number of buckets that can be created
--key
S3 service key
--secret
S3 service secret
--port
S3 service port
--anonymous-posix-uid
POSIX UID for anonymous users
--anonymous-posix-gid
POSIX GID for anonymous users
--domain
Comma-separated virtual host-style domains. Leave empty to disable
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --force
Force this action without further confirmation.
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--endpoint
The webhook endpoint
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
container-ids...
The containers to add to the S3 cluster
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
container-ids...
The containers to remove from the S3 cluster
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-h, --help
Show help message
name*
Bucket name to create
--policy
Set an existing S3 policy for a bucket
--policy-json
Set an S3 policy for the bucket, provided in JSON format
--hard-quota
Hard limit for the directory (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
--existing-path
Existing path
--fs-name
File system name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
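Capacity arguments such as --hard-quota accept both decimal units (1KB = 1000 bytes) and binary units (1KiB = 1024 bytes). As a small sketch (my own helper, not part of the WEKA CLI), such a capacity string can be converted to bytes like this:

```python
import re

# Decimal (SI) and binary (IEC) multipliers for the capacity format.
_DECIMAL = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9,
            "TB": 10**12, "PB": 10**15, "EB": 10**18}
_BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30,
           "TiB": 2**40, "PiB": 2**50, "EiB": 2**60}

def parse_capacity(text):
    """Return the capacity in bytes, e.g. '1GiB' -> 1073741824."""
    m = re.fullmatch(r"(\d+)([KMGTPE]i?B|B)", text)
    if not m:
        raise ValueError(f"bad capacity: {text!r}")
    count, unit = int(m.group(1)), m.group(2)
    return count * (_BINARY.get(unit) or _DECIMAL[unit])
```

Note the factor-of-1.024 gap per step: a 1TB quota is about 9% smaller than a 1TiB quota.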
name*
Bucket name to destroy
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
bucket*
S3 Bucket Name
expiry-days*
Expiry days
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
bucket*
S3 Bucket Name
name*
Rule name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
bucket*
S3 Bucket Name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
bucket*
S3 Bucket Name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
bucket-name*
Full path to bucket to get policy for
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
bucket-name*
Full path to bucket to set policy for
bucket-policy*
Set an existing S3 policy. Available predefined options are: none
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
bucket-name*
Full path to bucket to unset the policy for
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
bucket-name*
Full path to bucket to get policy for
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
bucket-name*
Full path to bucket to set policy for
policy-file*
Path of the file containing the policy rules
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-h, --help
Show help message
name*
Bucket name
hard-quota*
Hard limit for the directory (format: capacity in decimal or binary units: 1B, 1KB, 1MB, 1GB, 1TB, 1PB, 1EB, 1KiB, 1MiB, 1GiB, 1TiB, 1PiB, 1EiB)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
name*
Bucket name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
policy-name*
Policy name to show
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
policy-name*
The policy name
policy-file*
Path of the file containing the policy rules
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
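The policy-file argument above points at a file containing the policy rules. As an illustrative sketch, assuming the AWS-S3-style policy document format that S3-compatible services commonly accept (the bucket name "my-bucket", the file name, and the read-only statement are example values, not taken from the WEKA docs), such a file could be produced like this:

```python
import json

# Example read-only policy document; all names here are illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-bucket",
                         "arn:aws:s3:::my-bucket/*"],
        }
    ],
}

# Write the policy to the file that will be passed as <policy-file>.
with open("readonly-policy.json", "w") as f:
    json.dump(policy, f, indent=2)
```

The resulting file would then be supplied as the policy-file argument of the policy add command.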
policy*
Policy name to remove
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
policy*
Policy name to attach
user*
User name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
user*
User name
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
access_key*
Access key of the service account to show
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
--policy-file
Policy file path
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
access_key*
Access key of the service account to remove
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-h, --help
Show help message
--access-key
Access key
--secret-key
Secret key
--policy-file
Policy file path
--duration
Duration, valid values: 15 minutes to 52 weeks and 1 day (format: 3s, 2h, 4m, 1d, 1d5h, 1w)
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-h, --help
Show help message
-H, --HOST
Specify the host. Alternatively, use the WEKA_HOST env variable
-P, --PORT
Specify the port. Alternatively, use the WEKA_PORT env variable
-C, --CONNECT-TIMEOUT
Timeout for connecting to cluster, default: 10 secs (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
-T, --TIMEOUT
Timeout to wait for response, default: 1 minute (format: 3s, 2h, 4m, 1d, 1d5h, 1w, infinite/unlimited)
--profile
Name of the connection and authentication profile to use
-f, --format
Specify in what format to output the result. Available options are: view
weka agent [--help]
weka agent install-agent [--no-update] [--help]
weka agent update-containers [--help]
weka agent supported-specs [--help]
weka agent uninstall [--force] [--ignore-wekafs-mounts] [--keep-files] [--help]
weka agent autocomplete [--help]
weka agent autocomplete install [--help]
weka agent autocomplete uninstall [--help]
weka agent autocomplete export [--help]
weka alerts [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--muted]
[--help]
[--no-header]
[--verbose]
weka alerts types [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka alerts mute <alert-type>
<duration>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka alerts unmute <alert-type>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka alerts describe [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka cloud [--help]
weka cloud status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka cloud enable [--cloud-url cloud]
[--cloud-stats on/off]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cloud disable [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cloud proxy [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--set url]
[--help]
[--json]
[--unset]
weka cloud update <bucket-name>
<region>
<access-key-id>
<secret-key>
[--session-token token]
[--bucket-prefix prefix]
[--proxy proxy]
[--bytes-per-second bytes-per-second]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cloud upload-rate [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka cloud upload-rate set [--bytes-per-second bps]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster [--help]
weka cluster create [--admin-password admin-password]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--host-ips host-ips]...
[--help]
[--json]
[<host-hostnames>]...
weka cluster update [--cluster-name cluster-name]
[--data-drives data-drives]
[--parity-drives parity-drives]
[--scrubber-bytes-per-sec scrubber-bytes-per-sec]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster process [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--container container]...
[--output output]...
[--sort sort]...
[--filter filter]...
[--backends]
[--clients]
[--leadership]
[--leader]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
[<process-ids>]...
weka cluster bucket [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
[<bucket-ids>]...
weka cluster failure-domain [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--show-removed]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka cluster hot-spare [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--skip-resource-validation]
[--help]
[--json]
[--raw-units]
[--UTC]
[<count>]...
weka cluster start-io [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster stop-io [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--brutal-no-flush]
[--keep-external-containers]
[--force]
[--help]
weka cluster drive [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--container container]...
[--output output]...
[--sort sort]...
[--filter filter]...
[--show-removed]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
[<uuids>]...
weka cluster drive scan [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--raw-units]
[--UTC]
[<container-ids>]...
weka cluster drive activate [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--raw-units]
[--UTC]
[<uuids>]...
weka cluster drive deactivate [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--skip-resource-validation]
[--force]
[--help]
[<uuids>]...
weka cluster drive add <container-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--force]
[--allow-format-non-wekafs-drives]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
[<device-paths>]...
weka cluster drive remove [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[<uuids>]...
weka cluster mount-defaults [--help]
weka cluster mount-defaults set [--qos-max-throughput qos-max-throughput]
[--qos-preferred-throughput qos-preferred-throughput]
[--qos-max-ops qos-max-ops]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster mount-defaults show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--json]
[--help]
weka cluster mount-defaults reset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--qos-max-throughput]
[--qos-preferred-throughput]
[--qos-max-ops]
[--help]
weka cluster servers [--help]
weka cluster servers list [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--role role]...
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka cluster servers show <uid>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--json]
[--help]
weka cluster container [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--backends]
[--clients]
[--leadership]
[--leader]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
[<container-ids>]...
weka cluster container info-hw [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--info-type info-type]...
[--help]
[--json]
[--raw-units]
[--UTC]
[<hostnames>]...
weka cluster container failure-domain <container-id>
[--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--auto]
[--help]
weka cluster container dedicate <container-id>
<on>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster container bandwidth <container-id>
<bandwidth>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster container cores <container-id>
<cores>
[--frontend-dedicated-cores frontend-dedicated-cores]
[--drives-dedicated-cores drives-dedicated-cores]
[--compute-dedicated-cores compute-dedicated-cores]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--cores-ids cores-ids]...
[--no-frontends]
[--only-drives-cores]
[--only-compute-cores]
[--only-frontend-cores]
[--allow-mix-setting]
[--help]
weka cluster container memory <container-id>
<memory>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster container auto-remove-timeout <container-id>
<auto-remove-timeout>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster container management-ips <container-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[<management-ips>]...
weka cluster container resources <container-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--stable]
[--help]
[--json]
[--raw-units]
[--UTC]
weka cluster container restore [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--all]
[--help]
[<container-ids>]...
weka cluster container apply [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--skip-resource-validation]
[--all]
[--force]
[--help]
[<container-ids>]...
weka cluster container activate [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--no-wait]
[--skip-resource-validation]
[--skip-activate-drives]
[--help]
[<container-ids>]...
weka cluster container deactivate [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--no-wait]
[--skip-resource-validation]
[--allow-unavailable]
[--help]
[<container-ids>]...
weka cluster container clear-failure [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[<container-ids>]...
weka cluster container add <hostname>
[--ip ip]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--no-wait]
[--help]
[--json]
[--raw-units]
[--UTC]
weka cluster container remove <container-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--no-wait]
[--no-unimprint]
[--help]
weka cluster container factory-reset <guid>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
[--raw-units]
[--UTC]
[<container-names-or-ips>]...
weka cluster container net [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
[<container-ids>]...
weka cluster container net add <container-id>
<device>
[--ips-type ips-type]
[--gateway gateway]
[--netmask netmask]
[--name name]
[--label label]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--ips ips]...
[--help]
[--json]
[--raw-units]
[--UTC]
weka cluster container net remove <container-id>
<name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster default-net [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka cluster default-net set [--range range]
[--gateway gateway]
[--netmask-bits netmask-bits]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster default-net update [--range range]
[--gateway gateway]
[--netmask-bits netmask-bits]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster default-net reset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster license [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka cluster license payg <plan-id>
<secret-key>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster license reset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster license set <license>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster task [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka cluster task pause <task-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster task resume <task-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster task abort <task-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster task limits [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka cluster task limits set [--cpu-limit cpu-limit]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster client-target-version [--help]
weka cluster client-target-version show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster client-target-version set <version-name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka cluster client-target-version reset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka diags [--help]
weka diags collect [--id id]
[--timeout timeout]
[--output-dir output-dir]
[--core-limit core-limit]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container-id container-id]...
[--clients]
[--backends]
[--tar]
[--verbose]
[--help]
[--raw-units]
[--UTC]
[--json]
weka diags list [--verbose] [--help] [<id>]...
weka diags rm [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--all]
[--help]
[<id>]...
weka diags upload [--timeout timeout]
[--core-limit core-limit]
[--dump-id dump-id]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container-id container-id]...
[--clients]
[--backends]
[--help]
[--json]
weka events [--num-results num-results]
[--start-time <start>]
[--end-time <end>]
[--severity severity]
[--direction direction]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--type-list type-list]...
[--exclude-type-list exclude-type-list]...
[--category-list category-list]...
[--output output]...
[--show-internal]
[--cloud-time]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka events list-local [--start-time <start>]
[--end-time <end>]
[--next next]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--stem-mode]
[--show-internal]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka events list-types [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--category category]...
[--type type]...
[--output output]...
[--sort sort]...
[--filter filter]...
[--show-internal]
[--help]
[--no-header]
[--verbose]
weka events trigger-event <message>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--capacities]
[--force-fresh]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs create <name>
<group-name>
<total-capacity>
[--obs-name obs-name]
[--ssd-capacity ssd-capacity]
[--thin-provision-min-ssd thin-provision-min-ssd]
[--thin-provision-max-ssd thin-provision-max-ssd]
[--auth-required auth-required]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--encrypted]
[--data-reduction]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs download <name>
<group-name>
<total-capacity>
<ssd-capacity>
<obs-bucket>
<locator>
[--auth-required auth-required]
[--additional-obs-bucket additional-obs-bucket]
[--snapshot-name snapshot-name]
[--access-point access-point]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--skip-resource-validation]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs update <name>
[--new-name new-name]
[--total-capacity total-capacity]
[--ssd-capacity ssd-capacity]
[--thin-provision-min-ssd thin-provision-min-ssd]
[--thin-provision-max-ssd thin-provision-max-ssd]
[--data-reduction data-reduction]
[--auth-required auth-required]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs delete <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--purge-from-obs]
[--force]
[--help]
weka fs restore <file-system>
<source-name>
[--preserved-overwritten-snapshot-name preserved-overwritten-snapshot-name]
[--preserved-overwritten-snapshot-access-point preserved-overwritten-snapshot-access-point]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
weka fs quota [--help]
weka fs quota set <path>
[--soft soft]
[--hard hard]
[--grace grace]
[--owner owner]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs quota set-default <path>
[--soft soft]
[--hard hard]
[--grace grace]
[--owner owner]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs quota unset <path>
[--generation generation]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs quota unset-default <path>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs quota list [fs-name]
[--snap-name snap-name]
[--path path]
[--under under]
[--over over]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--all]
[--quick]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs quota list-default [fs-name]
[--snap-name snap-name]
[--path path]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs group [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs group create <name>
[--target-ssd-retention target-ssd-retention]
[--start-demote start-demote]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs group update <name>
[--new-name new-name]
[--target-ssd-retention target-ssd-retention]
[--start-demote start-demote]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs group delete <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs snapshot [--file-system file-system]
[--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs snapshot create <file-system>
<name>
[--access-point access-point]
[--source-snapshot source-snapshot]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--is-writable]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs snapshot copy <file-system>
<source-name>
<destination-name>
[--preserved-overwritten-snapshot-name preserved-overwritten-snapshot-name]
[--preserved-overwritten-snapshot-access-point preserved-overwritten-snapshot-access-point]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs snapshot update <file-system>
<name>
[--new-name new-name]
[--access-point access-point]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs snapshot access-point-naming-convention [--help]
weka fs snapshot access-point-naming-convention status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs snapshot access-point-naming-convention update <access-point-naming-convention>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs snapshot upload <file-system>
<snapshot>
[--site site]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--allow-non-chronological]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs snapshot download <file-system>
<locator>
[--name name]
[--access-point access-point]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--allow-non-chronological]
[--allow-divergence]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs snapshot delete <file-system>
<name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka fs tier [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs tier location <path>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
[<paths>]...
weka fs tier fetch [--non-existing non-existing]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--verbose]
[--help]
[--raw-units]
[--UTC]
[<path>]...
weka fs tier release [--non-existing non-existing]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--verbose]
[--help]
[--raw-units]
[--UTC]
[<path>]...
weka fs tier capacity [--filesystem filesystem]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--force-fresh]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs tier s3 [--obs-name obs-name]
[--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs tier s3 add <name>
[--site site]
[--obs-name obs-name]
[--hostname hostname]
[--port port]
[--bucket bucket]
[--auth-method auth-method]
[--region region]
[--access-key-id access-key-id]
[--secret-key secret-key]
[--protocol protocol]
[--obs-type obs-type]
[--bandwidth bandwidth]
[--download-bandwidth download-bandwidth]
[--upload-bandwidth upload-bandwidth]
[--remove-bandwidth remove-bandwidth]
[--errors-timeout errors-timeout]
[--prefetch-mib prefetch-mib]
[--max-concurrent-downloads max-concurrent-downloads]
[--max-concurrent-uploads max-concurrent-uploads]
[--max-concurrent-removals max-concurrent-removals]
[--max-extents-in-data-blob max-extents-in-data-blob]
[--max-data-blob-size max-data-blob-size]
[--enable-upload-tags enable-upload-tags]
[--sts-operation-type sts-operation-type]
[--sts-role-arn sts-role-arn]
[--sts-role-session-name sts-role-session-name]
[--sts-session-duration sts-session-duration]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--dry-run]
[--skip-verification]
[--verbose-errors]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs tier s3 update <name>
[--new-name new-name]
[--new-obs-name new-obs-name]
[--hostname hostname]
[--port port]
[--protocol protocol]
[--bucket bucket]
[--auth-method auth-method]
[--region region]
[--access-key-id access-key-id]
[--secret-key secret-key]
[--bandwidth bandwidth]
[--download-bandwidth download-bandwidth]
[--upload-bandwidth upload-bandwidth]
[--remove-bandwidth remove-bandwidth]
[--prefetch-mib prefetch-mib]
[--errors-timeout errors-timeout]
[--max-concurrent-downloads max-concurrent-downloads]
[--max-concurrent-uploads max-concurrent-uploads]
[--max-concurrent-removals max-concurrent-removals]
[--max-extents-in-data-blob max-extents-in-data-blob]
[--max-data-blob-size max-data-blob-size]
[--enable-upload-tags enable-upload-tags]
[--sts-operation-type sts-operation-type]
[--sts-role-arn sts-role-arn]
[--sts-role-session-name sts-role-session-name]
[--sts-session-duration sts-session-duration]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--dry-run]
[--skip-verification]
[--verbose-errors]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs tier s3 delete <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka fs tier s3 attach <fs-name>
<obs-name>
[--mode mode]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--raw-units]
[--UTC]
weka fs tier s3 detach <fs-name>
<obs-name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--force]
weka fs tier s3 snapshot [--help]
weka fs tier s3 snapshot list <name>
[--locator locator]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs tier ops [name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs tier obs [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs tier obs update <name>
[--new-name new-name]
[--hostname hostname]
[--port port]
[--protocol protocol]
[--auth-method auth-method]
[--region region]
[--access-key-id access-key-id]
[--secret-key secret-key]
[--bandwidth bandwidth]
[--download-bandwidth download-bandwidth]
[--upload-bandwidth upload-bandwidth]
[--remove-bandwidth remove-bandwidth]
[--max-concurrent-downloads max-concurrent-downloads]
[--max-concurrent-uploads max-concurrent-uploads]
[--max-concurrent-removals max-concurrent-removals]
[--max-extents-in-data-blob max-extents-in-data-blob]
[--max-data-blob-size max-data-blob-size]
[--upload-memory-limit upload-memory-limit]
[--enable-upload-tags enable-upload-tags]
[--sts-operation-type sts-operation-type]
[--sts-role-arn sts-role-arn]
[--sts-role-session-name sts-role-session-name]
[--sts-session-duration sts-session-duration]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--raw-units]
[--UTC]
weka fs reserve [--help]
weka fs reserve status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka fs reserve set <ssd-capacity>
[--org org]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka fs reserve unset [--org org]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka interface-group [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka interface-group assignment [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka interface-group add <name>
<type>
[--subnet subnet]
[--gateway gateway]
[--allow-manage-gids allow-manage-gids]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka interface-group update <name>
[--subnet subnet]
[--gateway gateway]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka interface-group delete <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka interface-group ip-range [--help]
weka interface-group ip-range add <name>
<ips>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka interface-group ip-range delete <name>
<ips>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka interface-group port [--help]
weka interface-group port add <name>
<server-id>
<port>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka interface-group port delete <name>
<server-id>
<port>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka local [--help]
weka local install-agent [--no-update] [--help]
weka local diags [--id id]
[--output-dir output-dir]
[--core-dump-limit core-dump-limit]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--collect-cluster-info]
[--tar]
[--verbose]
[--help]
weka local events [--path path]
[--container-name container-name]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka local ps [--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka local rm [--all] [--force] [--help] [<containers>]...
weka local start [--wait-time wait-time] [--type type]... [--start-and-enable-dependent] [--help] [<container>]...
weka local stop [--reason reason] [--type type]... [--force] [--stop-and-disable-dependent] [--help] [<container>]...
weka local restart [--wait-time wait-time] [--type type]... [--help] [<container>]...
weka local status [--type type]... [--verbose] [--help] [--json] [<container>]...
weka local enable [--type type]... [--help] [<container>]...
weka local disable [--type type]... [--help] [<container>]...
weka local monitoring <enabled> [--type type]... [--help] [<container>]...
weka local run [--container container] [--in in] [--environment environment]... [--help] [<command>]...
weka local reset-data [--container container] [--clean-unused] [--force] [--help] [<version-name>]...
weka local resources [--container container] [--stable] [--help] [--json] [--raw-units] [--UTC]
weka local resources import <path> [--container container] [--with-identifiers] [--help] [--force]
weka local resources export <path> [--container container] [--staging] [--stable] [--help]
weka local resources restore [--container container] [--help]
weka local resources apply [--container container] [--help] [--force]
weka local resources cores <cores>
[--container container]
[--frontend-dedicated-cores frontend-dedicated-cores]
[--drives-dedicated-cores drives-dedicated-cores]
[--compute-dedicated-cores compute-dedicated-cores]
[--core-ids core-ids]...
[--no-frontends]
[--only-drives-cores]
[--only-compute-cores]
[--only-frontend-cores]
[--allow-mix-setting]
[--help]
weka local resources base-port <base-port> [--container container] [--help]
weka local resources memory <memory> [--container container] [--help]
weka local resources dedicate <on> [--container container] [--help]
weka local resources bandwidth <bandwidth> [--container container] [--help]
weka local resources management-ips [--container container] [--help] [<management-ips>]...
weka local resources join-ips [--container container] [--help] [<management-ips>]...
weka local resources failure-domain [--container container] [--name name] [--auto] [--help]
weka local resources net [--container container] [--stable] [--help] [--json]
weka local resources net add <device>
[--container container]
[--gateway gateway]
[--netmask netmask]
[--name name]
[--label label]
[--vfs vfs]
[--ips ips]...
[--help]
weka local resources net remove <name> [--container container] [--help]
weka local setup [--help]
weka local setup weka [--name name] [--disable] [--no-start] [--help]
weka local setup container [--name name]
[--cores cores]
[--frontend-dedicated-cores frontend-dedicated-cores]
[--drives-dedicated-cores drives-dedicated-cores]
[--compute-dedicated-cores compute-dedicated-cores]
[--memory memory]
[--bandwidth bandwidth]
[--failure-domain failure-domain]
[--timeout timeout]
[--container-id container-id]
[--base-port base-port]
[--resources-path resources-path]
[--weka-version weka-version]
[--core-ids core-ids]...
[--management-ips management-ips]...
[--join-ips join-ips]...
[--net net]...
[--disable]
[--no-start]
[--no-frontends]
[--only-drives-cores]
[--only-compute-cores]
[--only-frontend-cores]
[--allow-mix-setting]
[--dedicate]
[--force]
[--ignore-used-ports]
[--help]
weka local upgrade [--container container]
[--target-version target-version]
[--upgrade-container-timeout upgrade-container-timeout]
[--prepare-container-timeout prepare-container-timeout]
[--container-action-timeout container-action-timeout]
[--allow-not-ready]
[--dont-upgrade-agent]
[--upgrade-dependents]
[--all]
[--help]
weka mount <source>
<target>
[--option option]
[--type type]
[--no-mtab]
[--sloppy]
[--fake]
[--verbose]
[--help]
[--raw-units]
[--UTC]
weka nfs [--help]
weka nfs rules [--help]
weka nfs rules add [--help]
weka nfs rules add dns <name>
<dns>
[--ip ip]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka nfs rules add ip <name>
<ip>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka nfs rules delete [--help]
weka nfs rules delete dns <name>
<dns>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka nfs rules delete ip <name>
<ip>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka nfs client-group [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka nfs client-group add <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka nfs client-group delete <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka nfs permission [--filesystem filesystem]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka nfs permission add <filesystem>
<group>
[--path path]
[--permission-type permission-type]
[--root-squashing root-squashing]
[--squash squash]
[--anon-uid anon-uid]
[--anon-gid anon-gid]
[--obs-direct obs-direct]
[--manage-gids manage-gids]
[--privileged-port privileged-port]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--supported-versions supported-versions]...
[--force]
[--help]
weka nfs permission update <filesystem>
<group>
[--path path]
[--permission-type permission-type]
[--root-squashing root-squashing]
[--squash squash]
[--anon-uid anon-uid]
[--anon-gid anon-gid]
[--obs-direct obs-direct]
[--manage-gids manage-gids]
[--custom-options custom-options]
[--privileged-port privileged-port]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--supported-versions supported-versions]...
[--help]
weka nfs permission delete <filesystem>
<group>
[--path path]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka nfs interface-group [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka nfs interface-group assignment [--name name]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka nfs interface-group add <name>
<type>
[--subnet subnet]
[--gateway gateway]
[--allow-manage-gids allow-manage-gids]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka nfs interface-group update <name>
[--subnet subnet]
[--gateway gateway]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka nfs interface-group delete <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka nfs interface-group ip-range [--help]
weka nfs interface-group ip-range add <name>
<ips>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka nfs interface-group ip-range delete <name>
<ips>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka nfs interface-group port [--help]
weka nfs interface-group port add <name>
<server-id>
<port>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka nfs interface-group port delete <name>
<server-id>
<port>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka nfs debug-level [--help]
weka nfs debug-level show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--nfs-hosts nfs-hosts]...
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka nfs debug-level set <level>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--nfs-hosts nfs-hosts]...
[--help]
weka nfs global-config [--help]
weka nfs global-config set [--mountd-port mountd-port]
[--config-fs config-fs]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--default-supported-versions default-supported-versions]...
[--help]
[--force]
weka nfs global-config show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka nfs clients [--help]
weka nfs clients show [--interface-group interface-group]
[--container-id container-id]
[--fip fip]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka org [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka org create <name>
<username>
[password]
[--ssd-quota ssd-quota]
[--total-quota total-quota]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka org rename <org>
<new-name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka org set-quota <org>
[--ssd-quota ssd-quota]
[--total-quota total-quota]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka org delete <org>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
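For example, a new organization with capacity quotas might be created as follows (the organization name, admin username, and quota values are illustrative placeholders; omitting the optional password positional is assumed to prompt interactively):

```
weka org create analytics org-admin --ssd-quota 10TB --total-quota 100TB
```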
weka security [--help]
weka security kms [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka security kms set <type>
<address>
<key-identifier>
[--token token]
[--namespace namespace]
[--client-cert client-cert]
[--client-key client-key]
[--ca-cert ca-cert]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security kms unset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--allow-downgrade]
[--help]
weka security kms rewrap [--new-key-uid new-key-uid]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
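A sketch of configuring a KMS and later rotating the cluster-wide key (the `vault` type, server address, key identifier, and token are placeholder assumptions, not verified values):

```
weka security kms set vault https://vault.example.com:8200 weka-key --token <vault-token>
weka security kms rewrap
```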
weka security tls [--help]
weka security tls status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka security tls download <path>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security tls set [--private-key private-key]
[--certificate certificate]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security tls unset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security lockout-config [--help]
weka security lockout-config set [--failed-attempts failed-attempts]
[--lockout-duration lockout-duration]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security lockout-config reset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security lockout-config show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
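For instance, account lockout after repeated failed logins might be configured and verified like this (the attempt count and duration values are illustrative only):

```
weka security lockout-config set --failed-attempts 5 --lockout-duration 300
weka security lockout-config show
```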
weka security login-banner [--help]
weka security login-banner set <login-banner>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security login-banner reset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security login-banner enable [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka security login-banner disable [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka security login-banner show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka security ca-cert [--help]
weka security ca-cert set [--cert-file cert-file]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security ca-cert status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka security ca-cert download <path>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka security ca-cert unset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka smb [--help]
weka smb cluster [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka smb cluster containers [--help]
weka smb cluster containers add [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container-ids container-ids]...
[--help]
[--force]
weka smb cluster containers remove [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container-ids container-ids]...
[--help]
[--force]
weka smb cluster wait [--timeout timeout]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka smb cluster update [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--encryption encryption]
[--smb-ips-pool smb-ips-pool]...
[--smb-ips-range smb-ips-range]...
[--help]
weka smb cluster create <netbios-name>
<domain>
<config-fs-name>
[--domain-netbios-name domain-netbios-name]
[--idmap-backend idmap-backend]
[--default-domain-mapping-from-id default-domain-mapping-from-id]
[--default-domain-mapping-to-id default-domain-mapping-to-id]
[--joined-domain-mapping-from-id joined-domain-mapping-from-id]
[--joined-domain-mapping-to-id joined-domain-mapping-to-id]
[--encryption encryption]
[--smb-conf-extra smb-conf-extra]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container-ids container-ids]...
[--smb-ips-pool smb-ips-pool]...
[--smb-ips-range smb-ips-range]...
[--smb]
[--help]
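A minimal SMB cluster creation could look like the following (the NetBIOS name, domain, config filesystem name, and pool IPs are placeholder assumptions), with `wait` used to block until the cluster is ready:

```
weka smb cluster create WEKASMB corp.example.com config-fs \
    --smb-ips-pool 10.0.1.10 --smb-ips-pool 10.0.1.11
weka smb cluster wait
```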
weka smb cluster debug <level>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container-ids container-ids]...
[--help]
[--json]
weka smb cluster destroy [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka smb cluster trusted-domains [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka smb cluster trusted-domains add <domain-name>
<from-id>
<to-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
weka smb cluster trusted-domains remove <trusteddomain-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka smb cluster status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka smb cluster host-access [--help]
weka smb cluster host-access list [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka smb cluster host-access reset <mode>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
weka smb cluster host-access add <mode>
[--timeout timeout]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--ips ips]...
[--hosts hosts]...
[--force]
[--help]
[--json]
weka smb cluster host-access remove [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
[<hosts>]...
weka smb share [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka smb share update <share-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--encryption encryption]
[--read-only read-only]
[--allow-guest-access allow-guest-access]
[--hidden hidden]
[--help]
weka smb share lists [--help]
weka smb share lists show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka smb share lists reset <share-id>
<user-list-type>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka smb share lists add <share-id>
<user-list-type>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--users users]...
[--help]
[--json]
weka smb share lists remove <share-id>
<user-list-type>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--users users]...
[--help]
[--json]
weka smb share add <share-name>
<fs-name>
[--description description]
[--internal-path internal-path]
[--file-create-mask file-create-mask]
[--directory-create-mask directory-create-mask]
[--mount-option mount-option]
[--acl acl]
[--obs-direct obs-direct]
[--encryption encryption]
[--read-only read-only]
[--user-list-type user-list-type]
[--allow-guest-access allow-guest-access]
[--hidden hidden]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--share-option share-option]...
[--users users]...
[--force]
[--help]
[--json]
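An illustrative share definition (the share name, filesystem name, description, and internal path below are placeholders, not defaults):

```
weka smb share add projects wekafs --description "Project data" --internal-path shared/projects
```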
weka smb share remove <share-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka smb share host-access [--help]
weka smb share host-access list [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka smb share host-access reset <share-id>
<mode>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
weka smb share host-access add <share-id>
<mode>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--ips ips]...
[--hosts hosts]...
[--help]
[--json]
weka smb share host-access remove <share-id>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[<hosts>]...
weka smb domain [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka smb domain join <username>
[password]
[--server server]
[--create-computer create-computer]
[--extra-options extra-options]
[--timeout timeout]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--debug]
[--help]
[--json]
weka smb domain leave <username>
[password]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--debug]
[--force]
[--help]
[--json]
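Joining the SMB cluster to Active Directory might look like this (the account name and domain controller are placeholders; omitting the password positional is assumed to prompt interactively):

```
weka smb domain join Administrator --server dc1.corp.example.com
```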
weka stats [--start-time <start>]
[--end-time <end>]
[--interval interval]
[--resolution-secs <secs>]
[--role role]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--category category]...
[--stat stat]...
[--process-ids process-ids]...
[--param param]...
[--output output]...
[--sort sort]...
[--filter filter]...
[--accumulated]
[--per-process]
[--no-zeros]
[--show-internal]
[--skip-validations]
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
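An illustrative per-process statistics query (the category name below is a placeholder; valid category and stat names should be taken from `weka stats list-types`):

```
weka stats --category ops --interval 60 --per-process
```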
weka stats realtime [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--show-total]
[--no-header]
[--verbose]
[<process-ids>]...
weka stats list-types [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--show-internal]
[--help]
[--no-header]
[--verbose]
[<name-or-category>]...
weka stats retention [--help]
weka stats retention set [--days days]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--dry-run]
[--help]
[--json]
weka stats retention status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka stats retention restore-default [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--dry-run]
[--help]
[--json]
weka status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka status rebuild [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--raw-units]
[--UTC]
weka umount <target> [--type type] [--verbose] [--no-mtab] [--lazy-unmount] [--force] [--readonly] [--help]
weka upgrade [--help]
weka upgrade supported-features [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka user [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka user login [username]
[password]
[--org org]
[--path path]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka user logout [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka user whoami [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--help]
[--no-header]
[--verbose]
weka user passwd [password]
[--username username]
[--current-password current-password]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka user change-role <username>
<role>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka user update <username>
[--posix-uid posix-uid]
[--posix-gid posix-gid]
[--role role]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka user add <username>
<role>
[password]
[--posix-uid posix-uid]
[--posix-gid posix-gid]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
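For example, creating a local user with a POSIX identity (the username, role name, and UID/GID values are illustrative placeholders):

```
weka user add alice regular --posix-uid 2001 --posix-gid 2001
```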
weka user delete <username>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka user revoke-tokens <username>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka user generate-token [--access-token-timeout access-token-timeout]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka user ldap [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka user ldap setup <server-uri>
<base-dn>
<user-object-class>
<user-id-attribute>
<group-object-class>
<group-membership-attribute>
<group-id-attribute>
<reader-username>
[--cluster-admin-group cluster-admin-group]
[--org-admin-group org-admin-group]
[--regular-group regular-group]
[--readonly-group readonly-group]
[--start-tls start-tls]
[--ignore-start-tls-failure ignore-start-tls-failure]
[--server-timeout-secs server-timeout-secs]
[--protocol-version protocol-version]
[--user-revocation-attribute user-revocation-attribute]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka user ldap setup-ad <server-uri>
<domain>
<reader-username>
[--cluster-admin-group cluster-admin-group]
[--org-admin-group org-admin-group]
[--regular-group regular-group]
[--readonly-group readonly-group]
[--start-tls start-tls]
[--ignore-start-tls-failure ignore-start-tls-failure]
[--server-timeout-secs server-timeout-secs]
[--user-revocation-attribute user-revocation-attribute]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
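A sketch of Active Directory integration using the shortened `setup-ad` form (the server URI, domain, reader account, and group name are placeholder assumptions), followed by enabling LDAP authentication:

```
weka user ldap setup-ad ldap://dc1.example.com example.com ldap-reader \
    --readonly-group weka-readonly
weka user ldap enable
```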
weka user ldap update [--server-uri server-uri]
[--base-dn base-dn]
[--user-object-class user-object-class]
[--user-id-attribute user-id-attribute]
[--group-object-class group-object-class]
[--group-membership-attribute group-membership-attribute]
[--group-id-attribute group-id-attribute]
[--reader-username reader-username]
[--reader-password reader-password]
[--cluster-admin-group cluster-admin-group]
[--org-admin-group org-admin-group]
[--regular-group regular-group]
[--readonly-group readonly-group]
[--start-tls start-tls]
[--certificate certificate]
[--ignore-start-tls-failure ignore-start-tls-failure]
[--server-timeout-secs server-timeout-secs]
[--protocol-version protocol-version]
[--user-revocation-attribute user-revocation-attribute]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka user ldap enable [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka user ldap disable [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
weka user ldap reset [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
weka version [--help] [--json]
weka version supported-specs [--help]
weka version get <version> [--from from]... [--set-current] [--no-progress-bar] [--set-dist-servers] [--help]
weka version set <version>
[--container container]
[--allow-running-containers]
[--default-only]
[--agent-only]
[--set-dependent]
[--help]
weka version unset [--help]
weka version current [--container container] [--help]
weka version rm [--clean-unused] [--force] [--help] [<version-name>]...
weka version prepare <version-name> [--help] [<containers>]...
weka s3 [--help]
weka s3 cluster [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--verbose]
[--help]
[--json]
weka s3 cluster create <default-fs-name>
<config-fs-name>
[--port port]
[--key key]
[--secret secret]
[--max-buckets-limit max-buckets-limit]
[--anonymous-posix-uid anonymous-posix-uid]
[--anonymous-posix-gid anonymous-posix-gid]
[--domain domain]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container container]...
[--all-servers]
[--force]
[--help]
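An illustrative S3 cluster bring-up across all servers (the filesystem names and port value are placeholder assumptions, not verified defaults), with a status check afterwards:

```
weka s3 cluster create default-fs config-fs --port 9000 --all-servers
weka s3 cluster status
```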
weka s3 cluster update [--key key]
[--secret secret]
[--port port]
[--anonymous-posix-uid anonymous-posix-uid]
[--anonymous-posix-gid anonymous-posix-gid]
[--domain domain]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--container container]...
[--all-servers]
[--force]
[--help]
weka s3 cluster destroy [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
weka s3 cluster status [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 cluster audit-webhook [--help]
weka s3 cluster audit-webhook enable [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--endpoint endpoint]
[--auth-token auth-token]
[--help]
[--verify]
weka s3 cluster audit-webhook disable [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka s3 cluster audit-webhook show [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 cluster containers [--help]
weka s3 cluster containers add [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[<container-ids>]...
weka s3 cluster containers remove [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[<container-ids>]...
weka s3 cluster containers list [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 bucket [--help]
weka s3 bucket create <name>
[--policy policy]
[--policy-json policy-json]
[--hard-quota hard-quota]
[--existing-path existing-path]
[--fs-name fs-name]
[--fs-id fs-id]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--force]
[--help]
[--json]
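For example, a bucket with a hard capacity quota might be created like this (the bucket name and quota value are illustrative placeholders):

```
weka s3 bucket create my-bucket --hard-quota 1TB
```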
weka s3 bucket list [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--raw-units]
[--UTC]
[--no-header]
[--verbose]
weka s3 bucket destroy <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--unlink]
[--help]
[--json]
[--force]
weka s3 bucket lifecycle-rule [--help]
weka s3 bucket lifecycle-rule add <bucket>
<expiry-days>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--prefix prefix]
[--tags tags]
[--help]
[--json]
weka s3 bucket lifecycle-rule remove <bucket>
<name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 bucket lifecycle-rule reset <bucket>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
[--force]
weka s3 bucket lifecycle-rule list <bucket>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka s3 bucket policy [--help]
weka s3 bucket policy get <bucket-name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 bucket policy set <bucket-name>
<bucket-policy>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 bucket policy unset <bucket-name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 bucket policy get-json <bucket-name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 bucket policy set-custom <bucket-name>
<policy-file>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 bucket quota [--help]
weka s3 bucket quota set <name>
<hard-quota>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka s3 bucket quota unset <name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
weka s3 policy [--help]
weka s3 policy list [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka s3 policy show <policy-name>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 policy add <policy-name>
<policy-file>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 policy remove <policy>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 policy attach <policy>
<user>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 policy detach <user>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
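A sketch of registering a policy from a local JSON file and attaching it to an S3 user (the policy name, file path, and username are placeholder assumptions):

```
weka s3 policy add rw-policy ./rw-policy.json
weka s3 policy attach rw-policy s3-user
```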
weka s3 service-account [--help]
weka s3 service-account list [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]
weka s3 service-account show <access_key>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 service-account add [--policy-file policy-file]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 service-account remove <access_key>
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 sts [--help]
weka s3 sts assume-role [--access-key access-key]
[--secret-key secret-key]
[--policy-file policy-file]
[--duration duration]
[--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--help]
[--json]
weka s3 log-level [--help]
weka s3 log-level get [--HOST HOST]
[--PORT PORT]
[--CONNECT-TIMEOUT CONNECT-TIMEOUT]
[--TIMEOUT TIMEOUT]
[--profile profile]
[--format format]
[--container container]...
[--output output]...
[--sort sort]...
[--filter filter]...
[--help]
[--no-header]
[--verbose]