Deploy Local WEKA Home on K8s

Manage the deployment, upgrade, and maintenance of Local WEKA Home (LWH) on a Kubernetes (K8s) cluster. This deployment method provides a scalable, on-premises observability solution for WEKA clusters.

Overview

The LWH deployment provides an on-premises observability and monitoring solution for WEKA clusters. Organizations use this model to operate within their own infrastructure instead of relying on the WEKA-hosted cloud service. Running on a K8s cluster offers enhanced scalability, resilience, and control over system resources and data.

Deploying LWH on K8s supports scale-out environments and large cluster configurations. This architecture leverages Kubernetes orchestration capabilities for high availability, automated recovery, and simplified lifecycle management of the LWH components.

Note: Deployment on K8s is supported for LWH version 4.x and above.

The deployment is managed through a configuration file to ensure a consistent, reproducible, and upgradeable installation process.

Solution architecture

The diagram below illustrates the overall solution architecture and how the core components interact within the Kubernetes (K8s) environment.

Local WEKA Home v4.x solution architecture

Architecture components

The LWH v4.x solution ingests data from registered WEKA clusters and processes it through the following layers:

  1. Data ingestion layer: WEKA clusters send metrics, events, and alerts to LWH API endpoints.

  2. API and ingress layer: Handles HTTP ingestion and routing. It supports multiple ingress controllers (ALB, Traefik, or Nginx) and can use an Envoy-based gateway service. API endpoints receive data and forward it to the persistent queue layer.

  3. Processing layer: Uses NATS (persistent queues) for durable message storage and buffering. Worker services consume messages from these queues to process statistical data, events, and alerts.

  4. Storage layer: Consists of specialized databases including a Postgres Database for metadata and a Victoria Metrics Cluster for raw time-series metrics. A secondary Victoria Metrics instance is used for internal application monitoring.

  5. User interface layer: Provides a Grafana Dashboard for visualization and an LWH UI for managing rules and configurations.

Data flow

  1. WEKA clusters send statistics, events, and alerts to API endpoints.

  2. API components authenticate and validate incoming data.

  3. Data is ingested into NATS persistent queues for reliable buffering.

  4. Worker services consume messages from queues and process them.

  5. Processed data is written to the appropriate databases:

    • Metrics are stored in the Victoria Metrics Cluster.

    • Events, alerts, and cluster metadata are stored in the Postgres Database.

  6. The rules engine evaluates conditions and triggers configured integrations.

  7. Grafana queries the databases to provide visual health and performance data.

Sizing and scaling guidelines

Determine the resource requirements and scaling behavior for a Local WEKA Home (LWH) deployment to ensure consistent performance across the platform.

Scaling fundamentals

The load on LWH scales linearly with the number of unique (host_id, node_id) metric pairs. These pairs represent the intersection of every monitored server and every active WEKA process.

  • Metric pair: The primary unit of measure for stats processing capacity.

  • WEKA process: Includes cores, backends, clients, and management processes. On average, a cluster generates metric pairs at a 1:1 ratio with its total process count.

Deployment estimation

For initial planning, use the following guidelines to ensure the stats workers can handle the ingestion and processing load with sufficient headroom.

  • Baseline capacity: Supports up to 40,000 WEKA processes by default.

  • CPU core estimate: Allocate approximately 2 CPU cores for every 1,000 WEKA processes.

  • Time series density: Each process typically generates 2,000 unique time series.

For high stats throughput, use the detailed sizing formulas in LWH stats: sizing and performance optimization.
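As a worked example of the guideline ratios above (2 CPU cores per 1,000 processes, 2,000 time series per process), the following shell arithmetic estimates the requirements for a hypothetical 10,000-process estate:

```shell
# Worked sizing example using the guideline ratios above.
# PROCESSES is a hypothetical estate size, not a measured value.
PROCESSES=10000
CORES=$(( PROCESSES * 2 / 1000 ))   # ~2 CPU cores per 1,000 WEKA processes
SERIES=$(( PROCESSES * 2000 ))      # ~2,000 unique time series per process
echo "Estimated: ${CORES} CPU cores, ${SERIES} time series"
```

This prints "Estimated: 20 CPU cores, 20000000 time series", comfortably inside the 40,000-process baseline capacity.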

Component scaling behavior

LWH components are pre-configured to handle standard production loads. Adjustments are only required when approaching the limits of the default installation.

Component | Default capacity | Scaling behavior
API and Workers | 100,000 processes | Scale dynamically based on load using Horizontal Pod Autoscaling (HPA).
VictoriaMetrics (VM) | 80,000 processes | Operates as a StatefulSet. High loads may require manual adjustment of CPU, memory, or shard count.
NATS | 100,000 processes | Managed through the STATS stream. The default stream limit is 3 GiB.
Postgres | N/A | Typically maintains low utilization. Relies on quick failover and fast CSI reattachment via the WEKA filesystem.

Key tuning parameters

While the defaults handle common loads, tune the following parameters for very large or small deployments:

  • VMCluster: Adjust the CPU, memory, shard count, or capacity. You can reduce these resources for smaller deployments to save infrastructure costs.

  • Stats workers: The default memory setting is 1 GiB. Processing statistics for approximately 40,000 processes requires about 40 CPU cores (hyperthreads).

  • Worker autoscaling: To prevent the HPA from resetting during redeployments, set workers.stats.autoscaling.minReplicas to match your calculated baseline usage.
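These tuning parameters are typically applied as Helm value overrides. The fragment below is an illustrative sketch only; the exact key names depend on the chart version, so verify them against the chart's default values.yaml:

```yaml
# Illustrative overrides only -- verify key names against your chart version.
workers:
  stats:
    autoscaling:
      minReplicas: 8      # set to your calculated baseline so HPA survives redeployments
vmcluster:
  vmstorage:
    replicaCount: 2       # shard count; reduce for smaller deployments
    resources:
      requests:
        cpu: "4"
        memory: 8Gi
```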

Prerequisites

Before installing the Local WEKA Home, ensure the environment meets the following requirements.

Storage

A CSI (Container Storage Interface) driver is required. The storage class must support sharing or moving volumes between nodes, such as the WEKA CSI driver or Amazon EBS.

VictoriaMetrics Operator

The VictoriaMetrics Operator must be installed separately, before installing the Local WEKA Home chart.

This separate installation prevents issues during uninstallation, such as Custom Resource (CR) objects becoming stuck, which can occur if the operator is auto-installed as a chart dependency.

The required installation method is Helm. The Operator Lifecycle Manager (OLM) method is not supported (see the VictoriaMetrics Operator note about setting up the chart repository).

Procedure

  1. Run the following Helm command to install the operator. This command:

    • Installs version 0.39.1.

    • Creates and uses the victoria-metrics namespace.

    • Names the release vmo.
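A command along these lines performs the installation described above. The repository alias and chart name follow the public VictoriaMetrics Helm charts; confirm them against the official VictoriaMetrics Operator documentation before running:

```shell
# Add the VictoriaMetrics Helm charts repository and refresh the index.
helm repo add vm https://victoriametrics.github.io/helm-charts
helm repo update

# Install operator version 0.39.1 as release "vmo" in the
# victoria-metrics namespace, creating the namespace if needed.
helm install vmo vm/victoria-metrics-operator \
  --version 0.39.1 \
  --namespace victoria-metrics \
  --create-namespace
```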

Related information

VictoriaMetrics Operator official documentation

Deployment workflow

  1. Configure Helm values: Create a values.yaml file to customize your WEKA Home deployment.

  2. Install the LWH: Follow one of the methods for deploying LWH on a Kubernetes environment: standard Helm installation or ArgoCD integration.

  3. Configure networking and access: Set up ingress or gateway service access.

Configure Helm values

Create a values.yaml file to customize your WEKA Home deployment. This file overrides the chart's default settings.

The following example highlights common adjustments, particularly for specifying a WEKA storage class for persistent volumes and using nodeSelector to schedule pods onto specific nodes (such as those running WEKA clients).

Example values.yaml
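As a sketch of what such a file might contain (the key names are illustrative and depend on the chart version; check the chart's default values.yaml for the authoritative structure):

```yaml
# Illustrative values.yaml sketch -- verify keys against the chart defaults.
global:
  storageClassName: weka-csi        # persistent volumes backed by the WEKA CSI driver
nodeSelector:
  node-role/weka-client: "true"     # schedule pods onto nodes running WEKA clients
```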

Configure gateway TLS (Optional)

If you enable TLS for the gateway (gateway.tls: true), you must manually create a Kubernetes secret containing your certificate and private key before installing the chart. The gateway.secretName value in your values.yaml must match the name of this secret.

Example TLS secret manifest

Ensure the cert.pem and key.pem data fields contain your Base64-encoded certificate and key content.
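A minimal manifest for such a secret might look like the following template; the secret name is an example and must match gateway.secretName in your values.yaml:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: lwh-gateway-tls            # example name; must match gateway.secretName
type: Opaque
data:
  cert.pem: <base64-encoded certificate>
  key.pem: <base64-encoded private key>
```

You can populate the data fields with, for example, `base64 -w0 cert.pem`, then apply the manifest with `kubectl apply -f` before installing the chart.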

Install the LWH

You can deploy LWH on a Kubernetes environment using two primary methods: standard Helm installation or ArgoCD integration. Each method differs in setup complexity, ingress handling, and lifecycle management.

Feature | Standard Helm installation | ArgoCD integration
Method | Direct installation using Helm commands. | Integration with an ArgoCD application.
Requirements | Standard Helm CLI. | LWH v4.1.0-b40 or higher.
Configuration | Straightforward deployment. | Requires special handling for Helm hooks, secrets, and job lifecycle.
Secrets | Auto-generated during deployment. | Requires manual pre-creation of secrets.
Recommendation | Recommended for most standard deployments. | Suitable for environments managing applications using GitOps with ArgoCD.

Install the LWH using Helm

Use this procedure for a standard deployment of LWH using Helm commands.

The LWH Helm chart is publicly available on GitHub. The documentation on GitHub reflects the latest build. For a specific version, download the required values.yaml file directly.

Procedure

  1. Add the WEKA Home Helm repository:

  2. Run the Helm upgrade command to install or update the chart. Specify your namespace, the chart version, and the path to your customized values.yaml file.
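The two steps above look roughly as follows. The repository URL, alias, and chart name shown here are placeholders; take the exact values from the chart's documentation on GitHub:

```shell
# Step 1: add the WEKA Home Helm repository (URL is a placeholder).
helm repo add wekahome <weka-home-helm-repo-url>
helm repo update

# Step 2: install or update the release with your customized values.
helm upgrade --install wekahome wekahome/wekahome \
  --namespace weka-home --create-namespace \
  --version <chart-version> \
  --values ./values.yaml
```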

Integrate the LWH with ArgoCD

Use this procedure to deploy LWH using ArgoCD.

ArgoCD integration requires version v4.1.0-b40 or higher. This method requires specific configuration adjustments because ArgoCD handles Helm charts differently than a standard Helm installation.

  • Helm hooks and jobs: ArgoCD uses alternative hook annotations. Job TTL (Time-To-Live) requires special handling to avoid conflicts.

  • Secrets: ArgoCD does not support the Helm lookup function. You must manually create all required secrets before deployment.

  • Ingress: Ingress updates in ArgoCD can be slow. If you use a gateway service instead of ingress, disable the ingress resource to improve update speeds.

  • Dashboards: LWH dashboards (starting from v4.1.0-b40) include an annotation (argocd.argoproj.io/sync-options: Replace=true) to manage ConfigMap size limits.

Procedure

  1. Configure Helm values for ArgoCD. In your values.yaml file, set the following parameters:

    • Set generateSecrets: false at the top level.

    • To prevent conflicts with ArgoCD's job management, set the TTL for migration jobs:

    • (Optional) If you use a gateway service and not ingress, disable ingress creation:
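Taken together, those values.yaml adjustments might look like this sketch (the TTL and ingress key names are illustrative; confirm them against your chart version):

```yaml
generateSecrets: false            # secrets are pre-created manually (step 2)
migration:
  ttlSecondsAfterFinished: 3600   # illustrative key: avoids conflicts with ArgoCD job management
ingress:
  enabled: false                  # optional: only when using a gateway service instead of ingress
```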

  2. Pre-create required secrets. Because ArgoCD does not support the Helm lookup function, you must create the secrets manually.

    You can use the following script as a template. Update the NAMESPACE and ARGO_APP_NAME variables to match your environment.

Creating secrets script template
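A sketch of such a script follows. The secret name, keys, and namespace are hypothetical and must be adapted to the secrets your chart version expects:

```shell
#!/usr/bin/env bash
# Sketch only: pre-creates secrets that Helm's lookup function would
# otherwise generate. Secret names and keys are hypothetical.
set -euo pipefail

NAMESPACE="weka-home"        # adjust to your environment
ARGO_APP_NAME="wekahome"     # adjust to your ArgoCD application name

random_value() { openssl rand -hex 16; }

# Ensure the target namespace exists.
kubectl get namespace "${NAMESPACE}" >/dev/null 2>&1 || \
  kubectl create namespace "${NAMESPACE}"

# Create (or update) the generated-secrets object idempotently.
kubectl -n "${NAMESPACE}" create secret generic "${ARGO_APP_NAME}-generated-secrets" \
  --from-literal=postgres-password="$(random_value)" \
  --from-literal=grafana-admin-password="$(random_value)" \
  --dry-run=client -o yaml | kubectl apply -f -
```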
  3. Deploy the application. Deploy the LWH Helm chart using your standard ArgoCD application definition. Ensure it references the values.yaml file you configured and uses the pre-created secrets.

Argo end-to-end example

Configure networking and access

Review the recommended methods for configuring network access to the LWH.

While WEKA Home supports various ingress controllers (such as ALB, Nginx, and Traefik), the simplest approaches are:

  • Use an Ingress Controller: Wrap the gateway service with your cluster's standard ingress configuration, such as a VirtualService if you use Istio.

  • Use a NodePort: Configure the service type as NodePort. This method is ideal for dedicated nodes that do not require an external load balancer.
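For the NodePort approach, the Helm override might look like this (the key path is illustrative; verify it against the chart's gateway service configuration):

```yaml
gateway:
  service:
    type: NodePort
    nodePort: 30080   # example port in the default NodePort range (30000-32767)
```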

Upgrade Local WEKA Home

Use this procedure to upgrade an existing LWH deployment to a new version using Helm.

Before you begin

  • Ensure you have the path to your customized values.yaml file.

  • Identify the new chart version you want to upgrade to.

Procedure

  1. Update your local Helm repository to fetch the latest chart versions:

  2. Run the helm upgrade command.

    • This command uses --install to upgrade the existing wekahome release.

    • Replace <new-version> with the specific chart version you are upgrading to.

    • Ensure the --namespace and --values flags point to your existing deployment's configuration.
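Combined, the upgrade steps look roughly like this; the repository alias and chart name are placeholders and should match however the chart was originally installed:

```shell
# Refresh the local chart index.
helm repo update

# Upgrade (or install) the existing wekahome release.
helm upgrade --install wekahome wekahome/wekahome \
  --version <new-version> \
  --namespace weka-home \
  --values ./values.yaml
```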
