Allocating Resources for Wallarm Node

The amount of memory allocated to the filtering node determines the quality and speed of request processing. These instructions provide recommendations for allocating filtering node memory.

In a filtering node there are two main memory consumers:

  • Tarantool, also called the postanalytics module. This is the local data analytics backend and the primary memory consumer in a filtering node.

  • NGINX, the main filtering node and reverse proxy component.

Tarantool

Postanalytics uses Tarantool, an in-memory storage. The Tarantool database keeps, in a circular buffer, a local copy of the data stream processed by the filtering node, including request/response headers and request bodies (but not response bodies).

To make a filtering node efficient, the database should keep at least 15 minutes of transmitted data, with about a 2x overhead for data serialization. Based on this, the required amount of memory can be estimated with the following formula:

Amount of request data processed per minute (in bytes) * 15 * 2

For example, if a filtering node handles a peak of 50 Mbps of end-user requests, the required Tarantool database memory can be estimated as follows:

50 Mbps / 8 (bits in a byte) * 60 (seconds in a minute) * 15 * 2 = 11,250 MB (or ~ 11 GB)
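As a quick sanity check, the same estimate can be reproduced from the command line. The snippet below is only an illustration; replace the 50 Mbps peak rate with your own figure:

awk 'BEGIN { mbps = 50; printf "Tarantool arena: %.0f MB\n", mbps / 8 * 60 * 15 * 2 }'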

Allocating Resources in Kubernetes Ingress Controller

Tarantool memory is configured for the ingress-controller-wallarm-tarantool pod using the following sections in the values.yaml file:

  • To set up memory in GB:

    controller:
      wallarm:
        tarantool:
          arena: "1.0"
    

  • To set up CPU and memory limits and requests:

    controller:
      wallarm:
        tarantool:
          resources:
            limits:
              cpu: 400m
              memory: 3280Mi
            requests:
              cpu: 200m
              memory: 1640Mi
    

The listed parameters are set using the --set option of the helm install and helm upgrade commands, for example:

helm install --set controller.wallarm.tarantool.arena='1.0' <INGRESS_CONTROLLER_RELEASE_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE>

There are also other parameters required for correct Ingress controller installation. Please pass them in the --set option too.

helm upgrade --reuse-values --set controller.wallarm.tarantool.arena='0.4' <INGRESS_CONTROLLER_RELEASE_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE>
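
To verify that the new values have been applied, you can inspect the Tarantool pod. <TARANTOOL_POD_NAME> below is a placeholder for the ingress-controller-wallarm-tarantool pod name shown by the first command:

kubectl -n <KUBERNETES_NAMESPACE> get pods
kubectl -n <KUBERNETES_NAMESPACE> describe pod <TARANTOOL_POD_NAME>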

Allocating Resources in Other Deployment Options

The sizing of Tarantool memory is controlled using the SLAB_ALLOC_ARENA attribute in the /etc/default/wallarm-tarantool configuration file. To allocate memory:

  1. Open the Tarantool configuration file for editing:

On Debian or Ubuntu:
sudo vim /etc/default/wallarm-tarantool

On CentOS or other RHEL-based distributions:
sudo vim /etc/sysconfig/wallarm-tarantool
  2. Set the SLAB_ALLOC_ARENA attribute to the required memory size in GB. The value can be an integer or a float (a dot . is the decimal separator). For example:
SLAB_ALLOC_ARENA=10.4
  3. Restart Tarantool:
sudo systemctl restart wallarm-tarantool

On systems without systemd, use:
sudo service wallarm-tarantool restart
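
After the restart, you can check that the service is running and that the new value is in place. The example below uses the Debian-style path; adjust it for your distribution:

grep SLAB_ALLOC_ARENA /etc/default/wallarm-tarantool
sudo systemctl status wallarm-tarantool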

To see how long a Tarantool instance can keep traffic details at the current filtering node load, use the wallarm-tarantool/gauge-timeframe_size monitoring metric.

NGINX

NGINX memory consumption depends on many factors. On average, it can be estimated as follows:

Number of concurrent requests * Average request size * 3

For example:

  • the filtering node is processing 10,000 concurrent requests at peak,

  • the average request size is 5 kB.

The NGINX memory consumption can be estimated as follows:

10000 * 5 kB * 3 = 150000 kB (or ~150 MB)
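
The same arithmetic can be scripted if you want to plug in your own numbers; the values below are just the ones from the example above:

awk 'BEGIN { concurrent = 10000; req_kb = 5; printf "NGINX memory: %.0f kB\n", concurrent * req_kb * 3 }'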

To allocate the amount of memory:

  • for the NGINX Ingress controller pod (ingress-controller), configure the following sections in the values.yaml file by using the --set option of helm install or helm upgrade:

    controller:
      resources:
        limits:
          cpu: 1000m
          memory: 1640Mi
        requests:
          cpu: 1000m
          memory: 1640Mi
    

    Example of commands changing the parameters:

    helm install --set controller.resources.limits.cpu='2000m',controller.resources.limits.memory='3280Mi' <INGRESS_CONTROLLER_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE>
    

    There are also other parameters required for correct Ingress controller installation. Please pass them in the --set option too.

    helm upgrade --reuse-values --set controller.resources.limits.cpu='2000m',controller.resources.limits.memory='3280Mi' <INGRESS_CONTROLLER_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE>
    
  • for other deployment options, use the NGINX configuration files. A generic way to check actual NGINX memory usage on a host is shown after this list.
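
For host-based deployments, one way to see how much memory the NGINX worker processes actually consume (and therefore whether the estimate above is adequate) is to sum their resident set sizes. This is a generic Linux check, not a Wallarm-specific command:

ps -C nginx -o rss= | awk '{ sum += $1 } END { print sum " kB" }'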

Recommendations from the CPU utilization perspective

When running in production mode, it is recommended to allocate at least one CPU core for the NGINX process and one core for the Tarantool process.

Actual NGINX CPU utilization depends on many factors, such as the RPS level, the average request and response size, the number of LOM rules handled by the node, and the types and layers of data encodings employed (for example, Base64 or data compression). On average, one CPU core can handle about 500 RPS. In most cases it is recommended to initially over-provision a filtering node, observe the actual CPU and memory usage under real production traffic, and then gradually reduce the allocated resources to a reasonable level (keeping at least 2x headroom for traffic spikes and node redundancy).
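
To follow the over-provision-then-reduce approach, observe actual consumption under real production traffic before cutting resources. For Kubernetes deployments this can be done via the metrics API (requires metrics-server); the command below is a generic example:

kubectl -n <KUBERNETES_NAMESPACE> top pod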