Allocating Resources for Wallarm NGINX Node¶
The amount of memory and CPU resources allocated to the Wallarm NGINX node determines the quality and speed of request processing. These instructions provide recommendations for allocating memory and CPU to a self-hosted NGINX node.
In an NGINX filtering node, there are two main memory and CPU consumers:

- wstore, also called the postanalytics module. This is the local data analytics backend and the primary memory consumer in a filtering node.
- NGINX, the main filtering node and reverse proxy component.
NGINX CPU utilization depends on many factors: the RPS level, the average request and response size, the number of custom ruleset rules handled by the node, the types and layers of employed data encodings (such as Base64) and data compression, and so on.
On average, one CPU core can handle about 500 RPS. When running in production mode, it is recommended to allocate at least 1 CPU core for the NGINX process and 1 core for the wstore process. In the majority of cases it is recommended to initially over-provision a filtering node, see the actual CPU and memory usage for real production traffic levels, and gradually reduce allocated resources to a reasonable level (with at least 2x headroom for traffic spikes and node redundancy).
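For example, suppose a peak of 2,000 RPS (an illustrative figure): at roughly 500 RPS per core, NGINX needs about 4 cores, which doubles to 8 cores with 2x headroom; adding at least 1 core for wstore gives an initial allocation of about 9 cores.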
wstore¶
Postanalytics uses the in-memory storage wstore. The wstore database keeps, in a circular buffer, a local copy of the data stream processed by a filtering node, including request/response headers and request bodies (but not response bodies).
To make a filtering node efficient, the database should keep at least 15 minutes of transmitted data with about 2x overhead for data serialization. Following these points, the amount of memory can be estimated by the formula:
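The following is a sketch of that formula, reconstructed from the 15-minute retention window and the 2x serialization overhead stated above:

```
Memory (Mbit) = 15 min × 60 s/min × [request stream bandwidth, Mbps] × 2
```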
For example, if a filtering node is handling at peak 50 Mbps of end user requests, the required wstore database memory consumption can be estimated as the following:
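Working that through with the reconstructed formula (and converting bits to bytes):

```
15 × 60 × 50 Mbps × 2 = 90,000 Mbit = 11,250 MB ≈ 11.25 GB
```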
Allocating resources in Kubernetes Ingress Controller¶
wstore memory and CPU are configured using the following sections in the values.yaml file (a sketch of both follows this list):

- To set up memory in GB
- To set up CPU
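A minimal values.yaml sketch combining both settings. The arena parameter matches the helm example below; the resources block under controller.wallarm.postanalytics is an assumed location for the CPU settings and may differ between chart versions:

```
controller:
  wallarm:
    postanalytics:
      arena: "1.0"        # wstore memory, in GB
      resources:          # assumed location for CPU requests/limits
        requests:
          cpu: 1000m
        limits:
          cpu: 2000m
```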
The listed parameters are set by using the --set option of the helm install and helm upgrade commands, for example:
```
helm install --set controller.wallarm.postanalytics.arena='1.0' <INGRESS_CONTROLLER_RELEASE_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE>
```
There are also other parameters required for correct Ingress controller installation. Please pass them in the --set option too.
Allocating resources if using All-in-One installer¶
The sizing of wstore memory is controlled using the SLAB_ALLOC_ARENA attribute in the /opt/wallarm/env.list configuration file. To allocate memory:
- Open the /opt/wallarm/env.list file for editing.
- Set the SLAB_ALLOC_ARENA attribute to the memory size. The value can be an integer or a float (a dot . is the decimal separator).
- Restart the Wallarm services (a shell sketch of all three steps follows this list).
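A minimal shell sketch of the steps above. The SLAB_ALLOC_ARENA value of 2.0 is illustrative, and the wallarm service name in the restart command is an assumption that may differ per installation:

```
# 1. Open the configuration file (any editor works)
sudo vim /opt/wallarm/env.list

# 2. Inside env.list, set the arena size in GB, e.g.:
#    SLAB_ALLOC_ARENA=2.0

# 3. Restart the Wallarm services (service name is assumed)
sudo systemctl restart wallarm
```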
Allocating resources if using NGINX-based Docker image¶
The sizing of wstore memory is controlled using the SLAB_ALLOC_ARENA environment variable, which is passed either in the docker run command or in a mounted configuration file.
Example:
```
docker run -d -e WALLARM_API_TOKEN='XXXX' -e WALLARM_LABELS='group=<GROUP NAME>' -e NGINX_BACKEND='example.com' -e SLAB_ALLOC_ARENA=3.0 -p 80:80 wallarm/node:6.6.0
```
Note that when SLAB_ALLOC_ARENA is passed in the docker run command with the -e option, as in the example above, the variable is not recorded in any configuration file within the container, but it is still applied when wstore starts.
The applied value can be checked in the wcli-out.log filtering node log by searching for the Setting up memory params line.
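For instance, assuming the container name wallarm-node and the default log location /opt/wallarm/var/log/wallarm/wcli-out.log (both the name and the path are assumptions and may differ per image version):

```
docker exec wallarm-node grep 'Setting up memory params' /opt/wallarm/var/log/wallarm/wcli-out.log
```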
Allocating resources if using Amazon Machine Image¶
- The Wallarm node automatically distributes allocated resources between wstore and NGINX.
- When launching a Wallarm node instance from the Wallarm NGINX Node AMI, we recommend using the t3.medium instance type for testing and m4.xlarge for production.
NGINX¶
NGINX memory consumption depends on many factors. On average it can be estimated as the following:
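The estimate below is a reconstruction consistent with the worked example that follows; the 3x multiplier, covering connection handling and buffering overhead, is an assumption:

```
NGINX memory ≈ [number of concurrent requests] × [average request size] × 3
```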
For example:
- the filtering node is processing at peak 10,000 concurrent requests,
- the average request size is 5 kB.
The NGINX memory consumption can be estimated as follows:
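Plugging the example numbers into the assumed formula above:

```
10,000 × 5 kB × 3 = 150,000 kB ≈ 150 MB
```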
To allocate the required amount of memory:
- For the NGINX Ingress controller pod (ingress-controller), configure the controller.resources sections in the values.yaml file by using the --set option of helm install or helm upgrade. Example of commands changing the parameters:

```
helm install --set controller.resources.limits.cpu='2000m',controller.resources.limits.memory='3280Mi' <INGRESS_CONTROLLER_RELEASE_NAME> wallarm/wallarm-ingress -n <KUBERNETES_NAMESPACE>
```

There are also other parameters required for correct Ingress controller installation. Please pass them in the --set option too.

- For other deployment options, use the NGINX configuration files (a sketch follows this list).
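As a minimal sketch of what such NGINX configuration might look like, the directives below bound the worker and buffer footprint; all values are illustrative assumptions to be tuned against measured traffic:

```
# /etc/nginx/nginx.conf (illustrative values, not Wallarm defaults)
worker_processes auto;            # one worker per available CPU core

events {
    worker_connections 4096;      # cap on concurrent connections per worker
}

http {
    client_body_buffer_size 16k;  # in-memory buffer per request body
    client_max_body_size 8m;      # reject request bodies larger than this
}
```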
Troubleshooting¶
If a Wallarm node consumes more memory or CPU than expected, follow the recommendations from the CPU high usage troubleshooting article to reduce resource usage.