
Installing with Kong


Requirements for the Kong platform:

One of the following is required for Kong to operate properly:

  • prepared configuration files,
  • a configured database.

Please make sure that the installed Kong meets the prerequisites before proceeding with Wallarm installation.

Refer to the official Kong documentation for details.

Known Limitations


Installation of postanalytics on a separate server

If you are planning to install postanalytics on a separate server, you must install postanalytics first.

See details in Separate postanalytics module installation.

To install the Wallarm module with Kong, you need to:

  1. Add Wallarm repositories.

  2. Install Wallarm packages.

  3. Configure postanalytics.

  4. Set up the filter node for using a proxy server.

  5. Connect the filter node to the Wallarm cloud.

  6. Configure the postanalytics server addresses.

  7. Configure the filtration mode.

  8. Configure logging.


  • Prior to taking any steps listed below, either disable or configure SELinux if it is installed on the operating system.
  • Make sure that you execute all commands below as superuser (e.g. root).

If Wallarm node is already installed in your environment

If you are installing a Wallarm node in place of an existing one, or duplicating an installation in the same environment, keep the same node version as the one currently used, or update all installations to the latest version. For separately installed postanalytics, the version of a substitute or duplicate installation must also match the already installed postanalytics.

To check the versions of the filtering node and postanalytics installed on the same server:

# Debian/Ubuntu
apt list wallarm-node
# CentOS
yum list wallarm-node

To check the versions of filtering node and postanalytics installed on different servers:

# Debian/Ubuntu: run from the server with the Wallarm filtering node installed
apt list wallarm-node-nginx
# Debian/Ubuntu: run from the server with postanalytics installed
apt list wallarm-node-tarantool
# CentOS: run from the server with the Wallarm filtering node installed
yum list wallarm-node-nginx
# CentOS: run from the server with postanalytics installed
yum list wallarm-node-tarantool

More information about Wallarm node versioning is available in the Wallarm node versioning policy.

1. Add Wallarm Repositories

The filter node installs and updates from the Wallarm repositories.

Depending on your operating system, run one of the following sets of commands:

# Debian/Ubuntu
curl -fsSL <Wallarm repository key URL> | sudo apt-key add -
sh -c "echo 'deb <Wallarm repository URL> bionic/3.4/' | sudo tee /etc/apt/sources.list.d/wallarm.list"
sudo apt update

# CentOS
sudo yum install -y epel-release
sudo rpm -i <Wallarm repository package URL>

Repository access

Your system must have access to the Wallarm repositories to download the packages.

Ensure the access is not blocked by a firewall.

2. Install Wallarm Packages

To install the filter node and postanalytics on the same server, run the command for your OS:

# Debian/Ubuntu
sudo apt install --no-install-recommends wallarm-node kong-module-wallarm
# CentOS
sudo yum install wallarm-node kong-module-wallarm

To install the filter node alone, run the command for your OS:

# Debian/Ubuntu
sudo apt install --no-install-recommends wallarm-node-nginx kong-module-wallarm
# CentOS
sudo yum install wallarm-node-nginx kong-module-wallarm

3. Configure Postanalytics


Skip this step if you installed postanalytics on a separate server as you already have your postanalytics configured.

The amount of memory allocated to postanalytics determines the quality of operation of its statistical algorithms.

For production environments, the recommended amount of RAM allocated for the postanalytics module is 75% of the total server memory. For example, if the server has 32 GB of memory, the recommended allocation is 24 GB. If you are testing the Wallarm node or the server is small, a lower amount can be enough (e.g. 25% of the total memory).
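The 75% rule above can be sketched as a small shell helper (hypothetical, not part of the Wallarm packages) that suggests a SLAB_ALLOC_ARENA value from the server's total memory in GB:

```shell
# Hypothetical helper: suggest a SLAB_ALLOC_ARENA value (in GB)
# as 75% of the total server memory, per the recommendation above.
suggest_arena() {
  total_gb="$1"
  # integer arithmetic: 75% of the total memory
  echo $(( total_gb * 3 / 4 ))
}

suggest_arena 32   # prints 24 (a 32 GB server)
suggest_arena 8    # prints 6
```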

Allocate the operating memory size for Tarantool:

Open the Tarantool configuration file for editing:

# Debian/Ubuntu
sudo vim /etc/default/wallarm-tarantool
# CentOS
sudo vim /etc/sysconfig/wallarm-tarantool

Set the allocated memory size in the configuration file of Tarantool via the SLAB_ALLOC_ARENA directive. The value can be an integer or a float (a dot . is a decimal separator).

For example, to allocate 24 GB:

SLAB_ALLOC_ARENA=24

Restart Tarantool:

# systems with sysvinit
sudo service wallarm-tarantool restart
# systems with systemd
sudo systemctl restart wallarm-tarantool

4. Set up the Filter Node for Using a Proxy Server


This setup step is intended for users who use their own proxy server for the operation of the protected web applications.

If you do not use a proxy server, skip this step of the setup.

To configure the Wallarm node to use your proxy server, assign new values to the environment variables that define the proxy server.

Add new values of the environment variables to the /etc/environment file:

  • Add https_proxy to define a proxy for the https protocol.

  • Add http_proxy to define a proxy for the http protocol.

  • Add no_proxy to define the list of the resources proxy should not be used for.

Assign the <scheme>://<proxy_user>:<proxy_pass>@<host>:<port> string values to the https_proxy and http_proxy variables.

  • <scheme> defines the protocol used. It should match the protocol that the current environment variable sets up proxy for.

  • <proxy_user> defines the username for proxy authorization.

  • <proxy_pass> defines the password for proxy authorization.

  • <host> defines a host of the proxy server.

  • <port> defines a port of the proxy server.

To define the list of resources for which the proxy should not be used, assign the no_proxy variable a value of the form "<res_1>, <res_2>, <res_3>, ...", where each <res_N> is an IP address or a domain.
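Putting the parts together, a proxy value of the <scheme>://<proxy_user>:<proxy_pass>@<host>:<port> form can be assembled like this (all concrete values here — user, password, host, port — are hypothetical examples):

```shell
# Hypothetical example values; substitute your own proxy details.
make_proxy_url() {
  # $1=scheme $2=user $3=password $4=host $5=port
  printf '%s://%s:%s@%s:%s' "$1" "$2" "$3" "$4" "$5"
}

http_proxy="$(make_proxy_url http admin 01234 proxy.example.local 3128)"
echo "$http_proxy"   # prints http://admin:01234@proxy.example.local:3128
```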

Resources that need to be addressed without a proxy

Add the following IP addresses and domain to the list of the resources that have to be addressed without a proxy for the system to operate correctly:,,, and localhost.
The and IP addresses are used for the operation of the Wallarm filter node.

The example of the correct /etc/environment file contents below demonstrates the following configuration:

  • HTTPS and HTTP requests are proxied to the host with the 1234 port, using the admin username and the 01234 password for authorization on the proxy server.

  • Proxying is disabled for the requests sent to,,, and localhost.

https_proxy=https://admin:01234@<proxy server host>:1234
http_proxy=http://admin:01234@<proxy server host>:1234
no_proxy=",,, localhost"

5. Connect the Filter Node to the Wallarm Cloud

API Access

The API endpoint for your filter node depends on the Cloud you are using; select the endpoint accordingly.

Ensure the access is not blocked by a firewall.

The filter node interacts with the Wallarm cloud.

To connect the node to the cloud using your cloud account credentials, proceed with the following steps:

  1. Make sure that your Wallarm account has the Administrator or Deploy role enabled and two-factor authentication disabled, so that you can connect a filter node to the cloud.

    You can check these parameters in the user account list in the Wallarm Console.

    User list in Wallarm console

  2. Run the addnode script in a system with the filter node:


    Pick the script invocation depending on the Cloud you are using:

    # EU Cloud
    sudo /usr/share/wallarm-common/addnode
    # US Cloud (pass the US API host via the -H option)
    sudo /usr/share/wallarm-common/addnode -H <US API host>

    To specify the name of the created node, use the -n <node name> option. Also, the node name can be changed in Wallarm Console → Nodes.

  3. Provide your Wallarm account’s login and password when prompted.

6. Configure the Postanalytics Server Addresses


  • Skip this step if you installed postanalytics and the filter node on the same server.
  • Do this step if you installed postanalytics and the filter node on separate servers.

Add the server address of postanalytics to /etc/kong/nginx-wallarm.template:

upstream wallarm_tarantool {
    server <ip1>:3313 max_fails=0 fail_timeout=0 max_conns=1;
    server <ip2>:3313 max_fails=0 fail_timeout=0 max_conns=1;

    keepalive 2;
}

# omitted

wallarm_tarantool_upstream wallarm_tarantool;

Required conditions

The following conditions must be satisfied for the max_conns and keepalive parameters:

  • The value of the keepalive parameter must not be lower than the number of Tarantool servers.
  • The value of the max_conns parameter must be specified for each of the upstream Tarantool servers to prevent the creation of excessive connections.
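For example, with three postanalytics servers these conditions imply a keepalive of at least 3 and max_conns set on every server line (the IP addresses are placeholders):

```nginx
upstream wallarm_tarantool {
    server <ip1>:3313 max_fails=0 fail_timeout=0 max_conns=1;
    server <ip2>:3313 max_fails=0 fail_timeout=0 max_conns=1;
    server <ip3>:3313 max_fails=0 fail_timeout=0 max_conns=1;

    keepalive 3;
}
```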

7. Set up the Filtration Mode

The filtering and proxying rules are configured in the /etc/kong/nginx-wallarm.template file.

To see detailed information about working with NGINX configuration files, proceed to the official NGINX documentation.

Wallarm directives define the operation logic of the Wallarm filter node. To see the list of Wallarm directives available, proceed to the Wallarm configuration options page.

A Configuration File Example

Let us suppose that you need to configure the server to work in the following conditions:

  • Only HTTP traffic is processed. There are no HTTPS requests processed.

  • The following domains receive the requests: <domain 1> and <domain 2>.

  • All requests must be passed to the server at <server address>.

  • All incoming requests are considered less than 1MB in size (default setting).

  • The processing of a request takes no more than 60 seconds (default setting).

  • Wallarm must operate in the monitor mode.

  • Clients access the filter node directly, without an intermediate HTTP load balancer.

To meet the listed conditions, the contents of the configuration file must be the following:

    server {
      listen 80;
      listen [::]:80 ipv6only=on;

      # the domains for which traffic is processed
      server_name <domain 1> <domain 2>;

      # turn on the monitoring mode of traffic processing
      wallarm_mode monitoring;
      # wallarm_instance 1;

      location / {
        # setting the address for request forwarding
        proxy_pass http://<server address>;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }

8. Configure Logging

Configure logging of the filter node variables via NGINX. This allows you to perform quick filter node diagnostics using the NGINX log file.
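As a minimal sketch using only standard NGINX variables (the Wallarm-specific variables worth logging depend on your node version — check the Wallarm configuration options page), a dedicated log format could look like:

```nginx
http {
    # illustrative log format; extend it with the Wallarm variables you need
    log_format wallarm_diag '$remote_addr "$request" $status '
                            'upstream_time=$upstream_response_time request_time=$request_time';

    access_log /var/log/nginx/wallarm-diag.log wallarm_diag;
}
```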

Start Kong

To start Kong with the installed Wallarm module, run the command:

kong start --nginx-conf /etc/kong/nginx-wallarm.template

The Installation Is Complete

Check that the filter node runs and filters the traffic. See Check the filter node operation.

Default Settings

A freshly installed filter node operates in blocking mode (see the wallarm_mode directive description) by default.

This may result in an inoperable Wallarm scanner. If you plan to use the scanner, you need to perform additional actions to render the scanner operational.

Additional Settings

The filter node may require some additional configuration after installation.

The document below lists a few of the typical setups that you can apply if needed.

To get more information about other available settings, proceed to the “Configuration” section of the Administrator’s Guide.

Configuring the Display of the Client's Real IP

If the filter node is deployed behind a proxy server or load balancer without any additional configuration, the request source address may not be equal to the actual IP address of the client. Instead, it may be equal to one of the IP addresses of the proxy server or the load balancer.

In this case, if you want the filter node to receive the client's IP address as a request source address, you need to perform an additional configuration of the proxy server or the load balancer.
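With NGINX, this is typically done with the realip module directives; a sketch assuming the load balancer's address is and that it passes the client address in the X-Real-IP header:

```nginx
# trust the X-Real-IP header only when the request comes
# from the load balancer's address (assumed here)
real_ip_header X-Real-IP;
```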

Limiting the Single Request Processing Time

Use the wallarm_process_time_limit Wallarm directive to specify the limit of the duration for processing a single request by the filter node.

If processing the request consumes more time than specified in the directive, then the information on the error is entered into the log file and the request is marked as an overlimit_res attack.

Limiting the Server Reply Waiting Time

Use the proxy_read_timeout NGINX directive to specify the timeout for reading the proxy server reply.

If the server sends nothing during this time, the connection is closed.

Limiting the Maximum Request Size

Use the client_max_body_size NGINX directive to specify the limit for the maximum size of the body of the client's request.

If this limit is exceeded, NGINX replies to the client with the 413 (Payload Too Large) code, also known as the Request Entity Too Large message.
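A sketch combining the three limits described above (the values are illustrative; check each directive's reference for defaults and units):

```nginx
server {
    # ... omitted ...

    # maximum size of the client request body; 413 is returned when exceeded
    client_max_body_size 1m;

    # close the connection if the proxied server sends nothing for 60 seconds
    proxy_read_timeout 60s;

    # mark requests that take longer than this to process as overlimit_res
    wallarm_process_time_limit 1000;
}
```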