Installing dynamic Wallarm module for NGINX stable from NGINX repository¶
These instructions describe the steps to install the Wallarm filtering node as a dynamic module for the open source version of NGINX stable installed from the NGINX repository.
Requirements¶
- Access to the account with the Administrator role in Wallarm Console for the US Cloud or EU Cloud
- SELinux disabled or configured according to the instructions
- NGINX version 1.22.1

    Custom NGINX versions: if you have a different version, see how to connect the Wallarm module to a custom build of NGINX

- Executing all commands as a superuser (e.g. root)
- For request processing and postanalytics on different servers: postanalytics installed on the separate server according to the instructions
- Access to https://repo.wallarm.com to download packages. Ensure the access is not blocked by a firewall
- Access to https://us1.api.wallarm.com for working with the US Wallarm Cloud or to https://api.wallarm.com for working with the EU Wallarm Cloud. If access can be configured only via a proxy server, use the instructions
- Access to GCP storage addresses to download an actual list of IP addresses registered in allowlisted, denylisted, or graylisted countries, regions or data centers
- An installed text editor: vim, nano, or any other. In these instructions, vim is used
Installation options¶
The processing of requests in the Wallarm node is divided into two stages:
- Primary processing in the NGINX-Wallarm module. The processing is not memory demanding and can be put on frontend servers without changing the server requirements.
- Statistical analysis of the processed requests in the postanalytics module. Postanalytics is memory demanding, which may require changes in the server configuration or installation of postanalytics on a separate server.
Depending on the system architecture, the NGINX-Wallarm and postanalytics modules can be installed on the same server or on different servers. Installation commands for both options are provided in the instructions below.
Installation¶
1. Install NGINX stable and dependencies¶
There are the following options to install NGINX stable from the NGINX repository:
- Installation from the built package.

    Debian:

    ```
    sudo apt -y install curl gnupg2 ca-certificates lsb-release debian-archive-keyring
    echo "deb http://nginx.org/packages/debian `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
    curl -fSsL https://nginx.org/keys/nginx_signing.key | sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/nginx.gpg --import
    sudo chmod 644 /etc/apt/trusted.gpg.d/nginx.gpg
    sudo apt update
    sudo apt -y install nginx
    ```
    Ubuntu:

    - Install the dependencies required for NGINX stable:

        ```
        sudo apt -y install curl gnupg2 ca-certificates lsb-release
        ```

    - Install NGINX stable:

        ```
        echo "deb http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
        curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo apt-key add -
        sudo apt update
        sudo apt -y install nginx
        ```
    CentOS:

    - If an EPEL repository is added in CentOS 7.x, disable installation of NGINX stable from this repository by adding exclude=nginx* to the file /etc/yum.repos.d/epel.repo.

        Example of the changed file /etc/yum.repos.d/epel.repo:

        ```
        [epel]
        name=Extra Packages for Enterprise Linux 7 - $basearch
        #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
        metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
        failovermethod=priority
        enabled=1
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
        exclude=nginx*

        [epel-debuginfo]
        name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
        #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
        metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
        failovermethod=priority
        enabled=0
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
        gpgcheck=1

        [epel-source]
        name=Extra Packages for Enterprise Linux 7 - $basearch - Source
        #baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
        metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
        failovermethod=priority
        enabled=0
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
        gpgcheck=1
        ```

    - Install NGINX stable from the official repository:

        ```
        echo -e '\n[nginx-stable] \nname=nginx stable repo \nbaseurl=http://nginx.org/packages/centos/$releasever/$basearch/ \ngpgcheck=1 \nenabled=1 \ngpgkey=https://nginx.org/keys/nginx_signing.key \nmodule_hotfixes=true' | sudo tee /etc/yum.repos.d/nginx.repo
        sudo yum install -y nginx
        ```
- Compilation of the source code from the stable branch of the NGINX repository and installation with the same options.

    NGINX for AlmaLinux, Rocky Linux or Oracle Linux 8.x: this is the only option to install NGINX on AlmaLinux, Rocky Linux or Oracle Linux 8.x.

More detailed information about installation is available in the official NGINX documentation.
Installing on Amazon Linux 2.0.2021x and lower

To install NGINX stable on Amazon Linux 2.0.2021x and lower, use the CentOS 7 instructions.
2. Add Wallarm repositories¶
Wallarm node is installed and updated from the Wallarm repositories. To add repositories, use the commands for your platform:
Debian 11 (bullseye):

```
sudo apt -y install dirmngr
curl -fSsL https://repo.wallarm.com/wallarm.gpg | sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/wallarm.gpg --import
sudo chmod 644 /etc/apt/trusted.gpg.d/wallarm.gpg
sh -c "echo 'deb http://repo.wallarm.com/debian/wallarm-node bullseye/4.4/' | sudo tee /etc/apt/sources.list.d/wallarm.list"
sudo apt update
```

Ubuntu 18.04 (bionic):

```
curl -fsSL https://repo.wallarm.com/wallarm.gpg | sudo apt-key add -
sh -c "echo 'deb http://repo.wallarm.com/ubuntu/wallarm-node bionic/4.4/' | sudo tee /etc/apt/sources.list.d/wallarm.list"
sudo apt update
```

Ubuntu 20.04 (focal):

```
curl -fsSL https://repo.wallarm.com/wallarm.gpg | sudo apt-key add -
sh -c "echo 'deb http://repo.wallarm.com/ubuntu/wallarm-node focal/4.4/' | sudo tee /etc/apt/sources.list.d/wallarm.list"
sudo apt update
```

Ubuntu 22.04 (jammy):

```
curl -fsSL https://repo.wallarm.com/wallarm.gpg | sudo apt-key add -
sh -c "echo 'deb http://repo.wallarm.com/ubuntu/wallarm-node jammy/4.4/' | sudo tee /etc/apt/sources.list.d/wallarm.list"
sudo apt update
```

CentOS 7.x:

```
sudo yum install -y epel-release
sudo rpm -i https://repo.wallarm.com/centos/wallarm-node/7/4.4/x86_64/wallarm-node-repo-4.4-0.el7.noarch.rpm
```

Amazon Linux 2:

```
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -i https://repo.wallarm.com/centos/wallarm-node/7/4.4/x86_64/wallarm-node-repo-4.4-0.el7.noarch.rpm
```

CentOS 8.x, AlmaLinux, Rocky Linux or Oracle Linux 8.x:

```
sudo yum install -y epel-release
sudo rpm -i https://repo.wallarm.com/centos/wallarm-node/8/4.4/x86_64/wallarm-node-repo-4.4-0.el8.noarch.rpm
```
3. Install Wallarm packages¶
Request processing and postanalytics on the same server¶
To run postanalytics and process the requests on the same server, the following packages are required:
- nginx-module-wallarm for the NGINX-Wallarm module
- wallarm-node for the postanalytics module, Tarantool database, and additional NGINX-Wallarm packages
Debian or Ubuntu:

```
sudo apt -y install --no-install-recommends wallarm-node nginx-module-wallarm
```

CentOS, AlmaLinux, Rocky Linux, Oracle Linux or Amazon Linux 2:

```
sudo yum install -y wallarm-node nginx-module-wallarm
```
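Optionally, you can confirm that the packages were installed before moving on. This quick check is not part of the official steps; the command below is for Debian-based systems (on RHEL-based systems, rpm -qa can be used instead):

```
# List the installed Wallarm-related packages
dpkg -l | grep -E 'wallarm-node|nginx-module-wallarm'
```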
Request processing and postanalytics on different servers¶
To run postanalytics and process the requests on different servers, the following packages are required:
- wallarm-node-nginx and nginx-module-wallarm for the NGINX-Wallarm module:

    Debian or Ubuntu:

    ```
    sudo apt -y install --no-install-recommends wallarm-node-nginx nginx-module-wallarm
    ```

    CentOS, AlmaLinux, Rocky Linux, Oracle Linux or Amazon Linux 2:

    ```
    sudo yum install -y wallarm-node-nginx nginx-module-wallarm
    ```

- wallarm-node-tarantool on the separate server for the postanalytics module and Tarantool database (installation steps are described in the instructions)
4. Connect the Wallarm module¶
- Open the file /etc/nginx/nginx.conf:

    ```
    sudo vim /etc/nginx/nginx.conf
    ```

- Ensure that the include /etc/nginx/conf.d/*; line is added to the file. If there is no such line, add it.
- Add the following directive right after the worker_processes directive:

    ```
    load_module modules/ngx_http_wallarm_module.so;
    ```

    Configuration example with the added directive:

    ```
    user nginx;
    worker_processes auto;

    load_module modules/ngx_http_wallarm_module.so;

    error_log /var/log/nginx/error.log notice;
    pid /var/run/nginx.pid;
    ```

- Copy the configuration files for the system setup (you can then run the optional check shown after this list):

    ```
    sudo cp /usr/share/doc/nginx-module-wallarm/examples/*.conf /etc/nginx/conf.d/
    ```
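Optionally, before proceeding, you can ask NGINX to validate the updated configuration. This is a standard NGINX check rather than a Wallarm-specific step:

```
# Test the NGINX configuration, including the newly loaded Wallarm module
sudo nginx -t
```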
5. Connect the filtering node to Wallarm Cloud¶
The Wallarm node interacts with the Wallarm Cloud. To connect the filtering node to the Cloud:
- If the postanalytics module is installed separately:
    - Copy the node token generated during the separate postanalytics module installation.
    - Proceed to the 5th step in the list below. It is recommended to use one token for the node processing initial traffic and for the node performing postanalysis.
- Make sure that your Wallarm account has the Administrator role enabled in Wallarm Console. You can check these settings by navigating to the users list in the US Cloud or EU Cloud.
- Open Wallarm Console → Nodes in the US Cloud or EU Cloud and create a node of the Wallarm node type.
- Copy the generated token.
- Run the register-node script in the system with the filtering node:

    US Cloud:

    ```
    sudo /usr/share/wallarm-common/register-node -t <NODE_TOKEN> -H us1.api.wallarm.com
    ```

    EU Cloud:

    ```
    sudo /usr/share/wallarm-common/register-node -t <NODE_TOKEN>
    ```

    - <NODE_TOKEN> is the copied token value.
    - You may add the -n <HOST_NAME> parameter to set a custom name for your node instance. The final instance name will be HOST_NAME_NodeUUID (a sketch of such a call is shown after this list).
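For example, a combined call might look like the following. This is an illustrative sketch only: it assumes the US Cloud endpoint, and the instance name my-wallarm-node is a made-up value rather than a required one:

```
# Hypothetical example: register the node in the US Cloud with a custom instance name
sudo /usr/share/wallarm-common/register-node -t <NODE_TOKEN> -H us1.api.wallarm.com -n my-wallarm-node
```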
Using one token for several installations

You can connect several Wallarm nodes to the Cloud using one token regardless of the selected deployment option. This allows logical grouping of node instances in the Wallarm Console UI. Below are some examples of when you may choose to use one token for several installations:

- You deploy several Wallarm nodes to a development environment, each node on its own machine owned by a certain developer
- The nodes for initial traffic processing and the postanalytics module are installed on separate servers; it is recommended to connect these modules to the Wallarm Cloud using the same node token
6. Update Wallarm node configuration¶
The main configuration files of NGINX and the Wallarm filtering node are located in the following directories:
- /etc/nginx/conf.d/default.conf with NGINX settings
- /etc/nginx/conf.d/wallarm.conf with global filtering node settings

    The file is used for settings applied to all domains. To apply different settings to different domain groups, use the file default.conf or create new configuration files for each domain group (for example, example.com.conf and test.com.conf); a sketch of such a file is shown after this list. More detailed information about NGINX configuration files is available in the official NGINX documentation.

- /etc/nginx/conf.d/wallarm-status.conf with Wallarm node monitoring settings. A detailed description is available within the link
- /etc/default/wallarm-tarantool or /etc/sysconfig/wallarm-tarantool with the Tarantool database settings
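For illustration only, a minimal sketch of what a per-domain configuration file could look like; the file name example.com.conf, the domain and the backend address are assumptions, not values from this guide:

```
# /etc/nginx/conf.d/example.com.conf - hypothetical per-domain configuration
server {
    listen 80;
    server_name example.com;

    # Filtering mode applied only to this domain
    wallarm_mode monitoring;

    location / {
        # Assumed backend; replace with your application upstream
        proxy_pass http://127.0.0.1:8080;
    }
}
```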
Request filtration mode¶
By default, the filtering node is in the off status and does not analyze incoming requests. To enable request analysis, follow these steps:
- Open the file /etc/nginx/conf.d/default.conf:

    ```
    sudo vim /etc/nginx/conf.d/default.conf
    ```

- Add the line wallarm_mode monitoring; to the http, server or location block:
Example of the file /etc/nginx/conf.d/default.conf:

```
server {
    # port for which requests are filtered
    listen 80;

    # domain for which requests are filtered
    server_name localhost;

    # Filtering node mode
    wallarm_mode monitoring;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```
When operating in the monitoring mode, the filtering node searches for attack signs in requests but does not block detected attacks. We recommend keeping traffic flowing through the filtering node in the monitoring mode for several days after the filtering node deployment and only then enabling the block mode. Learn recommendations on the filtering node operation mode setup →
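When you are ready to block malicious traffic, switching the mode is a one-line change. The snippet below is a sketch based on the default.conf example above, not an instruction to enable blocking immediately:

```
# /etc/nginx/conf.d/default.conf - switch the filtering node to blocking mode
wallarm_mode block;
```

After changing the mode, reload or restart NGINX (for example, sudo systemctl reload nginx) so the new setting takes effect.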
Memory¶
Postanalytics module on the separate server
If you installed the postanalytics module on a separate server, then skip this step as you already have the module configured.
The Wallarm node uses the in-memory storage Tarantool. For production environments, the recommended amount of RAM allocated for Tarantool is 75% of the total server memory. If you are testing the Wallarm node or running a small instance, a lower amount can be enough (e.g. 25% of the total memory).
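For example, a quick way to estimate this value on the postanalytics server (a sketch, not part of the official steps; it assumes the free utility from procps is available):

```
# Print 75% of the total RAM in GB, to be used as SLAB_ALLOC_ARENA in the next step
free -g | awk '/^Mem:/ {printf "%.1f\n", $2 * 0.75}'
```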
To allocate memory for Tarantool:
- Open the Tarantool configuration file in the editing mode:

    Debian or Ubuntu:

    ```
    sudo vim /etc/default/wallarm-tarantool
    ```

    CentOS, AlmaLinux, Rocky Linux, Oracle Linux or Amazon Linux 2:

    ```
    sudo vim /etc/sysconfig/wallarm-tarantool
    ```
- Specify the memory size in GB in the SLAB_ALLOC_ARENA directive. The value can be an integer or a float (a dot . is the decimal separator). For example, SLAB_ALLOC_ARENA=0.5 or SLAB_ALLOC_ARENA=24.

    Detailed recommendations about allocating memory for Tarantool are described in these instructions.
- To apply changes, restart Tarantool:

    ```
    sudo systemctl restart wallarm-tarantool
    ```
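Optionally, you can confirm that the service came back up after the restart; this check is not part of the official steps:

```
# Verify that the wallarm-tarantool service is active
sudo systemctl status wallarm-tarantool
```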
Address of the separate postanalytics server¶
NGINX-Wallarm and postanalytics on the same server
If the NGINX-Wallarm and postanalytics modules are installed on the same server, then skip this step.
Add postanalytics server addresses to the file /etc/nginx/conf.d/wallarm.conf:
```
upstream wallarm_tarantool {
    server <ip1>:3313 max_fails=0 fail_timeout=0 max_conns=1;
    server <ip2>:3313 max_fails=0 fail_timeout=0 max_conns=1;
    keepalive 2;
}

# omitted

wallarm_tarantool_upstream wallarm_tarantool;
```
- The max_conns value must be specified for each of the upstream Tarantool servers to prevent the creation of excessive connections.
- The keepalive value must not be lower than the number of the Tarantool servers.
- The # wallarm_tarantool_upstream wallarm_tarantool; string is commented out by default; delete the # to uncomment it.
Other configurations¶
To update other NGINX and Wallarm node configurations, use the NGINX documentation and the list of available Wallarm node directives.
7. Restart NGINX¶
Running NGINX as a user without root permissions

If you are running NGINX as a user that does not have root permissions, add this user to the wallarm group using the following command:

```
usermod -aG wallarm <user_name>
```

where <user_name> is the name of the user without root permissions.
Restart NGINX:

```
sudo systemctl restart nginx
```

or, if your platform uses the service wrapper:

```
sudo service nginx restart
```
8. Test Wallarm node operation¶
- Send a request with a test Path Traversal attack to the protected resource address:

    ```
    curl http://localhost/etc/passwd
    ```

- Open Wallarm Console → Events section in the US Cloud or EU Cloud and make sure the attack is displayed in the list.
Settings customization¶
The dynamic Wallarm module with default settings is installed for NGINX stable. To customize Wallarm node settings, use the available directives.
Common customization options:
- Using the balancer of the proxy server behind the filtering node
- Limiting the single request processing time in the directive wallarm_process_time_limit
- Limiting the server reply waiting time in the NGINX directive proxy_read_timeout
- Limiting the maximum request size in the NGINX directive client_max_body_size
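For illustration, the limiting directives listed above could be combined in a server block as follows. All values here are assumptions chosen for the example, not recommendations, and the backend address is hypothetical:

```
server {
    listen 80;
    server_name localhost;

    wallarm_mode monitoring;

    # Limit the single request processing time in the Wallarm module (milliseconds)
    wallarm_process_time_limit 1000;

    # NGINX limits: upstream reply timeout and maximum request body size
    proxy_read_timeout 60s;
    client_max_body_size 10m;

    location / {
        # Hypothetical backend address
        proxy_pass http://127.0.0.1:8080;
    }
}
```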