Installing with Docker (Using the NGINX-Based Docker Image)¶
The filter node can be deployed as a Docker container. The Docker image is an all-in-one image that contains all subsystems of the filter node.
The functionality of the filter node installed inside the Docker container is completely identical to the functionality of the other deployment options.
To deploy the filter node as a Docker container, you must:

1. Deploy the filter node.
2. Connect the filter node to the Wallarm cloud.
1. Deploy the Filter Node¶
Run one of the following `docker run` commands depending on the cloud in use:

**EU cloud:**

```
docker run -d -e DEPLOY_USER="firstname.lastname@example.org" -e DEPLOY_PASSWORD="very_secret" -e NGINX_BACKEND=example.com -e TARANTOOL_MEMORY_GB=memvalue -p 80:80 wallarm/node:2.14
```

**US cloud:**

```
docker run -d -e WALLARM_API_HOST=us1.api.wallarm.com -e DEPLOY_USER="email@example.com" -e DEPLOY_PASSWORD="very_secret" -e NGINX_BACKEND=example.com -e TARANTOOL_MEMORY_GB=memvalue -p 80:80 wallarm/node:2.14
```

`example.com` — the protected resource.

`memvalue` — the amount of memory allocated to Tarantool, in gigabytes.
After running the command, you will have:

- The protected resource available on port 80.

You can also fine-tune the deployment by putting additional configuration files inside the container.
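For example, a custom virtual host configuration can be mounted into the container with a Docker volume. This is a sketch: the local file `custom.conf` and its contents are assumptions, while `/etc/nginx-wallarm/sites-enabled` is the virtual host directory used by NGINX in this image:

```
docker run -d -e DEPLOY_USER="firstname.lastname@example.org" -e DEPLOY_PASSWORD="very_secret" \
  -v /path/to/custom.conf:/etc/nginx-wallarm/sites-enabled/custom.conf \
  -p 80:80 wallarm/node:2.14
```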
2. Connect the Filter Node to the Wallarm Cloud¶
The filter node interacts with the Wallarm cloud located on a remote server.
To connect the filter node to the Wallarm cloud, you have the following options:

- Automatic registration on the first start.
- Using prepared credentials.
- Using a prepared configuration file.
The filter node will try to automatically register itself in the Wallarm cloud on the first start. If a filter node with the same name as the node's container identifier is already registered in the cloud, then the registration process will fail. To avoid this, pass the `DEPLOY_FORCE=true` environment variable to the container.

```
docker run -d -e DEPLOY_USER="firstname.lastname@example.org" -e DEPLOY_PASSWORD="very_secret" -e NGINX_BACKEND="IP address or FQDN" wallarm/node:2.14
```
If the registration process finishes successfully, then the container's `/etc/wallarm` directory will be populated with the license file (`license.key`), a file with the credentials for the filter node to access the cloud (`node.yaml`), and other files required for proper node operation.

On the next start of the same filter node, registration will not be required. The filter node communicates with the cloud using the following artifacts acquired during the automatic registration:

- The `NODE_UUID` and `NODE_SECRET` values (they are placed in the `node.yaml` file).
- The Wallarm license key (it is placed in the `license.key` file).

To connect the already registered filter node to the cloud, pass the `NODE_UUID` and `NODE_SECRET` values to its container via the environment variables and the `license.key` file via a Docker volume.
Use of Prepared Credentials¶
Pass the `NODE_UUID` and `NODE_SECRET` values to the filter node's container via the corresponding `NODE_UUID` and `NODE_SECRET` environment variables, and the `license.key` file via a Docker volume.

Run one of the following `docker run` commands depending on the cloud in use:

**EU cloud:**

```
docker run -d -e "NODE_UUID=00000000-0000-0000-0000-000000000000" -e NODE_SECRET="0000000000000000000000000000000000000000000000000000000000000000" -v /path/to/license.key:/etc/wallarm/license.key -e NGINX_BACKEND=192.168.xxx.1 wallarm/node:2.14
```

**US cloud:**

```
docker run -d -e WALLARM_API_HOST=us1.api.wallarm.com -e "NODE_UUID=00000000-0000-0000-0000-000000000000" -e NODE_SECRET="0000000000000000000000000000000000000000000000000000000000000000" -v /path/to/license.key:/etc/wallarm/license.key -e NGINX_BACKEND=192.168.xxx.1 wallarm/node:2.14
```
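For repeatable deployments, the same options can be expressed as a Docker Compose service. This is a sketch, not part of the official instructions; the file path and the backend address are placeholders:

```
version: "3"
services:
  wallarm-node:
    image: wallarm/node:2.14
    environment:
      NODE_UUID: "00000000-0000-0000-0000-000000000000"
      NODE_SECRET: "0000000000000000000000000000000000000000000000000000000000000000"
      NGINX_BACKEND: "192.168.0.1"
    volumes:
      - /path/to/license.key:/etc/wallarm/license.key
    ports:
      - "80:80"
```

For the US cloud, also add `WALLARM_API_HOST: us1.api.wallarm.com` to the `environment` section.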
Use of a Prepared Configuration File Containing Credentials¶
Pass the following files to the filter node's container via Docker volumes:

- the `node.yaml` file, containing the credentials for the filter node to access the Wallarm cloud, and
- the `license.key` file, containing the Wallarm license key.

```
docker run -d -v /path/to/node.yaml:/etc/wallarm/node.yaml -v /path/to/license.key:/etc/wallarm/license.key -e NGINX_BACKEND=192.168.xxx.1 wallarm/node:2.14
```
3. Configure NGINX-Wallarm¶
The filter node is configured via the NGINX configuration file. Using the container lets you go through a simplified configuration process based on environment variables. The simplified process is enabled by the `NGINX_BACKEND` environment variable.

`NGINX_BACKEND` — the backend address to which all incoming requests must be transferred. If the address does not have the `https://` prefix, then `http://` is used by default. See details in the `proxy_pass` directive description.
Do not use the `NGINX_BACKEND` variable if you do not need the simplified configuration process or if you use your own configuration files. Note that without the `NGINX_BACKEND` variable, Wallarm will not start automatically. To start Wallarm, configure `wallarm_mode monitoring`. See details in the `wallarm_mode` directive description.

`WALLARM_MODE` — the NGINX-Wallarm module operation mode. See details in the `wallarm_mode` directive description.
The directories used by NGINX:

- `/etc/nginx-wallarm/conf.d` — common settings.
- `/etc/nginx-wallarm/sites-enabled` — virtual host settings.
- `/var/www/html` — static files.
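If you use your own configuration files instead of the `NGINX_BACKEND` variable, a minimal virtual host placed in `/etc/nginx-wallarm/sites-enabled` might look like the following sketch (the server name and backend address are placeholders):

```
server {
    listen 80;
    server_name example.com;

    # Enable Wallarm request analysis in monitoring mode
    wallarm_mode monitoring;

    location / {
        # Forward incoming requests to the protected backend
        proxy_pass http://192.168.0.1;
    }
}
```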
4. Configure Logging¶
Logging is enabled by default. The log directories are:

- `/var/log/nginx-wallarm/` — NGINX logs.
- `/var/log/wallarm/` — Wallarm logs.
Configure Extended Logging¶
Configure logging of the filter node variables using NGINX. This allows you to perform quick filter node diagnostics with the help of the NGINX log file.
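As an illustration, the sketch below defines an extended NGINX `log_format` using only standard NGINX variables (`$request_time`, `$upstream_response_time`); the Wallarm-specific variables available for logging depend on the module version and are not listed here:

```
log_format diag_ext '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx-wallarm/access.log diag_ext;
```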
Configure Log Rotation¶
By default, the logs are rotated once every 24 hours. Changing the rotation parameters through environment variables is not possible. To set up log rotation, change the corresponding logrotate configuration files inside the container.
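As a sketch, a typical logrotate configuration for the NGINX log directory could look like this (the file path `/etc/logrotate.d/nginx-wallarm` inside the container is an assumption):

```
/var/log/nginx-wallarm/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```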
5. Configure Monitoring¶
To monitor the filter node, there are Nagios-compatible scripts inside the container. See details in Monitor the filter node.
Example of running the scripts:

```
docker exec -it wallarm-node /usr/lib/nagios-plugins/check_wallarm_tarantool_timeframe -w 1800 -c 900
docker exec -it wallarm-node /usr/lib/nagios-plugins/check_wallarm_export_delay -w 120 -c 300
```
The Installation Is Complete¶
Check that the filter node runs and filters the traffic. See Check the filter node operation.
A freshly installed filter node operates in blocking mode (see the `wallarm_mode` directive description) by default.
The filter node may require some additional configuration after installation.
The document below lists a few of the typical setups that you can apply if needed.
To get more information about other available settings, proceed to the “Configuration” section of the Administrator’s Guide.
Configuring the Display of the Client's Real IP¶
If the filter node is deployed behind a proxy server or load balancer without any additional configuration, the request source address may not be equal to the actual IP address of the client. Instead, it may be equal to one of the IP addresses of the proxy server or the load balancer.
In this case, if you want the filter node to receive the client's IP address as the request source address, you must additionally configure the proxy server or the load balancer.
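If the proxy server or load balancer passes the original client address in the `X-Forwarded-For` header, NGINX on the filter node can be told to trust that header using the standard `ngx_http_realip_module` directives. A sketch, assuming the proxy's address is `10.0.0.1` (a placeholder):

```
# Trust X-Forwarded-For only when the request comes from the known proxy
set_real_ip_from 10.0.0.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```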
Adding Wallarm Scanner Addresses to the Whitelist¶
The Wallarm scanner checks the resources of your company for vulnerabilities. Scanning is conducted using IP addresses from one of the following lists (depending on the type of Wallarm Cloud you are using):
If you are using the Wallarm scanner, you need to configure the whitelists on your network scope security software (such as firewalls, intrusion detection systems, etc.) to contain Wallarm scanner IP addresses.
For example, a Wallarm filter node with default settings operates in blocking mode, which renders the Wallarm scanner unable to scan the resources behind the filter node. To make the scanner operational again, whitelist the scanner's IP addresses on this filter node.
Limiting the Single Request Processing Time¶
Use the `wallarm_process_time_limit` Wallarm directive to specify the limit of the duration for processing a single request by the filter node.

If processing the request consumes more time than specified in the directive, then the information on the error is entered into the log file and the request is marked as an `overlimit_res` attack.
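A sketch of the directive in the NGINX configuration; the value is illustrative, and it is an assumption here that the limit is specified in milliseconds:

```
# Mark the request if processing takes longer than 2000 ms (illustrative value)
wallarm_process_time_limit 2000;
```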
Limiting the Server Reply Waiting Time¶
Use the `proxy_read_timeout` NGINX directive to specify the timeout for reading the proxy server reply.
If the server sends nothing during this time, the connection is closed.
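A sketch of the directive inside a proxying location block (the backend address and timeout value are placeholders):

```
location / {
    proxy_pass http://192.168.0.1;
    # Close the connection if the backend sends nothing for 60 seconds
    proxy_read_timeout 60s;
}
```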
Limiting the Maximum Request Size¶
Use the `client_max_body_size` NGINX directive to specify the limit for the maximum size of the body of the client's request.

If this limit is exceeded, NGINX replies to the client with the `413` (`Payload Too Large`) code, also known as the `Request Entity Too Large` message.
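A sketch with an illustrative limit:

```
# Reject request bodies larger than 10 MB with HTTP 413
client_max_body_size 10m;
```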
Blocking Requests by IP Address¶
The IP blocking functionality provides the following additional features:
If the WAF detects at least three different attack vectors from an IP address, the address is automatically added to the blacklist and blocked for 1 hour. If similar behavior from the same IP address is detected again, the IP is blocked for 2 hours, and so on.
To enable IP blocking functionality, please select the configuration method at the Methods of Blocking by IP Address page and follow the appropriate instructions.