Running Docker NGINX‑based image

The Wallarm NGINX-based filtering node can be deployed as a Docker container. The Docker image is all-in-one ("fat"): it contains all subsystems of the filtering node.

The functionality of the filtering node installed inside the Docker container is identical to that of the other deployment options.

If you deploy several Wallarm nodes

All Wallarm nodes deployed to your environment must be of the same version. The postanalytics modules installed on separate servers must also be of the same version.

Before installing an additional node, ensure its version matches the version of the already deployed modules. If the deployed module version is deprecated or soon to be deprecated (4.0 or lower), upgrade all modules to the latest version.

To check the installed version, run the following command in the container:

apt list wallarm-node
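
If the node is already running in Docker, the same check can be performed from the host. A sketch, assuming a single running container started from the wallarm/node image (the image tag below is an example, not a requirement):

```shell
# Example image tag; use the tag you actually deployed.
IMAGE='wallarm/node:3.6.2-1'

# Find the running container started from that image and query the
# installed wallarm-node package version inside it.
CID=$(docker ps -q --filter "ancestor=$IMAGE")
docker exec "$CID" apt list wallarm-node
```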

Requirements

  • Access to the account with the Deploy or Administrator role and two‑factor authentication disabled in Wallarm Console in the US Cloud or EU Cloud

  • Access to https://us1.api.wallarm.com:444 if working with the US Wallarm Cloud or to https://api.wallarm.com:444 if working with the EU Wallarm Cloud. Please ensure the access is not blocked by a firewall

Options for running the container

The filtering node configuration parameters should be passed to the deployed Docker container in one of the following ways:

  • In the environment variables. This option allows configuring only basic filtering node parameters; most directives cannot be set through environment variables.

  • In the mounted configuration file. This option allows full filtering node configuration via any directives. With this configuration method, environment variables with the filtering node and Wallarm Cloud connection settings are also passed to the container.

Run the container passing the environment variables

You can pass the following basic filtering node settings to the container via the option -e:

  • DEPLOY_USER (required) — email of the Deploy or Administrator user account in Wallarm Console.

  • DEPLOY_PASSWORD (required) — password of the Deploy or Administrator user account in Wallarm Console.

  • NGINX_BACKEND (required) — domain or IP address of the resource to protect with the Wallarm solution.

  • WALLARM_API_HOST (optional) — Wallarm API server: us1.api.wallarm.com for the US Cloud, api.wallarm.com for the EU Cloud. By default: api.wallarm.com.

  • WALLARM_MODE (optional) — node mode:
      • block to block malicious requests
      • safe_blocking to block only those malicious requests originating from graylisted IP addresses
      • monitoring to analyze but not block requests
      • off to disable traffic analysis and processing
    By default: monitoring. Detailed description of filtration modes →

  • WALLARM_APPLICATION (optional) — unique identifier of the protected application to be used in the Wallarm Cloud. The value can be any positive integer except 0. Default value (if the variable is not passed to the container) is -1, which indicates the default application displayed in Wallarm Console → Settings → Application. More details on setting up applications →. Note: the variable is supported only starting with the Docker image of version 3.4.1-1.

  • TARANTOOL_MEMORY_GB (optional) — amount of memory allocated to Tarantool, in gigabytes. The value can be an integer or a float (a dot . is the decimal separator). By default: 0.2.

  • DEPLOY_FORCE (optional) — replaces an existing Wallarm node with a new one if the existing node name matches the identifier of the container you are running: true to replace the filtering node, false to disable the replacement. Default value (if the variable is not passed to the container) is false. The Wallarm node name always matches the identifier of the container you are running, so replacement is helpful if the Docker container identifiers in your environment are static and you are trying to run another container with the filtering node (for example, a container with a new version of the image); if the value is false in this case, the filtering node creation will fail.

  • NGINX_PORT (optional) — the port NGINX uses inside the Docker container. This helps avoid port collisions when using this container as a sidecar within a Kubernetes pod. Default value (if the variable is not passed to the container) is 80. Syntax: NGINX_PORT='443'.

To run the image, use one of the following commands (the first connects to the EU Cloud, which is the default API host; the second to the US Cloud):

docker run -d -e DEPLOY_USER='deploy@example.com' -e DEPLOY_PASSWORD='very_secret' \
    -e NGINX_BACKEND='example.com' -p 80:80 wallarm/node:3.6.2-1

docker run -d -e DEPLOY_USER='deploy@example.com' -e DEPLOY_PASSWORD='very_secret' \
    -e NGINX_BACKEND='example.com' -e WALLARM_API_HOST='us1.api.wallarm.com' \
    -p 80:80 wallarm/node:3.6.2-1

The command does the following:

  • Automatically creates a new filtering node in the Wallarm Cloud. The created node is displayed in Wallarm Console → Nodes.

  • Creates the file default with a minimal NGINX configuration and the filtering node settings in the /etc/nginx/sites-enabled container directory.

  • Creates files with filtering node credentials to access the Wallarm Cloud in the /etc/wallarm container directory:

    • node.yaml with filtering node UUID and secret key
    • license.key with Wallarm license key
  • Protects the resource http://NGINX_BACKEND:80.
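
To confirm that the credential files were created, you can list them from the host; a sketch with a placeholder container ID:

```shell
# Placeholder container ID; get the real one from `docker ps`.
CID='<WALLARM_NODE_CONTAINER_ID>'

# The credentials created at startup live in /etc/wallarm; the listing
# should include node.yaml and license.key (per the list above).
docker exec "$CID" ls /etc/wallarm
```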

Run the container mounting the configuration file

You can mount the prepared configuration file to the Docker container via the -v option. The file must contain the following settings:

An example of the mounted file with minimal settings:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    #listen 443 ssl;

    server_name localhost;

    #ssl_certificate cert.pem;
    #ssl_certificate_key cert.key;

    root /usr/share/nginx/html;

    index index.html index.htm;

    wallarm_mode monitoring;
    # wallarm_application 1;

    location / {
            proxy_pass http://example.com;
            include proxy_params;
    }
}
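
The commented-out lines in the example above hint at a TLS setup. A minimal HTTPS variant of the same file is sketched below; the certificate paths /etc/nginx/cert.pem and /etc/nginx/cert.key are assumptions and should point at your own certificate and key:

```nginx
server {
    listen 443 ssl;

    server_name localhost;

    # Assumed certificate locations; replace with your own files.
    ssl_certificate /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/cert.key;

    wallarm_mode monitoring;

    location / {
            proxy_pass http://example.com;
            include proxy_params;
    }
}
```

With this variant, publish port 443 instead of 80 (for example, -p 443:443) and mount the certificate files into the container as well.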

To run the image:

  1. Pass required environment variables to the container via the -e option:

    • DEPLOY_USER (required) — email of the Deploy or Administrator user account in Wallarm Console.
    • DEPLOY_PASSWORD (required) — password of the Deploy or Administrator user account in Wallarm Console.
    • WALLARM_API_HOST (optional) — Wallarm API server: us1.api.wallarm.com for the US Cloud, api.wallarm.com for the EU Cloud. By default: api.wallarm.com.
    • DEPLOY_FORCE (optional) — replaces an existing Wallarm node with a new one if the existing node name matches the identifier of the container you are running: true to replace the filtering node, false to disable the replacement. Default value (if the variable is not passed to the container) is false. The Wallarm node name always matches the identifier of the container you are running, so replacement is helpful if the Docker container identifiers in your environment are static and you are trying to run another container with the filtering node (for example, a container with a new version of the image); if the value is false in this case, the filtering node creation will fail.
  2. Mount the configuration file default to /etc/nginx/sites-enabled/default in the container via the -v option.

    docker run -d -e DEPLOY_USER='deploy@example.com' -e DEPLOY_PASSWORD='very_secret' \
        -v /configs/default:/etc/nginx/sites-enabled/default -p 80:80 wallarm/node:3.6.2-1

    docker run -d -e DEPLOY_USER='deploy@example.com' -e DEPLOY_PASSWORD='very_secret' \
        -e WALLARM_API_HOST='us1.api.wallarm.com' \
        -v /configs/default:/etc/nginx/sites-enabled/default -p 80:80 wallarm/node:3.6.2-1

The command does the following:

  • Automatically creates a new filtering node in the Wallarm Cloud. The created node is displayed in Wallarm Console → Nodes.

  • Mounts the file default into the /etc/nginx/sites-enabled container directory.

  • Creates files with filtering node credentials to access Wallarm Cloud in the /etc/wallarm container directory:

    • node.yaml with filtering node UUID and secret key
    • license.key with Wallarm license key
  • Protects the resource http://example.com.

Mounting other configuration files

The container directories used by NGINX:

  • /etc/nginx/conf.d — common settings
  • /etc/nginx/sites-enabled — virtual host settings
  • /var/www/html — static files

If required, you can mount any files to the listed container directories. The filtering node directives should be described in the /etc/nginx/sites-enabled/default file.
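
For example, common settings and virtual host files can be mounted together. A sketch, where the local paths /configs/conf.d and /configs/sites-enabled are assumptions standing in for your own directories:

```shell
# Example local directories holding your NGINX settings.
CONF_DIR='/configs/conf.d'
SITES_DIR='/configs/sites-enabled'

# Mount both directories into the locations NGINX reads inside the container.
docker run -d \
  -e DEPLOY_USER='deploy@example.com' \
  -e DEPLOY_PASSWORD='very_secret' \
  -v "$CONF_DIR":/etc/nginx/conf.d \
  -v "$SITES_DIR":/etc/nginx/sites-enabled \
  -p 80:80 wallarm/node:3.6.2-1
```

Note that the filtering node directives should still be described in the default file inside the mounted sites-enabled directory.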

Logging configuration

The logging is enabled by default. The log directories are:

  • /var/log/nginx — NGINX logs

  • /var/log/wallarm — Wallarm node logs

To configure extended logging of the filtering node variables, please use these instructions.

By default, the logs rotate once every 24 hours. To set up the log rotation, change the configuration files in /etc/logrotate.d/. Changing the rotation parameters through environment variables is not possible.
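
As a sketch of such file-based tuning, a logrotate policy for the NGINX logs could look like the fragment below (the daily schedule and the count of 7 kept rotations are assumptions, not defaults stated by this document); it would be placed in a file under /etc/logrotate.d/:

```
/var/log/nginx/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
}
```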

Monitoring configuration

To monitor the filtering node, there are Nagios‑compatible scripts inside the container. See details in Monitoring the filtering node.

Example of running the scripts:

docker exec -it <WALLARM_NODE_CONTAINER_ID> /usr/lib/nagios/plugins/check_wallarm_tarantool_timeframe -w 1800 -c 900
docker exec -it <WALLARM_NODE_CONTAINER_ID> /usr/lib/nagios/plugins/check_wallarm_export_delay -w 120 -c 300
  • <WALLARM_NODE_CONTAINER_ID> is the ID of the running Wallarm Docker container. To get the ID, run docker ps and copy the proper ID.

Testing Wallarm node operation

  1. Send a request with test SQLi and XSS attacks to the protected resource address:

    curl http://localhost/?id='or+1=1--a-<script>prompt(1)</script>'
    
  2. Open the Wallarm Console → Attacks section in the US Cloud or EU Cloud and ensure attacks are displayed in the list.

Configuring the use cases

The configuration file mounted to the Docker container describes the filtering node configuration via the available directives. Below are some commonly used filtering node configuration options: