Deploying with Docker

The filter node can be deployed as a Docker container. The container is an all-in-one image that contains all subsystems of the filter node.

The functionality of the filter node installed inside the Docker container is identical to that of the other deployment options.

To deploy the filter node as a Docker container, you must:

  1. Deploy the filter node.
  2. Connect the filter node to the Wallarm cloud.
  3. Configure NGINX-Wallarm.
  4. Configure log rotation.
  5. Configure monitoring.

1. Deploy the Filter Node

Run the command depending on the Cloud you are using.

EU Cloud:

# docker run -d -e DEPLOY_USER="deploy@example.com" -e DEPLOY_PASSWORD="very_secret" -e NGINX_BACKEND=example.com -e TARANTOOL_MEMORY_GB=memvalue -p 80:80 wallarm/node

US Cloud:

# docker run -d -e WALLARM_API_HOST=us1.api.wallarm.com -e DEPLOY_USER="deploy@example.com" -e DEPLOY_PASSWORD="very_secret" -e NGINX_BACKEND=example.com -e TARANTOOL_MEMORY_GB=memvalue -p 80:80 wallarm/node

where:

  • DEPLOY_USER, DEPLOY_PASSWORD — the credentials of the account used to access the Wallarm cloud.
  • NGINX_BACKEND — the address of the backend to which incoming requests are forwarded.
  • TARANTOOL_MEMORY_GB — the amount of memory allocated to Tarantool, in gigabytes (replace memvalue with the required number).
  • WALLARM_API_HOST — the address of the Wallarm API server (required for the US Cloud only).

After running the command, you will have:

  • The protected resource on port 80.
  • The filter node registered in the Wallarm cloud and displayed in the Wallarm interface.

You can also fine-tune the deployment by putting additional configuration files inside the container.
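
For example, assuming you have prepared your own virtual host configuration file, you can mount it into the directory used for virtual host settings (the local path and file name below are illustrative; see the Configuration Files subsection for the directory layout):

# docker run -d -e DEPLOY_USER="deploy@example.com" -e DEPLOY_PASSWORD="very_secret" -v /path/to/example.com.conf:/etc/nginx-wallarm/sites-enabled/example.com.conf -p 80:80 wallarm/node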

2. Connect the Filter Node to the Wallarm Cloud

The filter node interacts with the Wallarm cloud located on a remote server.

To connect the filter node to the Wallarm cloud, you have the following options:

  • Automatic registration.
  • Using credentials.
  • Using a prepared configuration file.

Automatic Registration

Pass the DEPLOY_USER and DEPLOY_PASSWORD environment variables with the credentials used to access https://my.wallarm.com.

As a result, the deployed container will automatically register in the Wallarm cloud on the first start.

If a container with the same name is already registered in the Wallarm cloud, the operation will fail with an error.

To avoid the error, use the DEPLOY_FORCE=true environment variable:

# docker run -d -e DEPLOY_USER="deploy@example.com" -e DEPLOY_PASSWORD="very_secret" -e DEPLOY_FORCE=true -e NGINX_BACKEND=[ IP address ] wallarm/node

Using Credentials

To access the Wallarm cloud, the filter node uses a UUID and a secret key, which can be passed in the NODE_UUID and NODE_SECRET environment variables.

Run the command depending on the Cloud you are using.

EU Cloud:

# docker run -d -e "NODE_UUID=00000000-0000-0000-0000-000000000000" -e NODE_SECRET="0000000000000000000000000000000000000000000000000000000000000000" -e NGINX_BACKEND=93.184.216.34 wallarm/node

US Cloud:

# docker run -d -e WALLARM_API_HOST=us1.api.wallarm.com -e "NODE_UUID=00000000-0000-0000-0000-000000000000" -e NODE_SECRET="0000000000000000000000000000000000000000000000000000000000000000" -e NGINX_BACKEND=93.184.216.34 wallarm/node

Using a Prepared Configuration File

If you have a prepared node.yaml configuration file, mount it into the container via an external volume:

# docker run -d -v /path/to/node.yaml:/etc/wallarm/node.yaml -e NGINX_BACKEND=93.184.216.34 wallarm/node

3. Configure NGINX-Wallarm

The filter node configuration is done via the NGINX configuration file.

Using the container lets you follow a simplified configuration process based on environment variables. The simplified process is enabled by passing the NGINX_BACKEND environment variable.

Simplified Process

  • NGINX_BACKEND — the backend address to which all incoming requests must be forwarded. If the address does not have the http:// or https:// prefix, http:// is used by default. See details in proxy_pass.

    Do not use the NGINX_BACKEND variable if you do not need the simplified configuration process or if you use your own configuration files.

    Note that without the NGINX_BACKEND variable, Wallarm will not start automatically. To start Wallarm, set the wallarm_mode directive (for example, wallarm_mode monitoring). See details in the wallarm_mode directive description in Wallarm configuration options.

  • WALLARM_MODE — the NGINX-Wallarm module operation mode. See details in the wallarm_mode directive description in Wallarm configuration options.
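
For example, a deployment that uses the simplified process with an explicitly set operation mode might look as follows (the credentials, backend address, and mode value are illustrative):

# docker run -d -e DEPLOY_USER="deploy@example.com" -e DEPLOY_PASSWORD="very_secret" -e NGINX_BACKEND=example.com -e WALLARM_MODE=monitoring -p 80:80 wallarm/node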

Configuration Files

The directories used by NGINX:

  • /etc/nginx-wallarm/conf.d — common settings.
  • /etc/nginx-wallarm/sites-enabled — virtual host settings.
  • /var/www/html — static files.
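
A minimal virtual host file placed in /etc/nginx-wallarm/sites-enabled could look like the sketch below; the server name, backend address, and operation mode are illustrative, and the full set of directives is described in Wallarm configuration options:

server {
    listen 80;
    server_name example.com;

    # operation mode of the NGINX-Wallarm module
    wallarm_mode monitoring;

    location / {
        # forward requests to the protected backend
        proxy_pass http://93.184.216.34;
        proxy_set_header Host $host;
    }
}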

4. Configure Log Rotation

Logging is enabled by default.

The log directories are:

  • /var/log/nginx-wallarm/ — NGINX logs.
  • /var/log/wallarm/ — Wallarm logs.

By default, the logs rotate once every 24 hours.

Changing the rotation parameters through environment variables is not possible. To set up log rotation, change the configuration files in /etc/logrotate.d/.
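
For example, a logrotate configuration placed in /etc/logrotate.d/ that rotates the NGINX logs daily and keeps seven compressed archives could look like the sketch below (the retention values are illustrative, not the container defaults):

/var/log/nginx-wallarm/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}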

5. Configure Monitoring

Nagios-compatible scripts for monitoring the filter node are provided inside the container. See details in Monitor the filter node.

Examples of running the scripts:

# docker exec -it wallarm-node /usr/lib/nagios-plugins/check_wallarm_tarantool_timeframe -w 1800 -c 900
# docker exec -it wallarm-node /usr/lib/nagios-plugins/check_wallarm_export_delay -w 120 -c 300

The Installation Is Complete

Check that the filter node runs and filters the traffic. See Check the filter node operation.

Additional Settings

The filter node may require some additional configuration after installation.

The sections below list a few of the typical setups that you can apply if needed.

To get more information about other available settings, proceed to the “Configuration” section of the Administrator’s Guide.

Configuring the Display of the Client's Real IP

If the filter node is deployed behind a proxy server or load balancer without any additional configuration, the request source address may not be equal to the actual IP address of the client. Instead, it may be equal to one of the IP addresses of the proxy server or the load balancer.

In this case, if you want the filter node to receive the client's IP address as a request source address, you need to perform additional configuration of the proxy server or the load balancer.
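
For example, if the proxy server or load balancer passes the original client address in the X-Forwarded-For header, one common approach is to configure the standard NGINX real_ip module on the filter node to trust that header (the trusted network below is illustrative):

# trust X-Forwarded-For only from the proxy/load balancer network
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;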

Configuring Extended Logging

You can configure NGINX to log the filter node variables, which makes filter node diagnostics much faster with the help of the NGINX log file.
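
For example, additional variables can be written to the NGINX access log via the standard log_format and access_log directives; the variable set below is an illustrative sketch rather than a Wallarm-specific format:

log_format extended '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent $request_time '
                    '$upstream_addr $upstream_status $upstream_response_time';

access_log /var/log/nginx-wallarm/access.log extended;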

Adding Wallarm Scanner Addresses to the Whitelist

The Wallarm scanner checks the resources of your company for vulnerabilities. Scanning is conducted from a dedicated list of IP addresses that depends on the type of Wallarm Cloud you are using.

If you are using the Wallarm scanner, you need to configure the whitelists on your network perimeter security software (such as firewalls, intrusion detection systems, etc.) to contain the Wallarm scanner IP addresses.

Limiting the Single Request Processing Time

Use the wallarm_process_time_limit Wallarm directive to specify the limit of the duration for processing a single request by the filter node.

If processing the request consumes more time than specified in the directive, then the information on the error is entered into the log file and the request is marked as an overlimit_res attack.
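
A sketch of setting the directive in the NGINX configuration; the value is illustrative, so check the directive description for the unit and the default:

wallarm_process_time_limit 2000;    # illustrative value; see the directive description for the unit and default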

Limiting the Server Reply Waiting Time

Use the proxy_read_timeout NGINX directive to specify the timeout for reading the proxy server reply.

If the server sends nothing during this time, the connection is closed.
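
For example (the timeout value below is illustrative; 60 seconds is the NGINX default):

location / {
    proxy_pass http://example.com;
    proxy_read_timeout 60s;    # close the connection if the backend sends nothing for 60 seconds
}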

Limiting the Maximum Request Size

Use the client_max_body_size NGINX directive to specify the limit for the maximum size of the body of the client's request.

If this limit is exceeded, NGINX replies to the client with the 413 (Payload Too Large) code, also known as the Request Entity Too Large message.
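
For example (the limit below is illustrative):

client_max_body_size 10m;    # request bodies larger than 10 MB are rejected with a 413 response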
