
Deploying Wallarm OOB from the Docker Image

This article provides instructions for deploying Wallarm OOB using the NGINX-based Docker image. The solution described here is designed to analyze traffic mirrored by a web or proxy server.

Use cases

Among all supported Wallarm deployment options, the NGINX-based Docker image is recommended in the following use cases:

  • If your organization utilizes Docker-based infrastructure, the Wallarm Docker image is the ideal choice. It integrates effortlessly into your existing setup, whether you are running a microservice architecture on AWS ECS, Alibaba ECS, or similar services. This solution also suits teams running virtual machines that want more streamlined management through Docker containers.

  • If you require fine-grained control over each container, the Docker image excels. It affords a greater level of resource isolation than typically possible with traditional VM-based deployments.

For more information on running Wallarm's NGINX-based Docker image on popular public cloud container orchestration services, refer to our guides: AWS ECS, GCP GCE, Azure Container Instances, Alibaba ECS.


Requirements

  • Docker installed on your host system

  • Access to the Docker Hub registry to download the Wallarm node Docker image. Please ensure the access is not blocked by a firewall

  • Access to an account with the Administrator role in Wallarm Console in the US Cloud or EU Cloud

  • Access to the Wallarm API server of your Cloud (the US Cloud or the EU Cloud endpoint, depending on which Cloud you work with). Please ensure the access is not blocked by a firewall

  • Access to the specified IP addresses on Google Cloud Storage. This access is crucial for downloading updates to attack detection rules and for retrieving the exact IPs of the countries, regions, or data centers you have added to your allowlist, denylist, or graylist

1. Configure traffic mirroring

Configure your environment to mirror incoming traffic to an instance with the Wallarm node you are deploying. For configuration details, we recommend referring to the documentation on the solution you are going to use to produce the traffic mirror (web server, proxy server, etc.).

The linked documentation includes example configurations for NGINX, Traefik, and Envoy.
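As an illustration, NGINX itself can mirror traffic with its mirror directive. Below is a minimal sketch, assuming the Wallarm node is reachable at a hypothetical address wallarm-node.example.local and the upstream is named backend; the X-SERVER-* headers match the wallarm_force directives used in the node configuration later in this guide:

```nginx
location / {
    # Duplicate each incoming request to the internal /mirror location
    mirror /mirror;
    mirror_request_body on;
    proxy_pass http://backend;
}

location = /mirror {
    internal;
    # Forward the copy to the Wallarm node (hypothetical address)
    proxy_pass http://wallarm-node.example.local$request_uri;
    # Pass the original connection details that the node config reads
    # via $http_x_server_addr / $http_x_server_port
    proxy_set_header X-SERVER-ADDR $server_addr;
    proxy_set_header X-SERVER-PORT $server_port;
    proxy_set_header X-Forwarded-For $realip_remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass_request_body on;
}
```

Mirrored subrequests do not affect the response to the client: NGINX discards the mirror's response, so analysis adds no user-visible latency on the main request path.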

2. Prepare a configuration file for mirrored traffic analysis and more

To enable Wallarm nodes to analyze mirrored traffic, you need to configure additional settings in a separate file and mount it to the Docker container. The default configuration file that needs modification is located at /etc/nginx/sites-enabled/default within the Docker image.

In this file, you need to specify the Wallarm node configuration to process mirrored traffic and any other required settings. Follow these instructions to do so:

  1. Create the local NGINX configuration file named default with the following contents:

    server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;
            #listen 443 ssl;
            server_name localhost;
            #ssl_certificate cert.pem;
            #ssl_certificate_key cert.key;
            root /usr/share/nginx/html;
            index index.html index.htm;
            wallarm_force server_addr $http_x_server_addr;
            wallarm_force server_port $http_x_server_port;
            # Change <MIRROR_SERVER_IP> to the address of the mirroring server
            set_real_ip_from  <MIRROR_SERVER_IP>;
            real_ip_header    X-Forwarded-For;
            real_ip_recursive on;
            wallarm_force response_status 0;
            wallarm_force response_time 0;
            wallarm_force response_size 0;
            wallarm_mode monitoring;
            location ~ ^/wallarm-apifw(.*)$ {
                    wallarm_mode off;
                    error_page 404 431         = @wallarm-apifw-fallback;
                    error_page 500 502 503 504 = @wallarm-apifw-fallback;
                    deny all;
            }
            location @wallarm-apifw-fallback {
                    wallarm_mode off;
                    return 500 "API FW fallback";
            }
            location / {
                    include proxy_params;
            }
    }
    • The set_real_ip_from and real_ip_header directives are required for Wallarm Console to display the real IP addresses of attackers.
    • The wallarm_force response_* directives disable analysis of all requests except the copies received from the mirrored traffic.
    • The wallarm_mode directive sets the traffic analysis mode. Since malicious requests in mirrored traffic cannot be blocked, the only mode Wallarm accepts is monitoring. In-line deployment also offers the safe blocking and blocking modes, but even if you set wallarm_mode to a value other than monitoring, the node continues to monitor traffic and only records malicious requests (unless the mode is set to off).
  2. Specify any other required Wallarm directives. You can refer to the Wallarm configuration parameters documentation and the configuration use cases to understand what settings would be useful for you.

  3. If needed, modify other NGINX settings to customize its behavior. Consult the NGINX documentation for assistance.
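As an example of step 2, the server block might additionally tune attack detection. The directives below exist in Wallarm's configuration reference, but the values here are purely illustrative; verify each against the documentation before use:

```nginx
# Illustrative values only; check the Wallarm directive reference
wallarm_enable_libdetection on;   # extra SQLi validation via libdetection
wallarm_parser_disable xml;       # skip the XML parser if the API never uses XML
```

Such directives go alongside the wallarm_mode directive inside the server block of the mounted default file.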

You can also mount other files to the following container directories if necessary:

  • /etc/nginx/conf.d — common settings

  • /etc/nginx/sites-enabled — virtual host settings

  • /opt/wallarm/usr/share/nginx/html — static files

3. Get a token to connect the node to the Cloud

Get a Wallarm token of the appropriate type:

API token:

  1. Open Wallarm Console → Settings → API tokens in the US Cloud or EU Cloud.
  2. Find or create an API token with the Deploy source role.
  3. Copy this token.

Node token:

  1. Open Wallarm Console → Nodes in the US Cloud or EU Cloud.
  2. Do one of the following:
    • Create a node of the Wallarm node type and copy the generated token.
    • Use an existing node group: copy the token using the node's menu → Copy token.

4. Run the Docker container with the node

Run the Docker container with the node, mounting the configuration file you have just created:

For the US Cloud:

docker run -d -e WALLARM_API_TOKEN='XXXXXXX' -e WALLARM_LABELS='group=<GROUP>' -e WALLARM_API_HOST='' -v /configs/default:/etc/nginx/sites-enabled/default -p 80:80 wallarm/node:4.10.6-1

For the EU Cloud:

docker run -d -e WALLARM_API_TOKEN='XXXXXXX' -e WALLARM_LABELS='group=<GROUP>' -v /configs/default:/etc/nginx/sites-enabled/default -p 80:80 wallarm/node:4.10.6-1
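If you deploy several node instances, the run command above can be parameterized in a small wrapper script. This is a sketch; the token and group values are placeholders to be replaced with your own:

```shell
#!/bin/sh
# Placeholders: substitute your real token and group name
WALLARM_API_TOKEN='XXXXXXX'
GROUP='my-node-group'
NODE_IMAGE='wallarm/node:4.10.6-1'

# Assemble the run command from the variables above
CMD="docker run -d \
 -e WALLARM_API_TOKEN='${WALLARM_API_TOKEN}' \
 -e WALLARM_LABELS='group=${GROUP}' \
 -v /configs/default:/etc/nginx/sites-enabled/default \
 -p 80:80 ${NODE_IMAGE}"

# Print the command instead of executing it, so it can be reviewed first
echo "$CMD"
```

To actually start the container, replace the final echo with eval "$CMD" (or run the printed command manually).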

The following environment variables should be passed to the container:

  • WALLARM_API_TOKEN — the Wallarm node or API token. Required.

  • WALLARM_API_HOST — the Wallarm API server; set it when working with the US Cloud. If not set, the EU Cloud API server is used by default. Optional.

  • WALLARM_LABELS — available starting from node 4.6. Works only if WALLARM_API_TOKEN is set to an API token with the Deploy role. Sets the group label for node instance grouping; for example, WALLARM_LABELS='group=<GROUP>' places the node instance into the <GROUP> instance group (existing, or created if it does not exist). Required for API tokens.

5. Testing Wallarm node operation

  1. Send a request with a test Path Traversal attack to the protected resource address:

    curl http://localhost/etc/passwd
  2. Open Wallarm Console → Attacks section in the US Cloud or EU Cloud and make sure the attack is displayed in the list.
    Attacks in the interface

Logging configuration

Logging is enabled by default. The log directories inside the container are:

  • /var/log/nginx — NGINX logs

  • /opt/wallarm/var/log/wallarm — Wallarm node logs

Monitoring configuration

To monitor the filtering node, use the Nagios-compatible scripts inside the container. See details in Monitoring the filtering node.

Example of running the scripts:

docker exec -it <WALLARM_NODE_CONTAINER_ID> /usr/lib/nagios/plugins/check_wallarm_tarantool_timeframe -w 1800 -c 900
docker exec -it <WALLARM_NODE_CONTAINER_ID> /usr/lib/nagios/plugins/check_wallarm_export_delay -w 120 -c 300
  • <WALLARM_NODE_CONTAINER_ID> is the ID of the running Wallarm Docker container. To get the ID, run docker ps and copy the proper ID.

Configuring the use cases

The configuration file mounted to the Docker container should describe the filtering node configuration using the available directives. Below are some commonly used filtering node configuration options: