Deploying Wallarm OOB from the Docker Image¶
This article provides instructions for deploying Wallarm OOB using the NGINX-based Docker image. The solution described here is designed to analyze traffic mirrored by a web or proxy server.
Use cases¶
Among all supported Wallarm deployment options, the NGINX-based Docker image is recommended in the following use cases:
- If your organization runs a Docker-based infrastructure, the Wallarm Docker image is the ideal choice. It integrates effortlessly into your existing setup, whether you use a microservice architecture running on AWS ECS, Alibaba ECS, or similar services. This solution also suits teams on virtual machines that want more streamlined management through Docker containers.
- If you require fine-grained control over each container, the Docker image excels. It affords a greater level of resource isolation than is typically possible with traditional VM-based deployments.
For more information on running Wallarm's NGINX-based Docker image on popular public cloud container orchestration services, refer to our guides: AWS ECS, GCP GCE, Azure Container Instances, Alibaba ECS.
Requirements¶
- Docker installed on your host system
- Access to https://hub.docker.com/r/wallarm/node to download the Docker image. Please ensure the access is not blocked by a firewall
- Access to an account with the Administrator role in Wallarm Console in the US Cloud or EU Cloud
- Access to https://us1.api.wallarm.com if working with the US Wallarm Cloud or to https://api.wallarm.com if working with the EU Wallarm Cloud. Please ensure the access is not blocked by a firewall
- Access to the IP addresses below for downloading updates to attack detection rules and API specifications, as well as retrieving precise IPs for your allowlisted, denylisted, or graylisted countries, regions, or data centers
1. Configure traffic mirroring¶
Configure your environment to mirror incoming traffic to an instance running the Wallarm node you are deploying. For configuration details, we recommend referring to the documentation on the solution you are going to use to produce the traffic mirror (web server, proxy server, etc.). That documentation includes example configurations for NGINX, Traefik, and Envoy.
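For reference, below is a minimal sketch of traffic mirroring configured on an NGINX web server. The Wallarm node address (10.1.1.1), the upstream name, and the mirror location name are assumptions to adapt to your environment; the forwarded headers match the ones read by the node configuration in step 2.

```nginx
# Application server block that mirrors incoming traffic to the Wallarm node
location / {
    # Send a copy of each request to the internal mirror location
    mirror /wallarm-mirror;
    mirror_request_body on;
    proxy_pass http://backend;
}

location = /wallarm-mirror {
    internal;
    # 10.1.1.1 is a placeholder for the Wallarm node address
    proxy_pass http://10.1.1.1$request_uri;
    proxy_set_header X-SERVER-ADDR $server_addr;
    proxy_set_header X-SERVER-PORT $server_port;
    proxy_set_header X-Forwarded-For $realip_remote_addr;
    proxy_set_header Host $http_host;
}
```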
2. Prepare a configuration file for mirrored traffic analysis and more¶
To enable Wallarm nodes to analyze mirrored traffic, you need to configure additional settings in a separate file and mount it to the Docker container. The default configuration file that needs modification is located at `/etc/nginx/sites-enabled/default` within the Docker image.
In this file, you need to specify the Wallarm node configuration to process mirrored traffic and any other required settings. Follow these instructions to do so:
- Create the local NGINX configuration file named `default` with the following contents:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    #listen 443 ssl;

    server_name localhost;

    #ssl_certificate cert.pem;
    #ssl_certificate_key cert.key;

    root /usr/share/nginx/html;
    index index.html index.htm;

    wallarm_force server_addr $http_x_server_addr;
    wallarm_force server_port $http_x_server_port;
    # Change 222.222.222.22 to the address of the mirroring server
    set_real_ip_from 222.222.222.22;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    wallarm_force response_status 0;
    wallarm_force response_time 0;
    wallarm_force response_size 0;

    wallarm_mode monitoring;

    location / {
        proxy_pass http://example.com;
        include proxy_params;
    }
}
```
- The `set_real_ip_from` and `real_ip_header` directives are required to have Wallarm Console display the IP addresses of the attackers.
- The `wallarm_force response_*` directives are required to disable analysis of all requests except for copies received from the mirrored traffic.
- The `wallarm_mode` directive sets the traffic analysis mode. Since mirrored malicious requests cannot be blocked, the only mode the node accepts is `monitoring`. In-line deployments also support the safe blocking and blocking modes, but even if you set `wallarm_mode` to a value other than `monitoring`, the node continues to monitor traffic and only records malicious requests (unless the mode is set to `off`).
- Specify any other required Wallarm directives. You can refer to the Wallarm configuration parameters documentation and the configuration use cases to understand which settings would be useful for you.
- If needed, modify other NGINX settings to customize its behavior. Consult the NGINX documentation for assistance.
You can also mount other files to the following container directories if necessary:
- `/etc/nginx/conf.d` — common settings
- `/etc/nginx/sites-enabled` — virtual host settings
- `/opt/wallarm/usr/share/nginx/html` — static files
3. Get a token to connect the node to the Cloud¶
Get a Wallarm token of the appropriate type: a node token or an API token.
4. Run the Docker container with the node¶
Run the Docker container with the node, mounting the configuration file you have just created. An example command is provided after the table below.
The following environment variables should be passed to the container:
| Environment variable | Description | Required |
| --- | --- | --- |
| `WALLARM_API_TOKEN` | Wallarm node or API token. | Yes |
| `WALLARM_API_HOST` | Wallarm API server: `us1.api.wallarm.com` for the US Cloud, `api.wallarm.com` for the EU Cloud. By default: `api.wallarm.com`. | No |
| `WALLARM_LABELS` | Available starting from node 4.6. Works only if `WALLARM_API_TOKEN` is set to an API token. Setting `WALLARM_LABELS="group=<GROUP>"` will place the node instance into the `<GROUP>` instance group (the group is created if it does not exist). | Yes (for API tokens) |
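For example, a run command could look like the following sketch. The host path to the configuration file, the token value, the group label, and the image tag are placeholders to replace with your own values; the US Cloud API host is assumed.

```bash
# Run the Wallarm node container, mounting the configuration file from step 2
docker run -d \
  -e WALLARM_API_TOKEN='<NODE_OR_API_TOKEN>' \
  -e WALLARM_API_HOST='us1.api.wallarm.com' \
  -e WALLARM_LABELS='group=<GROUP>' \
  -v /path/to/default:/etc/nginx/sites-enabled/default \
  -p 80:80 \
  wallarm/node:<TAG>
```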
5. Testing Wallarm node operation¶
- Send a request with a test Path Traversal attack to the address of a protected resource, as shown in the example after this list.
- Open Wallarm Console → Attacks section in the US Cloud or EU Cloud and make sure the attack is displayed in the list.
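A test request for the first step could look like this; the address is a placeholder for the resource whose traffic is mirrored to the node:

```bash
# Test Path Traversal attack; replace the address with your protected resource
curl http://<MIRRORED_RESOURCE_ADDRESS>/etc/passwd
```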
Logging configuration¶
Logging is enabled by default. The log directories are:
- `/var/log/nginx` — NGINX logs
- `/opt/wallarm/var/log/wallarm` — Wallarm node logs
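For example, to follow the NGINX error log from the host (assuming the standard error.log file name; `<WALLARM_NODE_CONTAINER_ID>` is the ID of the running container, as in the monitoring section below):

```bash
# Stream the NGINX error log from the running node container
docker exec -it <WALLARM_NODE_CONTAINER_ID> tail -f /var/log/nginx/error.log
```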
Monitoring configuration¶
To monitor the filtering node, there are Nagios‑compatible scripts inside the container. See details in Monitoring the filtering node.
Example of running the scripts:
```bash
docker exec -it <WALLARM_NODE_CONTAINER_ID> /usr/lib/nagios/plugins/check_wallarm_tarantool_timeframe -w 1800 -c 900
docker exec -it <WALLARM_NODE_CONTAINER_ID> /usr/lib/nagios/plugins/check_wallarm_export_delay -w 120 -c 300
```
`<WALLARM_NODE_CONTAINER_ID>` is the ID of the running Wallarm Docker container. To get the ID, run `docker ps` and copy the proper ID.
Configuring the use cases¶
The configuration file mounted to the Docker container should describe the filtering node configuration in the available directives. Below are some commonly used filtering node configuration options:
- Using the balancer of the proxy server behind the filtering node
- Limiting the single request processing time in the `wallarm_process_time_limit` directive
- Limiting the server reply waiting time in the NGINX `proxy_read_timeout` directive
- Limiting the maximum request size in the NGINX `client_max_body_size` directive
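As an illustration, the limiting directives from this list could be added to the server context of the mounted configuration file as follows. The values are illustrative placeholders, not recommendations, and the `wallarm_process_time_limit` value is assumed to be in milliseconds:

```nginx
# Example tuning directives; adjust the values to your traffic profile
wallarm_process_time_limit 2000;   # maximum single request processing time
proxy_read_timeout 60s;            # maximum time to wait for the upstream reply
client_max_body_size 10m;          # maximum accepted request body size
```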