Deployment of the Wallarm node Docker image to GCP¶
This quick guide provides the steps to deploy the Docker image of the NGINX-based Wallarm node to the Google Cloud Platform using the Google Compute Engine (GCE) component.
Limitations of these instructions

These instructions do not cover the configuration of load balancing and node autoscaling. If setting up these components yourself, we recommend that you read the appropriate GCP documentation.
Requirements¶
- Active GCP account
- Compute Engine API enabled
- Access to an account with the Administrator role in Wallarm Console for the US Cloud or EU Cloud
Options for the Wallarm node Docker container configuration¶
The filtering node configuration parameters should be passed to the deployed Docker container in one of the following ways:

- In environment variables. This option allows configuring only basic filtering node parameters; most directives cannot be configured through environment variables.
- In a mounted configuration file. This option allows full filtering node configuration via any directives. With this configuration method, environment variables with the filtering node and Wallarm Cloud connection settings are also passed to the container.
Deploying the Wallarm node Docker container configured through environment variables¶
To deploy the containerized Wallarm filtering node configured only through environment variables, you can use the GCP Console or gcloud CLI. In these instructions, gcloud CLI is used.
1. Open Wallarm Console → Nodes in the US Cloud or EU Cloud and create a node of the Wallarm node type.

2. Copy the generated token.

3. Set the local environment variable with the Wallarm node token to be used to connect the instance to the Wallarm Cloud:

    ```bash
    export WALLARM_API_TOKEN='<WALLARM_API_TOKEN>'
    ```
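Before running the deployment command, it can help to confirm the variable is actually set in the current shell. A minimal sketch (the `TOKEN_STATUS` name is just illustrative):

```shell
# Optional sanity check: report whether the token variable is set.
# WALLARM_API_TOKEN is expected from the export in the previous step.
if [ -n "${WALLARM_API_TOKEN:-}" ]; then
    TOKEN_STATUS="set"
else
    TOKEN_STATUS="missing"
fi
echo "WALLARM_API_TOKEN is ${TOKEN_STATUS}"
```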
4. Create the instance with the running Docker container by using the `gcloud compute instances create-with-container` command.

    For the US Cloud:

    ```bash
    gcloud compute instances create-with-container <INSTANCE_NAME> \
        --zone <DEPLOYMENT_ZONE> \
        --tags http-server \
        --container-env WALLARM_API_TOKEN=${WALLARM_API_TOKEN} \
        --container-env NGINX_BACKEND=<HOST_TO_PROTECT_WITH_WALLARM> \
        --container-env WALLARM_API_HOST=us1.api.wallarm.com \
        --container-image registry-1.docker.io/wallarm/node:4.4.0-1
    ```

    For the EU Cloud:

    ```bash
    gcloud compute instances create-with-container <INSTANCE_NAME> \
        --zone <DEPLOYMENT_ZONE> \
        --tags http-server \
        --container-env WALLARM_API_TOKEN=${WALLARM_API_TOKEN} \
        --container-env NGINX_BACKEND=<HOST_TO_PROTECT_WITH_WALLARM> \
        --container-image registry-1.docker.io/wallarm/node:4.4.0-1
    ```

    - `<INSTANCE_NAME>`: name of the instance, for example `wallarm-node`.
    - `--zone`: zone that will host the instance.
    - `--tags`: instance tags. Tags are used to configure the availability of the instance for other resources. In the present case, the tag `http-server` opening port 80 is assigned to the instance.
    - `--container-image`: link to the Docker image of the filtering node.
    - `--container-env`: environment variables with the filtering node configuration (available variables are listed in the table below). Please note that it is not recommended to pass the value of `WALLARM_API_TOKEN` explicitly.

    | Environment variable | Description | Required |
    | --- | --- | --- |
    | `WALLARM_API_TOKEN` | Wallarm node token. You can use one token for several installations regardless of the selected platform; this allows logical grouping of node instances in the Wallarm Console UI (for example, several Wallarm nodes deployed to a development environment, each node on its own machine owned by a certain developer). | Yes |
    | `NGINX_BACKEND` | Domain or IP address of the resource to protect with the Wallarm solution. | Yes |
    | `WALLARM_API_HOST` | Wallarm API server: `us1.api.wallarm.com` for the US Cloud, `api.wallarm.com` for the EU Cloud. By default: `api.wallarm.com`. | No |
    | `WALLARM_MODE` | Node mode: `block` to block malicious requests, `safe_blocking` to block only those malicious requests originated from graylisted IP addresses, `monitoring` to analyze but not block requests, `off` to disable traffic analyzing and processing. By default: `monitoring`. Detailed description of filtration modes → | No |
    | `WALLARM_APPLICATION` | Unique identifier of the protected application to be used in the Wallarm Cloud. The value can be a positive integer except for `0`. Default value (if the variable is not passed to the container) is `-1`, which indicates the default application displayed in Wallarm Console → Settings → Application. More details on setting up applications → | No |
    | `TARANTOOL_MEMORY_GB` | Amount of memory allocated to Tarantool. The value can be an integer or a float (a dot `.` is the decimal separator). By default: 0.2 gigabytes. | No |
    | `NGINX_PORT` | Sets a port that NGINX will use inside the Docker container. Starting from the Docker image `4.0.2-1`, the `wallarm-status` service automatically runs on the same port as NGINX. Default value (if the variable is not passed to the container) is `80`. Syntax is `-e NGINX_PORT='443'`. | No |
    | `DISABLE_IPV6` | The variable with any value except for an empty one deletes the `listen [::]:80 default_server ipv6only=on;` line from the NGINX configuration file, which stops NGINX from processing IPv6 connections. If the variable is not specified explicitly or has an empty value `""`, NGINX processes both IPv6 and IPv4 connections. | No |

    All parameters of the `gcloud compute instances create-with-container` command are described in the GCP documentation.

5. Open the GCP Console → Compute Engine → VM instances and ensure the instance is displayed in the list.
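The same check can also be done from the CLI. A sketch using `gcloud compute instances describe`, assuming the instance name `wallarm-node` (replace the zone placeholder with the one used at creation time):

```shell
# Show the instance status and its external IP.
gcloud compute instances describe wallarm-node \
    --zone <DEPLOYMENT_ZONE> \
    --format='value(status, networkInterfaces[0].accessConfigs[0].natIP)'
```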
Deploying the Wallarm node Docker container configured through the mounted file¶
To deploy the containerized Wallarm filtering node configured through environment variables and a mounted file, you should create the instance, place the filtering node configuration file in the instance file system, and run the Docker container in the instance. You can perform these steps via the GCP Console or gcloud CLI. In these instructions, gcloud CLI is used.
1. Open Wallarm Console → Nodes in the US Cloud or EU Cloud and create a node of the Wallarm node type.

2. Copy the generated token.
3. Create the instance based on any operating system image from the Compute Engine registry by using the `gcloud compute instances create` command:

    ```bash
    gcloud compute instances create <INSTANCE_NAME> \
        --image <PUBLIC_IMAGE_NAME> \
        --zone <DEPLOYMENT_ZONE> \
        --tags http-server
    ```

    - `<INSTANCE_NAME>`: name of the instance.
    - `--image`: name of the operating system image from the Compute Engine registry. The created instance will be based on this image and will later be used to run the Docker container. If this parameter is omitted, the instance will be based on the Debian 10 image.
    - `--zone`: zone that will host the instance.
    - `--tags`: instance tags. Tags are used to configure the availability of the instance for other resources. In the present case, the tag `http-server` opening port 80 is assigned to the instance.

    All parameters of the `gcloud compute instances create` command are described in the GCP documentation.

4. Open the GCP Console → Compute Engine → VM instances and ensure the instance is displayed in the list and is in the RUNNING status.

5. Connect to the instance via SSH following the GCP instructions.
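One option is the gcloud CLI; a sketch, assuming the instance name and zone used at creation:

```shell
# Open an SSH session to the instance.
gcloud compute ssh <INSTANCE_NAME> --zone <DEPLOYMENT_ZONE>
```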
6. Install the Docker packages in the instance following the instructions for the appropriate operating system.
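For example, on the default Debian image, a minimal sketch using the distribution's `docker.io` package (package names may differ on other operating systems):

```shell
# Install Docker from the Debian repositories.
sudo apt-get update
sudo apt-get install -y docker.io
# Verify the installation.
sudo docker --version
```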
7. Set the local environment variable with the Wallarm node token to be used to connect the instance to the Wallarm Cloud:

    ```bash
    export WALLARM_API_TOKEN='<WALLARM_API_TOKEN>'
    ```
8. In the instance, create a directory with the file `default` containing the filtering node configuration (for example, the directory can be named `configs`). An example of the file with minimal settings:

    ```
    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;
        #listen 443 ssl;

        server_name localhost;

        #ssl_certificate cert.pem;
        #ssl_certificate_key cert.key;

        root /usr/share/nginx/html;
        index index.html index.htm;

        wallarm_mode monitoring;
        # wallarm_application 1;

        location / {
            proxy_pass http://example.com;
            include proxy_params;
        }
    }
    ```

    Set of filtering node directives that can be specified in the configuration file →
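The directory and file from this step can also be created in one command with a heredoc. A sketch using the relative path `configs` and the minimal settings above (adjust paths and the upstream to your setup):

```shell
# Create the configuration directory and a minimal `default` file.
mkdir -p configs
cat > configs/default <<'EOF'
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html index.htm;
    wallarm_mode monitoring;
    location / {
        proxy_pass http://example.com;
        include proxy_params;
    }
}
EOF
echo "wrote configs/default"
```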
9. Run the Wallarm node Docker container by using the `docker run` command with the passed environment variables and mounted configuration file.

    For the US Cloud:

    ```bash
    docker run -d -e WALLARM_API_TOKEN=${WALLARM_API_TOKEN} -e WALLARM_API_HOST='us1.api.wallarm.com' -v <INSTANCE_PATH_TO_CONFIG>:<DIRECTORY_FOR_MOUNTING> -p 80:80 wallarm/node:4.4.0-1
    ```

    For the EU Cloud:

    ```bash
    docker run -d -e WALLARM_API_TOKEN=${WALLARM_API_TOKEN} -v <INSTANCE_PATH_TO_CONFIG>:<DIRECTORY_FOR_MOUNTING> -p 80:80 wallarm/node:4.4.0-1
    ```

    - `<INSTANCE_PATH_TO_CONFIG>`: path to the directory with the configuration file created in the previous step, for example `configs`.
    - `<DIRECTORY_FOR_MOUNTING>`: directory of the container to mount the configuration file to. Configuration files can be mounted to the following container directories used by NGINX:
        - `/etc/nginx/conf.d` — common settings
        - `/etc/nginx/sites-enabled` — virtual host settings
        - `/var/www/html` — static files

        The filtering node directives should be described in the `/etc/nginx/sites-enabled/default` file.
    - `-p`: port the filtering node listens to. The value should be the same as the instance port.
    - `-e`: environment variables with the filtering node configuration (available variables are listed in the table below). Please note that it is not recommended to pass the value of `WALLARM_API_TOKEN` explicitly.

    | Environment variable | Description | Required |
    | --- | --- | --- |
    | `WALLARM_API_TOKEN` | Wallarm node token. You can use one token for several installations regardless of the selected platform; this allows logical grouping of node instances in the Wallarm Console UI (for example, several Wallarm nodes deployed to a development environment, each node on its own machine owned by a certain developer). | Yes |
    | `WALLARM_API_HOST` | Wallarm API server: `us1.api.wallarm.com` for the US Cloud, `api.wallarm.com` for the EU Cloud. By default: `api.wallarm.com`. | No |
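After starting the container, you can confirm it is running and inspect its startup logs from the instance. A sketch, assuming the image tag used above:

```shell
# List running containers started from the node image.
docker ps --filter ancestor=wallarm/node:4.4.0-1
# Show the logs of the first matching container.
docker logs $(docker ps -q --filter ancestor=wallarm/node:4.4.0-1)
```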
Testing the filtering node operation¶
1. Open the GCP Console → Compute Engine → VM instances and copy the instance IP address from the External IP column.

    If the IP address is empty, please ensure the instance is in the RUNNING status.

2. Send a request with a test Path Traversal attack to the copied address:

    ```bash
    curl http://<COPIED_IP>/etc/passwd
    ```

3. Open Wallarm Console → Events in the US Cloud or EU Cloud and make sure the attack is displayed in the list.

Details on errors that occurred during the container deployment are displayed in the View logs instance menu. If the instance is unavailable, please ensure the required filtering node parameters with correct values are passed to the container.