
Deployment of the Wallarm node Docker image to Alibaba Cloud

This quick guide provides the steps to deploy the Docker image of the NGINX-based Wallarm node to the Alibaba Cloud platform using the Alibaba Cloud Elastic Compute Service (ECS).

Limitations of these instructions

These instructions do not cover the configuration of load balancing and node autoscaling. If you are setting up these components yourself, we recommend that you read the appropriate Alibaba Cloud documentation.

Requirements

  • Access to the Alibaba Cloud Console

  • Access to the account with the Administrator or Deploy role and two‑factor authentication disabled in Wallarm Console for the EU Cloud or US Cloud

Options for the Wallarm node Docker container configuration

The filtering node configuration parameters should be passed to the deployed Docker container in one of the following ways:

  • In the environment variables. This option allows for the configuration of only basic filtering node parameters. Most directives cannot be configured through environment variables.

  • In the mounted configuration file. This option allows full filtering node configuration via any directives. With this configuration method, environment variables with the filtering node and Wallarm Cloud connection settings are also passed to the container.

Deploying the Wallarm node Docker container configured through environment variables

To deploy the containerized Wallarm filtering node configured only through environment variables, you should create an Alibaba Cloud instance and run the Docker container in this instance. You can perform these steps via the Alibaba Cloud Console or the Alibaba Cloud CLI. These instructions use the Alibaba Cloud Console.

  1. Open the Alibaba Cloud Console → the list of services → Elastic Compute Service → Instances.

  2. Create the instance following the Alibaba Cloud instructions and the guidelines below:

    • The instance can be based on the image of any operating system.
    • Since the instance should be reachable by external resources, a public IP address or domain should be configured in the instance settings.
    • The instance settings should reflect the method used to connect to the instance.
  3. Connect to the instance by one of the methods described in the Alibaba Cloud documentation.

  4. Install the Docker packages in the instance following the instructions for an appropriate operating system.
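    For example, on an Ubuntu-based instance, the Docker Engine can be installed with Docker's official convenience script (this sketch assumes Ubuntu; for other operating systems, follow the distribution-specific instructions):

    ```shell
    # Download and run Docker's official convenience installation script
    # (suitable for test and evaluation environments)
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    # Verify that the Docker daemon is installed and running
    sudo docker info
    ```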

  5. Set instance environment variables with email and password used for authentication in the Wallarm Cloud:

    export DEPLOY_USER='<DEPLOY_USER>'
    export DEPLOY_PASSWORD='<DEPLOY_PASSWORD>'
    
    • <DEPLOY_USER>: email to the Deploy or Administrator user account in Wallarm Console.
    • <DEPLOY_PASSWORD>: password to the Deploy or Administrator user account in Wallarm Console.
  6. Run the Wallarm node Docker container by using the docker run command with the required environment variables:

    # For the EU Cloud
    docker run -d -e DEPLOY_USER=${DEPLOY_USER} -e DEPLOY_PASSWORD=${DEPLOY_PASSWORD} -e NGINX_BACKEND=<HOST_TO_PROTECT_WITH_WALLARM> -p 80:80 wallarm/node:3.6.2-1
    
    # For the US Cloud
    docker run -d -e DEPLOY_USER=${DEPLOY_USER} -e DEPLOY_PASSWORD=${DEPLOY_PASSWORD} -e NGINX_BACKEND=<HOST_TO_PROTECT_WITH_WALLARM> -e WALLARM_API_HOST='us1.api.wallarm.com' -p 80:80 wallarm/node:3.6.2-1
    
    • -p: port the filtering node listens to. The value should be the same as the instance port.
    • -e: environment variables with the filtering node configuration (available variables are listed in the table below). Please note that it is not recommended to pass the values of DEPLOY_USER and DEPLOY_PASSWORD explicitly.

      • DEPLOY_USER (required): email to the Deploy or Administrator user account in Wallarm Console.
      • DEPLOY_PASSWORD (required): password to the Deploy or Administrator user account in Wallarm Console.
      • NGINX_BACKEND (required): domain or IP address of the resource to protect with the Wallarm solution.
      • WALLARM_API_HOST (optional): Wallarm API server, us1.api.wallarm.com for the US Cloud or api.wallarm.com for the EU Cloud. By default: api.wallarm.com.
      • WALLARM_MODE (optional): node mode:
        • block to block malicious requests
        • safe_blocking to block only those malicious requests originated from graylisted IP addresses
        • monitoring to analyze but not block requests
        • off to disable traffic analyzing and processing
        By default: monitoring. Detailed description of filtration modes →
      • WALLARM_APPLICATION (optional): unique identifier of the protected application to be used in the Wallarm Cloud. The value can be a positive integer except for 0. Default value (if the variable is not passed to the container) is -1, which indicates the default application displayed in Wallarm Console → Settings → Application. The variable WALLARM_APPLICATION is supported only starting with the Docker image of version 3.4.1-1. More details on setting up applications →
      • TARANTOOL_MEMORY_GB (optional): amount of memory allocated to Tarantool. The value can be an integer or a float (a dot . is a decimal separator). By default: 0.2 gigabytes.
      • DEPLOY_FORCE (optional): replaces an existing Wallarm node with a new one if the existing Wallarm node name matches the identifier of the container you are running. The variable accepts the following values:
        • true to replace the filtering node
        • false to disable the replacement of the filtering node
        Default value (if the variable is not passed to the container) is false. The Wallarm node name always matches the identifier of the container you are running. Filtering node replacement is helpful if the Docker container identifiers in your environment are static and you are trying to run another Docker container with the filtering node (for example, a container with a new version of the image). If in this case the variable value is false, the filtering node creation process will fail.
      • NGINX_PORT (optional): sets the port that NGINX will use inside the Docker container. This allows avoiding port collisions when using this Docker container as a sidecar container within a Kubernetes pod. Default value (if the variable is not passed to the container) is 80. Syntax is NGINX_PORT='443'.
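    As a sketch combining several of the optional variables above (the values here are illustrative, not recommendations), a container running in blocking mode with 1 GB allocated to Tarantool and NGINX listening on port 8080 inside the container could be started as follows:

    ```shell
    # Illustrative example: blocking mode, 1 GB for Tarantool,
    # NGINX listening on port 8080 inside the container.
    # <HOST_TO_PROTECT_WITH_WALLARM> is a placeholder for your backend.
    docker run -d \
        -e DEPLOY_USER=${DEPLOY_USER} \
        -e DEPLOY_PASSWORD=${DEPLOY_PASSWORD} \
        -e NGINX_BACKEND=<HOST_TO_PROTECT_WITH_WALLARM> \
        -e WALLARM_MODE='block' \
        -e TARANTOOL_MEMORY_GB='1.0' \
        -e NGINX_PORT='8080' \
        -p 80:8080 \
        wallarm/node:3.6.2-1
    ```

    Note that because NGINX_PORT is changed to 8080, the -p option maps the instance port 80 to the container port 8080.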
  7. Test the filtering node operation.

Deploying the Wallarm node Docker container configured through the mounted file

To deploy the containerized Wallarm filtering node configured through environment variables and a mounted file, you should create an Alibaba Cloud instance, place the filtering node configuration file in the instance file system, and run the Docker container in this instance. You can perform these steps via the Alibaba Cloud Console or the Alibaba Cloud CLI. These instructions use the Alibaba Cloud Console.

  1. Open the Alibaba Cloud Console → the list of services → Elastic Compute Service → Instances.

  2. Create the instance following the Alibaba Cloud instructions and the guidelines below:

    • The instance can be based on the image of any operating system.
    • Since the instance should be reachable by external resources, a public IP address or domain should be configured in the instance settings.
    • The instance settings should reflect the method used to connect to the instance.
  3. Connect to the instance by one of the methods described in the Alibaba Cloud documentation.

  4. Install the Docker packages in the instance following the instructions for an appropriate operating system.

  5. Set instance environment variables with email and password used for authentication in the Wallarm Cloud:

    export DEPLOY_USER='<DEPLOY_USER>'
    export DEPLOY_PASSWORD='<DEPLOY_PASSWORD>'
    
    • <DEPLOY_USER>: email to the Deploy or Administrator user account in Wallarm Console.
    • <DEPLOY_PASSWORD>: password to the Deploy or Administrator user account in Wallarm Console.
  6. In the instance, create a directory containing a file named default with the filtering node configuration (for example, the directory can be named configs). An example of the file with minimal settings:

    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;
        #listen 443 ssl;
    
        server_name localhost;
    
        #ssl_certificate cert.pem;
        #ssl_certificate_key cert.key;
    
        root /usr/share/nginx/html;
    
        index index.html index.htm;
    
        wallarm_mode monitoring;
        # wallarm_application 1;
    
        location / {
                proxy_pass http://example.com;
                include proxy_params;
        }
    }
    

    Set of filtering node directives that can be specified in the configuration file →
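    After the container is started with this file mounted (see the next step), a quick way to confirm that NGINX accepts the configuration is to run its syntax check inside the container. The name wallarm-node below is a placeholder for your container name or ID:

    ```shell
    # Validate the mounted NGINX configuration inside a running container;
    # "wallarm-node" is a placeholder for the container name or ID
    docker exec wallarm-node nginx -t
    ```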

  7. Run the Wallarm node Docker container by using the docker run command with passed environment variables and mounted configuration file:

    # For the EU Cloud
    docker run -d -e DEPLOY_USER=${DEPLOY_USER} -e DEPLOY_PASSWORD=${DEPLOY_PASSWORD} -v <INSTANCE_PATH_TO_CONFIG>:<DIRECTORY_FOR_MOUNTING> -p 80:80 wallarm/node:3.6.2-1
    
    # For the US Cloud
    docker run -d -e DEPLOY_USER=${DEPLOY_USER} -e DEPLOY_PASSWORD=${DEPLOY_PASSWORD} -e WALLARM_API_HOST='us1.api.wallarm.com' -v <INSTANCE_PATH_TO_CONFIG>:<DIRECTORY_FOR_MOUNTING> -p 80:80 wallarm/node:3.6.2-1
    
    • <INSTANCE_PATH_TO_CONFIG>: path to the directory with the configuration file created in the previous step. For example, configs.
    • <DIRECTORY_FOR_MOUNTING>: directory of the container to mount the configuration file to. Configuration files can be mounted to the following container directories used by NGINX:

      • /etc/nginx/conf.d — common settings
      • /etc/nginx/sites-enabled — virtual host settings
      • /var/www/html — static files

      The filtering node directives should be described in the /etc/nginx/sites-enabled/default file.

    • -p: port the filtering node listens to. The value should be the same as the instance port.

    • -e: environment variables with the filtering node configuration (available variables are listed in the table below). Please note that it is not recommended to pass the values of DEPLOY_USER and DEPLOY_PASSWORD explicitly.

      • DEPLOY_USER (required): email to the Deploy or Administrator user account in Wallarm Console.
      • DEPLOY_PASSWORD (required): password to the Deploy or Administrator user account in Wallarm Console.
      • WALLARM_API_HOST (optional): Wallarm API server, us1.api.wallarm.com for the US Cloud or api.wallarm.com for the EU Cloud. By default: api.wallarm.com.
      • DEPLOY_FORCE (optional): replaces an existing Wallarm node with a new one if the existing Wallarm node name matches the identifier of the container you are running. The variable accepts the following values:
        • true to replace the filtering node
        • false to disable the replacement of the filtering node
        Default value (if the variable is not passed to the container) is false. The Wallarm node name always matches the identifier of the container you are running. Filtering node replacement is helpful if the Docker container identifiers in your environment are static and you are trying to run another Docker container with the filtering node (for example, a container with a new version of the image). If in this case the variable value is false, the filtering node creation process will fail.
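    For example, assuming the configs directory from step 6 is located in the current working directory, it could be mounted into /etc/nginx/sites-enabled as follows (the path is illustrative):

    ```shell
    # Mount the local "configs" directory (containing the file "default")
    # into the container directory NGINX reads virtual host settings from
    docker run -d \
        -e DEPLOY_USER=${DEPLOY_USER} \
        -e DEPLOY_PASSWORD=${DEPLOY_PASSWORD} \
        -v $(pwd)/configs:/etc/nginx/sites-enabled \
        -p 80:80 \
        wallarm/node:3.6.2-1
    ```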
  8. Test the filtering node operation.

Testing the filtering node operation

  1. Open the Alibaba Cloud Console → the list of services → Elastic Compute Service → Instances and copy the public IP address of the instance from the IP Address column.


    If the IP address is empty, please ensure the instance is in the Running status.

  2. Send a request with test SQLi and XSS attacks to the copied address:

    curl "http://<COPIED_IP>/?id='or+1=1--a-<script>prompt(1)</script>'"
    
  3. Open the Wallarm Console → Events section in the EU Cloud or US Cloud and ensure attacks are displayed in the list.

To view details on errors that occurred during the container deployment, connect to the instance by one of the methods described in the Alibaba Cloud documentation and review the container logs. If the instance is unavailable, please ensure that the required filtering node parameters with correct values are passed to the container.
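For example, after connecting to the instance, the container status and logs can be inspected with standard Docker commands (the container ID below is a placeholder taken from the docker ps output):

```shell
# List running and recently exited containers to find the container ID
docker ps -a

# Review the logs of the Wallarm node container;
# <CONTAINER_ID> is a placeholder from the docker ps output
docker logs <CONTAINER_ID>
```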