Deploying on Microsoft Azure

Azure Marketplace provides a deployment-ready Linux image with the filter node software pre-installed.

To deploy a filter node in the Microsoft Azure cloud, complete the following steps:

  1. Create an SSH key pair.
  2. Log in to Microsoft Azure portal.
  3. Create and run a virtual machine with the filter node software.
  4. Connect to the virtual machine via the SSH protocol.
  5. Set up the filter node for using a proxy server.
  6. Connect the filter node to the Wallarm cloud.
  7. Set up the proxying and filtering rules for the filter node.
  8. Tune the memory allocation policy for the filter node.
  9. Restart NGINX.

1. Create an SSH Key Pair

During the deployment process, you will connect to the virtual machine using the SSH protocol.

The Azure cloud supports two methods of authentication for SSH connections: a login and password or an SSH key pair. Authentication with an SSH key pair is considered more secure than login and password authentication, and Azure uses it by default.

Create an SSH RSA key pair. For example, you can use the ssh-keygen or PuTTYgen tools.

Generating SSH keys with PuTTYgen

See How to use SSH keys with Windows on Azure for more information.
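
For example, on Linux or macOS you can generate a key pair with the ssh-keygen tool; the key size below is just a common choice, and the default file paths (~/.ssh/id_rsa for the private key, ~/.ssh/id_rsa.pub for the public key you will provide to Azure) can be changed when prompted:

# ssh-keygen -t rsa -b 4096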

2. Log in to Microsoft Azure Portal

Log in to the Azure portal.

3. Create and Run a Virtual Machine with the Filter Node Software

To create a virtual machine with the filter node software, do the following:

  1. In the upper left corner of the Azure portal homepage, select Create a resource.

  2. Search for “wallarm” in the search bar.

    Resource search

  3. Select “Wallarm - Next-Gen Web Application Firewall”.

    The Wallarm product description page will open.

    Wallarm product description

    Alternatively, you can reach the same page through the Azure Marketplace. To do that, follow the link and select Get it now.

    Wallarm on Azure Marketplace

  4. Select Create to open a “Create a virtual machine” wizard.

  5. In the “Basics” tab, select the subscription from your Azure account and set the name and size of the virtual machine.

    Virtual machine wizard: basics

  6. Choose an authentication method to be used with the VM.

    If you choose an SSH key pair as the authentication method, provide a username and the public SSH key you created earlier.

    Virtual machine wizard: authentication

  7. Set up other necessary virtual machine parameters.

  8. Select Review + Create. Make sure everything is set up correctly.

    Virtual machine wizard: review

  9. Select Create to start the virtual machine deployment.

  10. After the deployment completes, select Go to resource.

    Virtual machine deployment process

See Quickstart: Create a Linux virtual machine in the Azure portal for more information.

4. Connect to the Virtual Machine via the SSH Protocol

Select Connect on the virtual machine overview screen to view the IP address and SSH port values. If necessary, change them to appropriate values.

Setting up connection parameters

Connect to the virtual machine via the SSH protocol using the private SSH key you created earlier.
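
For example, from a Linux or macOS terminal; the key path, port, username, and IP address below are placeholders for the values shown on the Connect screen and chosen during the VM creation:

# ssh -i ~/.ssh/id_rsa -p <ssh_port> <user_name>@<vm_ip_address>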

See How to use SSH keys with Windows on Azure for more information.

5. Set up the Filter Node for Using a Proxy Server

This setup step is intended for users who use their own proxy server for the operation of their protected web applications.

If you do not use a proxy server, skip this step.

To configure the Wallarm node to use your proxy server, assign new values to the environment variables that define the proxy server to be used.

Add new values of the environment variables to the /etc/environment file:

  • Add https_proxy to define a proxy for the https protocol.
  • Add http_proxy to define a proxy for the http protocol.
  • Add no_proxy to define the list of resources for which a proxy should not be used.

Assign the <scheme>://<proxy_user>:<proxy_pass>@<host>:<port> string values to the https_proxy and http_proxy variables.

  • <scheme> defines the protocol used. It should match the protocol for which the current environment variable configures the proxy.
  • <proxy_user> defines the username for proxy authorization.
  • <proxy_pass> defines the password for proxy authorization.
  • <host> defines a host of the proxy server.
  • <port> defines a port of the proxy server.

Assign a "<res_1>, <res_2>, <res_3>, <res_4>, ..." list value to the no_proxy variable, where <res_1>, <res_2>, <res_3>, and <res_4> are the IP addresses and/or domains for which a proxy should not be used.

Resources that need to be addressed without a proxy

Add the following IP addresses and domain to the list of the resources that have to be addressed without a proxy for the system to operate correctly: 127.0.0.1, 127.0.0.8, 127.0.0.9, and localhost.

The 127.0.0.8 and 127.0.0.9 IP addresses are used for the operation of the Wallarm filter node.

The example of the correct /etc/environment file contents below demonstrates the following configuration:

  • HTTPS and HTTP requests are proxied to the 1.2.3.4 host with the 1234 port, using the admin username and the 01234 password for authorization on the proxy server.
  • Proxying is disabled for the requests sent to 127.0.0.1, 127.0.0.8, 127.0.0.9, and localhost.
https_proxy=http://admin:01234@1.2.3.4:1234
http_proxy=http://admin:01234@1.2.3.4:1234
no_proxy="127.0.0.1, 127.0.0.8, 127.0.0.9, localhost"

6. Connect the Filter Node to the Wallarm Cloud

The filter node interacts with the Wallarm cloud. There are two ways to connect the node to the cloud: using the filter node token or using your Wallarm cloud account login and password.

Required access rights

Make sure that your Wallarm account has the Administrator role enabled and two-factor authentication disabled; otherwise, you will not be able to connect a filter node to the cloud.

You can check these parameters in the user list of the Wallarm console.

User list in Wallarm console

Connecting Using the Filter Node Token

To connect the node to the cloud using the token, proceed with the following steps:

  1. Create a new node on the Nodes tab of the Wallarm web interface.
    1. Click the Create new node button.
    2. In the form that appears, enter the node name into the corresponding field and select the “Cloud” type of installation from the drop-down list.
    3. Click the Create button.
  2. In the window that appears, click the Copy button next to the field with the token to add the token of the newly created filter node to your clipboard.
  3. On the virtual machine run the addcloudnode script:

    You have to pick which script to run depending on the Cloud you are using.

    EU Cloud:
    # /usr/share/wallarm-common/addcloudnode

    US Cloud:
    # /usr/share/wallarm-common/addcloudnode -H us1.api.wallarm.com

  4. Paste the filter node token from your clipboard.

Your filter node will now synchronize with the cloud every 5 seconds according to the default synchronization configuration.

Node and cloud synchronization configuration

After running the addcloudnode script, the /etc/wallarm/syncnode file containing the node and cloud synchronization settings will be created.

To learn more about the synchronization configuration file content, proceed to the link.

Connecting Using Your Cloud Account Login and Password

To connect the node to the cloud using your cloud account credentials, proceed with the following steps:

  1. On the virtual machine run the addnode script:

    You have to pick which script to run depending on the Cloud you are using.

    EU Cloud:
    # /usr/share/wallarm-common/addnode

    US Cloud:
    # /usr/share/wallarm-common/addnode -H us1.api.wallarm.com

  2. Provide your Wallarm account’s login and password when prompted.

API Access

The API endpoint for your filter node depends on the Cloud you are using: api.wallarm.com for the EU Cloud or us1.api.wallarm.com for the US Cloud.

Ensure that access to the API is not blocked by a firewall.

7. Set up the Proxying and Filtering Rules for the Filter Node

The /etc/nginx/conf.d directory contains the NGINX and Wallarm filter node configuration files.

By default, this directory contains the following configuration files:

  • The default.conf file defines the configuration of NGINX.
  • The wallarm.conf file defines the global configuration of Wallarm filter node.
  • The wallarm-status.conf file defines the Wallarm monitoring configuration.

You can create your own configuration files to define the operation of NGINX and Wallarm. It is recommended to create a separate configuration file with a server block for each group of domains that should be processed in the same way.

To see detailed information about working with NGINX configuration files, proceed to the official NGINX documentation.

Wallarm directives define the operation logic of the Wallarm filter node. To see the list of Wallarm directives available, proceed to the Wallarm configuration options page.

A Configuration File Example

Let us suppose that you need to configure the server to work under the following conditions:

  • Only HTTP traffic is processed. There are no HTTPS requests processed.
  • The following domains receive the requests: example.com and www.example.com.
  • All requests must be passed to the server 10.80.0.5.
  • All incoming requests are assumed to be smaller than 1 MB in size (default setting).
  • The processing of a request takes no more than 60 seconds (default setting).
  • Wallarm must operate in the monitoring mode.
  • Clients access the filter node directly, without an intermediate HTTP load balancer.

Creating a configuration file

You can create a custom NGINX configuration file (e.g. example.com.conf) or modify the default NGINX configuration file (default.conf).

When creating a custom configuration file, make sure that NGINX listens for incoming connections on a free port.

To meet the listed conditions, the contents of the configuration file must be the following:


    server {
      listen 80;
      listen [::]:80 ipv6only=on;

      # the domains for which traffic is processed
      server_name example.com; 
      server_name www.example.com;

      # turn on the monitoring mode of traffic processing
      wallarm_mode monitoring; 
      # wallarm_instance 1;

      location / {
        # setting the address for request forwarding
        proxy_pass http://10.80.0.5; 
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }
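
Before restarting NGINX (step 9 below), you can optionally check the configuration for syntax errors; this uses standard NGINX tooling and is not specific to Wallarm:

# nginx -t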

8. Tune the Memory Allocation Policy for the Filter Node

The filter node uses Tarantool to store data in memory. By default, the amount of RAM allocated to Tarantool is set to 75% of the total virtual machine memory.

You can change this value if needed. To do so, perform the following steps:

  1. Open the configuration file /etc/default/wallarm-tarantool:

    # nano /etc/default/wallarm-tarantool
    
  2. Set the amount of allocated RAM in GB with the SLAB_ALLOC_ARENA parameter.

    For example, to allocate 24 GB of memory to Tarantool, set the parameter as follows:

    SLAB_ALLOC_ARENA=24
    
  3. Save your changes and exit the editor.

  4. Restart the Tarantool daemon:

    # systemctl restart wallarm-tarantool
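
To make sure the daemon has started correctly after the restart, you can check its status with standard systemd tooling:

# systemctl status wallarm-tarantool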
    

9. Restart NGINX

Restart NGINX by running the following command:

# systemctl restart nginx

The Deployment Is Completed

You have completed the deployment process successfully.

Check that the filter node is operating normally and proxying traffic through itself.

See Checking the filter node operation for more information.

Additional Settings

The filter node may require some additional configuration after installation.

The sections below list a few typical settings that you can apply if needed.

To get more information about other available settings, proceed to the “Configuration” section of the Administrator’s Guide.

Configuring the Display of the Client's Real IP

If the filter node is deployed behind a proxy server or load balancer without any additional configuration, the request source address may not be equal to the actual IP address of the client. Instead, it may be equal to one of the IP addresses of the proxy server or the load balancer.

In this case, if you want the filter node to receive the client's IP address as a request source address, you need to perform an additional configuration of the proxy server or the load balancer.
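
As a sketch of the filter node side of such a setup, you can use the NGINX real_ip module directives; the address range and header name below are assumptions and must match what your proxy server or load balancer actually sends:

    # in the http or server block of the NGINX configuration:
    # trust the balancer's addresses and take the client IP from its header
    set_real_ip_from 10.0.0.0/8;
    real_ip_header X-Real-IP;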

Configuring Extended Logging

You can configure NGINX to log additional filter node variables, which allows for much faster filter node diagnostics with the help of the NGINX log file.
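
A minimal sketch of such a configuration is shown below; the log format name and the set of variables are assumptions (only standard NGINX variables are used here), so extend it with the Wallarm-specific variables you need as described on the Wallarm configuration options page:

    # in the http block: define an extended log format
    log_format extended '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent rt=$request_time req_id=$request_id';

    # in the server block: write the access log using that format
    access_log /var/log/nginx/access-extended.log extended;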

Adding Wallarm Scanner Addresses to the Whitelist

The Wallarm scanner checks the resources of your company for vulnerabilities. Scanning is conducted from a dedicated list of IP addresses that depends on the type of Wallarm Cloud you are using.

If you are using the Wallarm scanner, you need to add the Wallarm scanner IP addresses to the whitelists of your network security software (such as firewalls, intrusion detection systems, etc.).

Limiting the Single Request Processing Time

Use the wallarm_process_time_limit Wallarm directive to specify the limit of the duration for processing a single request by the filter node.

If processing the request consumes more time than specified in the directive, then the information on the error is entered into the log file and the request is marked as an overlimit_res attack.
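
For example, the directive can be set in the NGINX configuration as follows; the value below is illustrative only, so check the Wallarm configuration options page for the exact units and default:

    server {
        # limit the time the filter node may spend processing a single request
        wallarm_process_time_limit 2000;
    }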

Limiting the Server Reply Waiting Time

Use the proxy_read_timeout NGINX directive to specify the timeout for reading the proxy server reply.

If the server sends nothing during this time, the connection is closed.
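
For example, extending the location block from the configuration example above (the 60-second value matches the NGINX default):

    location / {
        proxy_pass http://10.80.0.5;
        # close the connection if the upstream sends nothing for 60 seconds
        proxy_read_timeout 60s;
    }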

Limiting the Maximum Request Size

Use the client_max_body_size NGINX directive to specify the limit for the maximum size of the body of the client's request.

If this limit is exceeded, NGINX replies to the client with the 413 (Payload Too Large) code, also known as the Request Entity Too Large message.
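
For example, to keep the 1 MB limit from the configuration example above and state it explicitly:

    server {
        # reject request bodies larger than 1 MB with a 413 response
        client_max_body_size 1m;
    }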
