Deploying as an Amazon Machine Image (AMI)

To deploy an Amazon Machine Image with a filtering node, perform the following steps:

  1. Log in to your Amazon Web Services account.

  2. Create a pair of SSH keys.

  3. Create a security group.

  4. Launch a filtering node instance.

  5. Connect to the filtering node instance via SSH.

  6. Connect the filtering node to the Wallarm Cloud.

  7. Set up the filtering node for using a proxy server.

  8. Set up filtering and proxying rules.

  9. Allocate instance memory for the Wallarm node.

  10. Configure logging.

  11. Restart NGINX.

1. Log in to your Amazon Web Services account

Log in to aws.amazon.com.

2. Create a pair of SSH keys

During the deployment process, you will need to connect to the virtual machine via SSH. Amazon EC2 allows you to create a named pair of public and private SSH keys for connecting to the instance.

To create a key pair, do the following:

  1. Navigate to the Key pairs tab on the Amazon EC2 dashboard.

  2. Click the Create Key Pair button.

  3. Enter a key pair name and click the Create button.

A private SSH key in PEM format will start downloading automatically. Save the key; you will need it to connect to the created instance later.
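Alternatively, you can create the key pair from the command line. A minimal sketch using the AWS CLI (the wallarm-node key name is an arbitrary example):

# create a key pair and save the private key in PEM format
aws ec2 create-key-pair --key-name wallarm-node \
    --query 'KeyMaterial' --output text > wallarm-node.pem
# restrict permissions so that the SSH client accepts the key
chmod 400 wallarm-node.pem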

Creating SSH keys

To see detailed information about creating SSH keys, see the official Amazon EC2 documentation.

3. Create a Security Group

A Security Group defines allowed and forbidden incoming and outgoing connections for virtual machines. The final list of connections depends on the protected application (e.g., allowing all of the incoming connections to the TCP/80 and TCP/443 ports).

Rules for outgoing connections from the security group

When creating a security group, all outgoing connections are allowed by default. If you restrict outgoing connections from the filtering node, make sure that it is still granted access to the Wallarm API server. The choice of the Wallarm API server depends on the Wallarm Cloud you are using:

  • If you are using the EU Cloud, your node needs to be granted access to api.wallarm.com:444.
  • If you are using the US Cloud, your node needs to be granted access to us1.api.wallarm.com:444.

The filtering node requires access to a Wallarm API server for proper operation.

Create a security group for the filtering node. To do this, proceed with the following steps:

  1. Navigate to the Security Groups tab on the Amazon EC2 dashboard and click the Create Security Group button.

  2. Enter a security group name and an optional description into the dialog window that appears.

  3. Select the required VPC.

  4. Configure incoming and outgoing connections rules on the Inbound and Outbound tabs.

  5. Click the Create button to create the security group.
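Alternatively, a hedged AWS CLI sketch (the group name, VPC ID, group ID, and open ports are placeholders; adjust them to your application):

# create the security group in the target VPC
aws ec2 create-security-group --group-name wallarm-node-sg \
    --description "Wallarm filtering node" --vpc-id vpc-0123456789abcdef0
# allow incoming HTTP and HTTPS connections
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0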

Creating a security group

To see detailed information about creating a security group, see the official Amazon EC2 documentation.

4. Launch a filtering node instance

If you deploy several Wallarm nodes

All Wallarm nodes deployed to your environment must have the same version. The postanalytics modules installed on separate servers must have the same version too.

Before installing an additional node, make sure its version matches the version of the already deployed modules. If the deployed module version is deprecated or will be deprecated soon (4.0 or lower), upgrade all modules to the latest version.

To check the version of a launched node, connect to the running instance and execute the following command:

apt list wallarm-node

To launch an instance with the filtering node, open the filtering node 3.6 image in the AWS Marketplace and subscribe to it.

When creating an instance, you need to specify the previously created security group. To do this, perform the following actions:

  1. While working with the Launch Instance Wizard, proceed to the 6. Configure Security Group instance launch step by clicking the corresponding tab.

  2. Choose the Select an existing security group option in the Assign a security group setting.

  3. Select the security group from the list that appears.

After specifying all of the required instance settings, click the Review and Launch button, make sure that the instance is configured correctly, and click the Launch button.

In the window that appears, specify the previously created key pair by performing the following actions:

  1. In the first drop-down list, select the Choose an existing key pair option.

  2. In the second drop-down list, select the name of the key pair.

  3. Make sure you have access to the private key in PEM format from the key pair you specified in the second drop-down list and tick the checkbox to confirm this.

  4. Click the Launch Instances button.

The instance will launch with the preinstalled filtering node.

To see detailed information about launching instances in AWS, see the official Amazon EC2 documentation.

5. Connect to the filtering node instance via SSH

You need to use the admin username to connect to the instance.

Using the key to connect via SSH

Use the private key in PEM format that you created earlier to connect to the instance via SSH. This must be the private key from the SSH key pair that you specified when creating an instance.
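For example, a minimal connection sketch (the key file name and the instance address are placeholders):

# the key must not be world-readable
chmod 400 wallarm-node.pem
# connect as the admin user to the instance public address
ssh -i wallarm-node.pem admin@<instance-public-dns>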

To see detailed information about the ways to connect to an instance, see the official Amazon EC2 documentation.

6. Connect the filtering node to the Wallarm Cloud

The filtering node interacts with the Wallarm Cloud. There are two ways of connecting the node to the Cloud: using the filtering node token or using your Wallarm account email and password.

Required access rights

Make sure that your Wallarm account has the Administrator or Deploy role enabled and two-factor authentication disabled; otherwise, you will not be able to connect a filtering node to the Cloud.

You can check the aforementioned parameters by navigating to the user account list in Wallarm Console.

Connecting using the filtering node token

To connect the node to the Cloud using the token, proceed with the following steps:

  1. Create a new node in the Nodes section of Wallarm Console:

    1. Click the Create new node button.
    2. Create the Wallarm node.
  2. Copy the node token.

  3. On the virtual machine, run the addcloudnode script:

    Info

    Pick the command to run depending on the Cloud you are using.

    US Cloud:

    sudo /usr/share/wallarm-common/addcloudnode -H us1.api.wallarm.com

    EU Cloud:

    sudo /usr/share/wallarm-common/addcloudnode
    
  4. Paste the filtering node token from your clipboard.

Your node will now synchronize with the Cloud every 2-4 minutes according to the default synchronization configuration.

Filtering node and Cloud synchronization configuration

After running the addcloudnode script, the /etc/wallarm/syncnode file containing the filtering node and Cloud synchronization settings will be created. You can change these settings by editing the /etc/wallarm/syncnode file.

More details on the filtering node and Wallarm Cloud synchronization configuration →

Connecting using your email and password

To connect the node to the Wallarm Cloud using your account requisites, proceed with the following steps:

  1. On the virtual machine, run the addnode script:

    Info

    Pick the command to run depending on the Cloud you are using.

    US Cloud:

    sudo /usr/share/wallarm-common/addnode -H us1.api.wallarm.com

    EU Cloud:

    sudo /usr/share/wallarm-common/addnode
    
  2. Provide your Wallarm account email and password when prompted.

API access

The API endpoint for your filtering node depends on the Cloud you are using:

  • EU Cloud: api.wallarm.com:444
  • US Cloud: us1.api.wallarm.com:444

Ensure the access is not blocked by a firewall.

Your node will now synchronize with the Cloud every 2-4 minutes according to the default synchronization configuration.

Filtering node and Cloud synchronization configuration

After running the addnode script, the /etc/wallarm/node.yaml file will be created. It contains the filtering node and Cloud synchronization settings as well as other settings required for correct Wallarm node operation. You can change the synchronization settings via the /etc/wallarm/node.yaml file and system environment variables.

More details on the filtering node and Wallarm Cloud synchronization configuration →

7. Set up the filtering node for using a proxy server

Info

This setup step is intended for users who use their own proxy server for the operation of the protected web applications.

If you do not use a proxy server, skip this step of the setup.

To configure the Wallarm node to use your proxy server, assign new values to the environment variables that define the proxy server.

Add new values of the environment variables to the /etc/environment file:

  • Add https_proxy to define a proxy for the https protocol.

  • Add http_proxy to define a proxy for the http protocol.

  • Add no_proxy to define the list of the resources proxy should not be used for.

Assign the <scheme>://<proxy_user>:<proxy_pass>@<host>:<port> string values to the https_proxy and http_proxy variables.

  • <scheme> defines the protocol used. It should match the protocol that the corresponding environment variable sets up a proxy for.

  • <proxy_user> defines the username for proxy authorization.

  • <proxy_pass> defines the password for proxy authorization.

  • <host> defines a host of the proxy server.

  • <port> defines a port of the proxy server.

Assign a "<res_1>, <res_2>, <res_3>, <res_4>, ..." array value to the no_proxy variable to define the list of resources for which no proxy should be used. The <res_1>, <res_2>, <res_3>, and <res_4> elements are IP addresses and/or domains.

Resources that need to be addressed without a proxy

Add the following IP addresses and domain to the list of the resources that have to be addressed without a proxy for the system to operate correctly: 127.0.0.1, 127.0.0.8, 127.0.0.9, and localhost.
The 127.0.0.8 and 127.0.0.9 IP addresses are used for the operation of the Wallarm filtering node.

The example /etc/environment file below demonstrates the following configuration:

  • HTTPS and HTTP requests are proxied to the 1.2.3.4 host with the 1234 port, using the admin username and the 01234 password for authorization on the proxy server.

  • Proxying is disabled for the requests sent to 127.0.0.1, 127.0.0.8, 127.0.0.9, and localhost.

https_proxy=http://admin:01234@1.2.3.4:1234
http_proxy=http://admin:01234@1.2.3.4:1234
no_proxy="127.0.0.1, 127.0.0.8, 127.0.0.9, localhost"

8. Set up filtering and proxying rules

The following files contain NGINX and filtering node settings:

  • /etc/nginx/nginx.conf defines the configuration of NGINX

  • /etc/nginx/conf.d/wallarm.conf defines the global configuration of Wallarm filtering node

  • /etc/nginx/conf.d/wallarm-status.conf defines the filtering node monitoring service configuration

You can create your own configuration files to define the operation of NGINX and Wallarm. It is recommended to create a separate configuration file with the server block for each group of domains that should be processed in the same way.

To see detailed information about working with NGINX configuration files, proceed to the official NGINX documentation.

Wallarm directives define the operation logic of the Wallarm filtering node. To see the list of Wallarm directives available, proceed to the Wallarm configuration options page.

Configuration file example

Let us suppose that you need to configure the server to work in the following conditions:

  • Only HTTP traffic is processed; no HTTPS requests are processed.

  • The following domains receive the requests: example.com and www.example.com.

  • All requests must be passed to the server 10.80.0.5.

  • All incoming requests are assumed to be less than 1 MB in size (default setting).

  • The processing of a request takes no more than 60 seconds (default setting).

  • Wallarm must operate in the monitor mode.

  • Clients access the filtering node directly, without an intermediate HTTP load balancer.

Creating a configuration file

You can create a custom NGINX configuration file (e.g. example.com.conf) or modify the default NGINX configuration file (default.conf).

When creating a custom configuration file, make sure that NGINX listens for incoming connections on a free port.

To meet the listed conditions, the contents of the configuration file must be the following:

    server {
      listen 80;
      listen [::]:80 ipv6only=on;

      # the domains for which traffic is processed
      server_name example.com; 
      server_name www.example.com;

      # turn on the monitoring mode of traffic processing
      wallarm_mode monitoring; 
      # wallarm_application 1;

      location / {
        # setting the address for request forwarding
        proxy_pass http://10.80.0.5; 
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }
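Before restarting NGINX, you can validate the resulting configuration:

sudo nginx -t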

9. Allocate instance memory for the Wallarm node

The filtering node uses the in-memory storage Tarantool.

By default, the amount of RAM allocated to Tarantool is 40% of the total instance memory.

You can change the amount of RAM allocated to Tarantool. To do so:

  1. Open the Tarantool configuration file:

    sudo vim /etc/default/wallarm-tarantool
    
  2. Set the amount of allocated RAM in GB in the SLAB_ALLOC_ARENA variable. The value can be an integer or a float (a dot . is the decimal separator); see the example after these steps.

    Learn more about the amount of required resources in the Wallarm documentation. Note that for testing environments you can allocate fewer resources than for production ones.

  3. To apply changes, restart the Tarantool daemon:

    sudo systemctl restart wallarm-tarantool
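For example, to allocate 2 GB of RAM to Tarantool (the value is illustrative, not a recommendation), the /etc/default/wallarm-tarantool file should contain the following line:

# amount of RAM allocated to Tarantool, in GB
SLAB_ALLOC_ARENA=2.0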
    

10. Configure logging

Configure the logging of the filtering node variables using NGINX. This allows you to perform quick filtering node diagnostics with the help of the NGINX log file.
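As a hedged illustration, you can extend the NGINX log format with timing variables that are useful for diagnostics; the wallarm_diag format name is arbitrary, and Wallarm-specific variables can be added as described in the Wallarm logging documentation:

# place inside the http { } block of /etc/nginx/nginx.conf
log_format wallarm_diag '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log wallarm_diag;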

11. Restart NGINX

Restart NGINX by running the following command:

sudo systemctl restart nginx

The installation is complete

The installation is now complete.

Check that the filtering node runs and filters the traffic. See Check the filtering node operation.

Default settings

A freshly installed filtering node operates in blocking mode (see the wallarm_mode directive description) by default.

This may render the Wallarm Scanner inoperable. If you plan to use the Scanner, you need to perform additional actions to make it operational.

Additional settings

The filtering node may require some additional configuration after installation.

The sections below list a few typical setups that you can apply if needed.

To get more information about other available settings, proceed to the Configuration section of the Administrator’s guide.

Configuring the display of the client's real IP

If the filtering node is deployed behind a proxy server or load balancer without any additional configuration, the request source address may not be equal to the actual IP address of the client. Instead, it may be equal to one of the IP addresses of the proxy server or the load balancer.

In this case, if you want the filtering node to receive the client's IP address as a request source address, you need to perform an additional configuration of the proxy server or the load balancer.
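One common approach for NGINX-based deployments is the realip module; a minimal sketch, assuming the balancer address 10.0.0.1 as a placeholder:

# requires ngx_http_realip_module (included in the official NGINX packages);
# trust the X-Forwarded-For header set by the balancer at 10.0.0.1
set_real_ip_from 10.0.0.1;
real_ip_header X-Forwarded-For;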

Limiting the single request processing time

Use the wallarm_process_time_limit Wallarm directive to specify the limit of the duration for processing a single request by the filtering node.

If processing the request consumes more time than specified in the directive, then the information on the error is entered into the log file and the request is marked as an overlimit_res attack.

Limiting the server reply waiting time

Use the proxy_read_timeout NGINX directive to specify the timeout for reading the proxy server reply.

If the server sends nothing during this time, the connection is closed.

Limiting the maximum request size

Use the client_max_body_size NGINX directive to specify the limit for the maximum size of the body of the client's request.

If this limit is exceeded, NGINX replies to the client with the 413 (Payload Too Large) code, also known as the Request Entity Too Large message.
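A hedged snippet combining the three limits described above in a single server block (all values are illustrative, not recommendations):

server {
    # other server-level settings (listen, server_name, locations) omitted

    # mark requests processed longer than this limit as overlimit_res;
    # illustrative value, see the Wallarm directive reference for units and default
    wallarm_process_time_limit 2000;

    # close the connection if the upstream sends nothing for 60 seconds
    proxy_read_timeout 60s;

    # reply with 413 (Payload Too Large) to requests with a body over 8 MB
    client_max_body_size 8m;
}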