The filter node can be installed as a VMware vApp.
A filter node installed as a VMware vApp is functionally identical to the other deployment options.
The deployment must meet the following requirements:
- vApp support
- vCenter version 5.5 or higher
- ESXi version 5.5 or higher
- Access to the vCenter console with the permission to deploy OVF templates
- Access to the Wallarm Cloud with the permission to create a node
- Go to the link that corresponds to the Cloud you are using:
- Click Nodes.
- Click OVF-template for VMware.
The path to the OVF template will be copied to the clipboard. The link will be valid for 24 hours.
The node's access credentials are replaced automatically. If the filter node is already in operation, it will lose the ability to interact with the Wallarm API.
- Go to the vCenter through the vSphere Client or the vSphere Web Client.
- Click File.
- Click Deploy OVF template.
- Paste the link to the OVF template from the clipboard.
- On the Name and Location step, specify the name for the virtual machine to be displayed in the vSphere Client interface.
On the Deployment configuration step, choose a configuration:
- up to 10mbps is designed for test installations. It uses 1 virtual processor and 512 MB of memory.
- 10-50mbps is designed for up to 50 Mbit/s of traffic. It uses 4 virtual processors and 16 GB of memory.
- 50-100mbps is designed for up to 100 Mbit/s of traffic. It uses 8 virtual processors and 32 GB of memory.
On the Properties step, configure the filter node parameters. These parameters are applied only on the first start:
- Hostname: the server name.
- HTTP hosts: a list of domains to be processed. Requests whose Host header value is not included in the list will be blocked and not forwarded to the backend.
- HTTP backends: a list of IP addresses to which the requests will be forwarded.
On the first boot of the virtual machine, the server console will prompt you to set the root password.
After the first boot, you need to do one of the following:
- Wait up to 15 minutes for the filter node-specific files to be downloaded.
- Manually run
The filter node may require some additional configuration after installation.
The document below lists a few of the typical setups that you can apply if needed.
To get more information about other available settings, proceed to the “Configuration” section of the Administrator’s Guide.
If the filter node is deployed behind a proxy server or load balancer without any additional configuration, the request source address may not be equal to the actual IP address of the client. Instead, it may be equal to one of the IP addresses of the proxy server or the load balancer.
In this case, if you want the filter node to receive the client's real IP address as the request source address, you need to additionally configure the proxy server or the load balancer.
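As an illustration, if the proxy or load balancer passes the client address in the X-Forwarded-For header, the NGINX real_ip module on the filter node can restore the original source address. The proxy address 10.0.0.1 below is an assumption for the example; replace it with the address of your proxy server or load balancer:

```
# Trust the proxy/load balancer at this (assumed) address
set_real_ip_from 10.0.0.1;
# Take the client address from the X-Forwarded-For header
real_ip_header X-Forwarded-For;
# Walk the header from the right, skipping trusted addresses
real_ip_recursive on;
```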
You can configure NGINX to log additional variables, which makes filter node diagnostics much faster with the help of the NGINX log file.
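For example, a custom log format can record request timing, which is often enough to localize a slow request. This sketch uses only standard NGINX variables, not Wallarm-specific ones, and the format name and log path are placeholders:

```
http {
    # Illustrative log format adding request timing for diagnostics
    log_format diagnostics '$remote_addr "$request" $status '
                           'req_time=$request_time '
                           'upstream_time=$upstream_response_time';
    access_log /var/log/nginx/diagnostics.log diagnostics;
}
```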
The Wallarm scanner checks the resources of your company for vulnerabilities. Scanning is conducted using IP addresses from one of the following lists (depending on the type of Wallarm Cloud you are using):
If you are using the Wallarm scanner, you need to configure the whitelists on your network perimeter security software (such as firewalls, intrusion detection systems, etc.) to contain the Wallarm scanner IP addresses.
wallarm_process_time_limit: a Wallarm directive that specifies the maximum duration for processing a single request by the filter node.
If processing the request takes longer than specified in the directive, the error is recorded in the log file and the request is marked as an attack.
proxy_read_timeout: an NGINX directive that specifies the timeout for reading a reply from the proxied server.
If the server sends nothing during this time, the connection is closed.
client_max_body_size: an NGINX directive that sets the maximum allowed size of the client request body.
If this limit is exceeded, NGINX replies to the client with the 413 (Payload Too Large) code, also known as the Request Entity Too Large error.
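A minimal sketch combining the three directives above in an NGINX configuration; the values shown are illustrative examples, not recommendations:

```
server {
    # Close the connection if the proxied server sends nothing for 60 seconds
    proxy_read_timeout 60s;
    # Reply with 413 (Payload Too Large) to request bodies over 8 MB
    client_max_body_size 8m;
    # Mark the request if the filter node spends too long processing it
    # (wallarm_process_time_limit is a Wallarm directive; a value
    # in milliseconds is an assumption -- check the Wallarm documentation)
    wallarm_process_time_limit 1000;
}
```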