Wallarm On-Premise Deployment

Wallarm offers an on-premise solution designed for partners, large enterprises, and any organization looking for a comprehensive on-premise security system. This offering allows you to integrate Wallarm's security infrastructure directly into your own environment. This article describes how to access and make use of this offering.

Contact for any inquiries

Please contact Wallarm's sales team for any questions or requests regarding on-premise deployment.

The Wallarm architecture is built around two main components:

  • Filtering node: Deployed within your infrastructure, allowing for flexible deployment options to suit your needs.

  • Wallarm Cloud: Traditionally hosted externally by Wallarm. In the on-premise deployment model, Wallarm Cloud is deployed within your own infrastructure. Because the Cloud consists of many services, this would normally require substantial infrastructure work; Wallarm simplifies the process by providing a script that automatically starts all required services.

On-premise deployment

Deploying Wallarm Cloud on-premise

For on-premise deployment, you need to deploy the Wallarm Cloud on your infrastructure. Wallarm simplifies this process by providing a script that deploys all required Cloud services in a couple of steps, including both the backend and frontend components (Wallarm Console UI).

Requirements

To deploy Wallarm Cloud on-premise, you need to prepare a compute instance meeting the criteria below.

Operating system

  • Ubuntu LTS 18.04, 20.04, 22.04

  • Debian 11.x, 12.x

  • Red Hat Enterprise Linux 8.x

System requirements

The server should be a dedicated, standalone unit; it is advisable to allocate its resources exclusively to Wallarm. Resource requirements vary based on the expected incoming traffic load.

For less than 1 billion requests per month:

  • 16+ cores

  • 48 GB+ memory

  • 300 GB of SSD root storage (HDDs are inadequate due to their slow performance; NVMe is acceptable but not necessary). Ensure that the server configuration includes only the default operating system mounts to the root directory and, optionally, the boot directory (/boot). Avoid setting up any additional disk volumes or storage partitions.

  • Additional 100 GB of storage for every 100 million requests per month, to accommodate data for 1 year

For more than 1 billion requests per month:

  • 32+ cores

  • 80 GB+ memory (120 GB recommended)

  • 500 GB of SSD root storage (HDDs are inadequate due to their slow performance; NVMe is acceptable but not necessary). Ensure that the server configuration includes only the default operating system mounts to the root directory and, optionally, the boot directory (/boot). Avoid setting up any additional disk volumes or storage partitions.

  • Additional 100 GB of storage for every 100 million requests per month, to accommodate data for 1 year

Network requirements

  • Allowed outgoing connections to https://onprem.wallarm.com on ports 80 and 443 for downloading the license key and the installation/upgrade packages. This domain operates from a static IP address, and the instance must be able to resolve it via DNS (a quick reachability and certificate check is sketched after this list).

  • A 3rd to 5th level DNS wildcard record configured for the instance, e.g. *.wallarm.companyname.tld. Ensure that the instance is reachable via these DNS names from every Wallarm filtering node and every client that needs access. Depending on your needs, you may restrict access to your VPN only or expose the instance to any external browser and IP address; configure access accordingly.

  • A valid SSL/TLS wildcard certificate (and key) issued from either a trusted or an internal CA. All filtering node instances and browsers must recognize this SSL/TLS certificate/key pair as trusted.
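
How you validate these prerequisites is up to you; the following is only a minimal sketch, assuming curl and openssl are available, using the example domain wallarm.companyname.tld from this article, and a hypothetical certificate file name:

    # Outgoing access to the Wallarm distribution host on both ports (expect an HTTP status code)
    curl -sS -o /dev/null -w 'port 80:  %{http_code}\n' http://onprem.wallarm.com
    curl -sS -o /dev/null -w 'port 443: %{http_code}\n' https://onprem.wallarm.com

    # Subject and validity dates of the wildcard certificate you plan to install
    # (wallarm.companyname.tld.crt is a placeholder file name)
    openssl x509 -in wallarm.companyname.tld.crt -noout -subject -dates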

Software dependencies

Begin with a clean operating system installation featuring only essential software. The deployment process will subsequently install any additional packages (including containerd, Kubernetes, etc). Ensure that the following conditions are met:

  • The SSHd service is operational on TCP port 22, with SSH key authentication enabled.

  • The following packages are pre-installed (these are typically included by default in most systems):

    • iproute
    • iptables
    • bash
    • curl
    • ca-certificates

    On Debian-based systems:

    apt-get install iproute2 iptables bash curl ca-certificates

    On RHEL-based systems:

    yum install iproute iptables bash curl ca-certificates
    
  • SELinux is fully disabled; the permissive mode is insufficient due to performance considerations (a disabling sketch follows this list).

  • SWAP memory is disabled. The following command should produce no output:

    swapon -s
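
If either SELinux or swap is still enabled, the exact steps to disable them depend on your distribution; the commands below are only a sketch of one common approach (the SELinux change requires a reboot to take effect, and the sed expressions assume GNU sed):

    # Disable SELinux permanently (RHEL-based systems), then reboot
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

    # Turn swap off now and comment out swap entries in /etc/fstab so it stays off after reboot
    swapoff -a
    sed -i '/\sswap\s/ s/^/#/' /etc/fstab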
    

Procedure

To deploy Wallarm Cloud on-premise on the prepared compute instance:

  1. Contact our sales team to obtain the deployment script for the Cloud services, the corresponding instructions, and the initial credentials.

  2. Prepare a virtual (or physical) machine according to the requirements outlined above.

  3. Upload the installation package to the prepared instance and execute it to deploy the solution components.

  4. Configure a DNS wildcard record to point to the IP address of the prepared instance. For example, if you want to delegate a wildcard DNS record *.wallarm.companyname.tld, ensure that at least my.wallarm.companyname.tld and api.wallarm.companyname.tld are resolved to the IP address of the prepared instance.

  5. Follow the initial configuration guide provided with the installation package.

  6. Once configured, access https://my.wallarm.companyname.tld (or the corresponding domain record you configured) and attempt to log in using the initial credentials provided with the installation package. A quick DNS and HTTPS check is sketched right after this procedure.
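
As a sanity check, you can verify the DNS records and the HTTPS endpoints from any machine that is supposed to reach the Cloud instance. This is only a sketch using the example domain from this article and standard tools (dig, curl); the exact HTTP status codes may differ in your setup:

    # Both records must resolve to the IP address of the Cloud instance
    dig +short my.wallarm.companyname.tld
    dig +short api.wallarm.companyname.tld

    # Both endpoints should respond over HTTPS with the wildcard certificate you installed
    curl -sSI https://my.wallarm.companyname.tld | head -n 1
    curl -sSI https://api.wallarm.companyname.tld | head -n 1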

You can now configure the Wallarm platform via the on-premise UI just as you would in the hosted Cloud version.

All functionalities are outlined on this documentation site. Where articles reference Wallarm Console UI links for the hosted Clouds, use your own domain and the interface of your on-premise Wallarm Cloud deployment instead.

Deploying Wallarm filtering node

The deployment process for the on-premise Wallarm filtering node is similar to standard filtering node deployment procedures. Choose a deployment option that suits your needs and infrastructure and follow our guides, considering the specific requirements for your selected deployment method.

Requirements

To deploy a filtering node, prepare a compute instance meeting these criteria:

  • Sufficient CPU, memory, and storage to support node operation, tailored to your traffic volume. Refer to the general resource allocation recommendations for filtering nodes.

  • Access to the TCP/80 and TCP/443 ports of the on-premise Cloud instance (a connectivity check is sketched after this list).

  • Follow any other requirements specified in the deployment method article you choose.
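
A quick way to confirm this connectivity from the prepared node instance is sketched below, assuming the example Cloud domain used earlier in this article:

    # Run from the filtering node instance; both ports of the on-premise Cloud must be reachable
    curl -sS -o /dev/null -w 'TCP/80:  %{http_code}\n' http://api.wallarm.companyname.tld
    curl -sS -o /dev/null -w 'TCP/443: %{http_code}\n' https://api.wallarm.companyname.tld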

Procedure

To deploy a filtering node on-premise:

  1. Select a deployment option from the available choices and adhere to the provided instructions. All options, including in-line and out-of-band (OOB) configurations, support on-premise deployment.

    During the node setup, in the parameters that define the Wallarm Cloud host, specify api.wallarm.companyname.tld, where wallarm.companyname.tld is the domain of the Wallarm Cloud instance you created earlier (see the sketch after this procedure).

  2. Ensure the domain of the running instance resolves to its IP address. For instance, if the domain is configured as wallarm.node.com, this domain should point to the instance's IP.
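
For illustration, here is a minimal sketch of what this looks like with the Docker (NGINX-based) deployment option. The environment variable names follow that option's guide, while the node token and the upstream application address are placeholders you must replace; consult the guide for your chosen option and node version for the exact parameters:

    # Point the node at the on-premise Cloud API instead of the hosted Wallarm Cloud;
    # NGINX_BACKEND is the protected application behind the node (placeholder value)
    docker run -d \
        -e WALLARM_API_HOST='api.wallarm.companyname.tld' \
        -e WALLARM_API_TOKEN='<NODE_TOKEN>' \
        -e NGINX_BACKEND='example-backend.local' \
        -p 80:80 \
        wallarm/node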

Testing the deployment

To test the deployment:

  1. Run the test Path Traversal attack targeting the filtering node instance:

    curl http://localhost/etc/passwd
    
  2. Open the deployed Wallarm Console UI and check that the corresponding attack appeared in the attack list.

Limitations

The following functionalities are currently not supported by the on-premise Wallarm solution: