Routing Traffic to the Wallarm Node on AWS

After deploying the Wallarm Node on AWS, you need to configure your AWS infrastructure so that all traffic passes through the Node before reaching your application. This guide covers the most common AWS deployment scenarios: Application Load Balancer (ALB), Network Load Balancer (NLB), Amazon CloudFront, and Amazon API Gateway.

This guide applies to inline deployments where the Wallarm Node acts as a reverse proxy.

If you deployed the Native Node AMI for use with connectors or TCP traffic mirror, traffic routing is configured differently; see the respective connector guide or TCP traffic mirror deployment guide.

Overview

Regardless of which AWS service fronts your traffic, the pattern is always the same:

  1. Route all traffic through the Wallarm Node: the Node must sit in the request path between the internet-facing endpoint and your application.

  2. Restrict direct access to the application: use security groups, network ACLs, or service-level policies to ensure the application only accepts traffic from the Wallarm Node.

  3. Validate: confirm that requests bypassing the Node are rejected.

The sections below walk through each of these scenarios in turn. If your infrastructure uses a different entry point (e.g., AWS Global Accelerator, AWS App Mesh, or a third-party reverse proxy on EC2), the same three-step pattern applies: route traffic through the Node, lock down direct access using security groups and network ACLs, and verify.

Application Load Balancer (ALB)

If your application is already behind an ALB, you need to insert the Wallarm Node between the ALB and the application. The ALB forwards traffic to the Wallarm Node, which then proxies it to the application.

Internet → ALB (existing) → Wallarm Node (new target group) → Application
  1. Create a target group for the Wallarm Node (e.g., wallarm-tg) containing the Wallarm Node EC2 instance(s) or Auto Scaling Group.

  2. Update your ALB listener rules to forward traffic to the wallarm-tg target group instead of the application target group.

  3. Configure the Wallarm Node to proxy traffic to the application. Point it to your application's internal DNS, IP, or load balancer endpoint:

    • NGINX Node AMI: use the --proxy-pass flag during setup
    • Docker image on ECS: set the NGINX_BACKEND environment variable
  4. Update the application's security group to only allow inbound traffic from the Wallarm Node. The inbound rules should look like this:

    Protocol Port Source
    TCP 80/443 sg-wallarm-node

    Remove any rules allowing direct internet access.

    Ensure there are no inbound rules allowing 0.0.0.0/0 on the application security group. Any such rule would allow traffic to bypass the Wallarm Node. For more details on controlling traffic at each network layer, see SEC05-BP02 – Control traffic flow within your network layers in the AWS Well-Architected Framework.
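The ALB steps above can be sketched with the AWS CLI. All names, IDs, and ARNs below (wallarm-tg, the VPC/instance/listener identifiers, sg-app, sg-wallarm-node) are hypothetical placeholders; adjust protocol and port to match your listeners:

```shell
# 1. Create a target group for the Wallarm Node (placeholder VPC ID)
aws elbv2 create-target-group \
  --name wallarm-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0

# Register the Wallarm Node instance(s) with it (placeholder ARN and instance ID)
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/wallarm-tg/abc123 \
  --targets Id=i-0123456789abcdef0

# 2. Point the existing ALB listener at the Wallarm target group
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/def456/ghi789 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/wallarm-tg/abc123

# 4. Allow the application to accept traffic only from the Wallarm Node's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-app \
  --protocol tcp --port 443 \
  --source-group sg-wallarm-node
```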

Network Load Balancer (NLB)

If your application is behind an NLB, the approach is similar to ALB: insert the Wallarm Node between the NLB and the application. NLBs operate at Layer 4 and forward TCP connections, so your security group rules must account for client IP preservation behavior.

Internet → NLB (existing) → Wallarm Node (new target group) → Application
  1. Create a target group for the Wallarm Node containing the Node instance(s) or Auto Scaling Group. Register it with your existing NLB.

  2. Update NLB listeners to forward traffic to the Wallarm Node target group instead of the application target group.

  3. Configure the Wallarm Node to proxy traffic to the application. Point it to your application's internal DNS, IP, or load balancer endpoint:

    • NGINX Node AMI: use the --proxy-pass flag during setup
    • Docker image on ECS: set the NGINX_BACKEND environment variable
  4. Update the application's security group to only allow inbound traffic from the Wallarm Node. The inbound rules should look like this:

    Protocol Port Source
    TCP 80/443 sg-wallarm-node

    Client IP preservation

    If your NLB has client IP preservation enabled, traffic arrives at the Wallarm Node with the original client IP as the source. In this case, the application security group must allow traffic from the Wallarm Node's private IP range (e.g., the VPC CIDR) rather than a specific security group. For tighter control, place the Wallarm Nodes in a dedicated subnet and restrict the application to that subnet CIDR.

  5. Use network ACLs as an additional layer. Restrict the application subnet's NACL to only accept traffic from the Wallarm Node subnet. The inbound rules should look like this:

    Rule Protocol Port Source Action
    100 TCP 80/443 10.0.1.0/24 (wallarm) ALLOW
    * All All 0.0.0.0/0 DENY
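The NACL rule above can be sketched with the AWS CLI; the NACL ID is a placeholder and 10.0.1.0/24 stands in for your Wallarm subnet:

```shell
# Allow inbound HTTPS from the Wallarm subnet only (rule 100, placeholder NACL ID)
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=443,To=443 \
  --cidr-block 10.0.1.0/24 \
  --rule-action allow
# The implicit "*" rule at the end of every NACL already denies all other traffic,
# so no explicit DENY entry is needed.
```

Repeat the entry for port 80 with a second rule number (e.g., 110) if the application also listens on HTTP.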

Amazon CloudFront

If you use CloudFront as a CDN, update the origin to point to the Wallarm Node instead of directly to the application. The Node then proxies traffic to the application.

Internet → CloudFront (existing) → Wallarm Node (new origin) → Application
  1. In your CloudFront distribution, change the origin domain to point to the Wallarm Node instead of the application. Use the same type of endpoint you already have as an origin:

    • If your current origin is an ALB or NLB: create a Wallarm Node target group in that load balancer (as described in the ALB or NLB sections) and keep the load balancer as the CloudFront origin.
    • If your current origin is a single EC2 instance: replace it with the Wallarm Node's EC2 public DNS as the new origin. For production workloads, consider placing the Node behind an ALB with an Auto Scaling Group for high availability.
  2. Configure CloudFront to pass a secret header to the origin (e.g., X-CloudFront-Secret: <value>). This serves as a shared secret to verify that traffic comes from CloudFront.

    In the CloudFront distribution, under Origins, add a custom header:

    Header name Value
    X-CloudFront-Secret <value>
  3. Configure the Wallarm Node (via NGINX configuration) to reject requests that do not include the correct header:

    # Add inside the server block that receives CloudFront traffic
    # (nginx "if" is only valid in server/location context, not at http level,
    # so this cannot live directly in an http-level conf.d include)
    if ($http_x_cloudfront_secret != "<value>") {
        return 403;
    }
    
    
  4. Restrict the Wallarm Node's security group to CloudFront IPs only. Use an AWS-managed prefix list for CloudFront IP ranges. The inbound rules should look like this:

    Protocol Port Source
    TCP 80/443 com.amazonaws.global.cloudfront.origin-facing
  5. Lock down the application to only accept traffic from the Wallarm Node, as described in the ALB section above.
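Step 4 can be sketched with the AWS CLI. The security group and prefix list IDs below are hypothetical placeholders; look up the real prefix list ID first, since security group rules reference the ID (pl-...), not the name:

```shell
# Find the managed prefix list ID for CloudFront origin-facing IP ranges
aws ec2 describe-managed-prefix-lists \
  --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing

# Allow inbound HTTPS to the Wallarm Node from that prefix list only
# (placeholder security group and prefix list IDs)
aws ec2 authorize-security-group-ingress \
  --group-id sg-wallarm-node \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=pl-0123456789abcdef0}]'
```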

Amazon API Gateway

If your application is behind Amazon API Gateway (REST or HTTP APIs), route the API Gateway integration through a VPC Link to the Wallarm Node.

Internet → API Gateway (existing) → VPC Link → Wallarm Node (NLB/ALB) → Application
  1. Update the API Gateway integration to route through the Wallarm Node. The approach depends on your current setup:

    • If your API Gateway already uses a VPC Link: update the target NLB/ALB target group to point to the Wallarm Node (as described in the ALB or NLB sections).
    • If your API Gateway uses a public endpoint integration: create a VPC Link pointing to an NLB or ALB that fronts the Wallarm Node, then update API Gateway routes to use this VPC Link integration. For production workloads, place the Node behind a load balancer with an Auto Scaling Group for high availability.
  2. Configure the Wallarm Node to proxy traffic to the application. Point it to your application's internal DNS, IP, or load balancer endpoint:

    • NGINX Node AMI: use the --proxy-pass flag during setup
    • Docker image on ECS: set the NGINX_BACKEND environment variable
  3. Update the application's security group to only allow inbound traffic from the Wallarm Node. The inbound rules should look like this:

    Protocol Port Source
    TCP 80/443 sg-wallarm-node
  4. For REST APIs, consider adding a resource policy that restricts invocations to known source VPCs or IP ranges.

  5. For an additional layer of authentication between API Gateway and the Wallarm Node, configure mutual TLS on the API Gateway.
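For an HTTP API, step 1 can be sketched with the AWS CLI (apigatewayv2). All IDs, subnets, and ARNs below are hypothetical placeholders:

```shell
# Create a VPC Link into the subnets hosting the load balancer that fronts the Wallarm Node
aws apigatewayv2 create-vpc-link \
  --name wallarm-vpc-link \
  --subnet-ids subnet-0aaa111122223333a subnet-0bbb444455556666b \
  --security-group-ids sg-0123456789abcdef0

# Point the existing integration at that load balancer's listener via the VPC Link
# (placeholder API, integration, VPC Link, and listener identifiers)
aws apigatewayv2 update-integration \
  --api-id a1b2c3d4e5 \
  --integration-id abcdef \
  --connection-type VPC_LINK \
  --connection-id vpclink123 \
  --integration-uri arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/net/wallarm-nlb/abc/def
```

REST APIs use the separate `aws apigateway create-vpc-link` command, which targets an NLB ARN directly.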

Verification checklist

After configuring traffic routing, verify that the Node cannot be bypassed:

  • Direct access test: Attempt to access the application directly (by its private/public IP or DNS name, bypassing the load balancer/CDN). Connection should be refused or time out.
  • Bypass header test (CloudFront): Send a request to the Wallarm Node without the shared secret header. The request should be rejected with a 403.
  • Attack detection test: Send a test attack through the full path and verify it appears in Wallarm Console → Attacks:

    curl -H "Host: <your-domain>" https://<entry-point>/etc/passwd
    
  • Security group audit: Review all security groups associated with the application instances and confirm there are no rules allowing direct internet access (0.0.0.0/0).

  • VPC Flow Logs: Enable VPC Flow Logs on the application subnet to detect any traffic that does not originate from the Wallarm Node.
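The first three checks can be collected into a small script. The hostnames in angle brackets are placeholders for your endpoints; run it from outside the VPC:

```shell
#!/bin/sh
# Verification sketch: substitute your real endpoints for the placeholders.

# 1. Direct access should be refused or time out
curl -s --max-time 10 "https://<application-direct-dns>/" \
  && echo "FAIL: application reachable directly" \
  || echo "OK: direct access blocked"

# 2. (CloudFront only) a request without the shared secret header should return 403
code=$(curl -s -o /dev/null -w '%{http_code}' "https://<wallarm-node-dns>/")
[ "$code" = "403" ] && echo "OK: secret header enforced" || echo "FAIL: got $code"

# 3. Send a test attack through the full path, then check Wallarm Console -> Attacks
curl -s -H "Host: <your-domain>" "https://<entry-point>/etc/passwd" > /dev/null
```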

Additional recommendations

  • Use private subnets for application instances. Placing your application in a private subnet with no internet gateway route ensures that the only path to the application is through the Wallarm Node. For guidance on network segmentation, see Network design in the AWS Security Reference Architecture.

  • Enable AWS Config rules to continuously monitor security group compliance. The managed rule restricted-common-ports can alert you if an application security group is modified to allow direct internet access.

  • Use AWS CloudTrail to audit security group changes and detect unauthorized modifications. For details, see SEC04-BP01 – Configure service and application logging in the AWS Well-Architected Framework.
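As a quick audit, CloudTrail events for security group changes can be pulled with the CLI; the event name shown is one of several security-group mutation events you may want to check:

```shell
# List AuthorizeSecurityGroupIngress events from the last 24 hours
# (GNU date syntax; on macOS/BSD use: date -u -v-24H +%Y-%m-%dT%H:%M:%SZ)
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=AuthorizeSecurityGroupIngress \
  --start-time "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)"
```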