
Wallarm Connector for Apigee API Management

This guide describes how to secure your APIs managed by Apigee API Management (APIM) using the Wallarm connector.

Overview

To use Wallarm as a connector for Apigee APIM, you need to deploy the Wallarm Node externally and apply the Wallarm-provided shared flows in Apigee to route traffic to the Wallarm Node for analysis.

The Wallarm connector for Apigee APIM supports both synchronous and asynchronous traffic analysis:

In synchronous (inline) mode, the policy intercepts requests and sends them to the Wallarm Node for inspection. Depending on the Node's filtration mode, malicious requests may be blocked with status code 403, providing real-time threat mitigation.

Apigee APIM with Wallarm policy, synchronous traffic analysis

In asynchronous (out-of-band) mode, traffic is mirrored to the Node without affecting the original flow. Malicious requests are logged in Wallarm Console but not blocked.

Apigee APIM with Wallarm policy, asynchronous traffic analysis

Use cases

This solution is recommended for securing APIs managed by the Apigee APIM service.

Limitations

  • Custom blocking code configuration is not supported.

    All blocked malicious requests receive status code 403. You can customize the block page content, but not the response code itself.

  • Rate limiting by Wallarm rules is not supported.

    Rate limiting cannot be enforced on the Wallarm side for this connector. If rate limiting is required, use Apigee policies.

  • Multitenancy is not supported.

    All protected APIs are managed under a single Wallarm account; separating protection across multiple accounts for different infrastructures or environments is not yet supported.

Requirements

Before deployment, ensure the following prerequisites are met:

Deployment

This guide shows deployment primarily via the Google Cloud Console and the Apigee REST API. For automation, use the Apigee Terraform provider, or refer to the full Apigee API reference.

1. Deploy a Wallarm Node

The Wallarm Node is a core component of the Wallarm platform that you need to deploy. It inspects incoming traffic, detects malicious activities, and can be configured to mitigate threats.

You can deploy it either as a Wallarm-hosted node or in your own infrastructure, depending on the level of control you require.

To deploy a Wallarm-hosted node for the connector, follow the instructions.

Choose an artifact for a self-hosted node deployment and follow the attached instructions:

Required Node version

Please note that the Apigee APIM connector is supported only by the Wallarm Native Node version 0.18.0 or later.

2. Obtain the connector code bundle

Contact sales@wallarm.com to get the Apigee connector code bundle.

The bundle contains:

  • sharedflows/ - Wallarm shared flow bundles to deploy in Apigee:

    • Wallarm-Inline-Request-Flow.zip and Wallarm-Inline-Response-Flow.zip for synchronous analysis
    • Wallarm-OOB-Request-Flow.zip and Wallarm-OOB-Response-Flow.zip for asynchronous analysis
  • proxies/ - sample, ready-to-use API proxies you can modify and reuse:

    • wallarm-single-proxy-sync - sample proxy with the Wallarm connector policies preconfigured for synchronous analysis
    • wallarm-single-proxy-async - sample proxy with the Wallarm connector policies preconfigured for asynchronous analysis

    While this guide walks you through deployment from scratch, these samples can serve as a shortcut or reference.

    To run the sample proxies, you must also create the WallarmConfig KVM in the target environment and deploy the corresponding shared flows.

3. Create a key value map in Apigee

Define the WallarmConfig key value map (KVM) to store Wallarm connector configuration. Using a KVM allows you to change parameters without modifying policy code.

  1. Create the WallarmConfig KVM:

    Use the following Apigee API call to create a KVM at the environment level:

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      -d '{
        "name": "WallarmConfig",
        "encrypted": true
      }' \
      "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/keyvaluemaps"
    

    Here, <APIGEE_ORG_ID> is your Google Cloud project ID and <APIGEE_ENV> is the target Apigee environment.

    Alternatively, you can create a KVM at the API proxy or organization level.
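
    As a sketch, creating the same KVM at the organization level uses the organization-scoped endpoint instead of the environment-scoped one (remember that entries must later be added at the same level):

```shell
# Create the WallarmConfig KVM at the organization level
# instead of the environment level.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "WallarmConfig",
    "encrypted": true
  }' \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/keyvaluemaps"
```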

    In Google Cloud Console → Management → Environments → your environment → Key value maps, click Create and define the WallarmConfig KVM.

    WallarmConfig KVM for Apigee

    When using the Console, KVMs can only be created at the environment level.

  2. Add entries to the WallarmConfig KVM:

    The KVM supports the following entries:

    • node_url (required) - the full domain name of your Wallarm Node, including the protocol (e.g., https://wallarm-node-instance.com).
    • ignore_errors (optional) - defines error-handling behavior in synchronous traffic analysis when the Node is unavailable (e.g., on timeouts):

      • true (default) - requests are forwarded to the APIs when the Node is unavailable
      • false - requests are blocked with status code 403 when the Node is unavailable

    Use the following Apigee API call to add entries to the environment-level KVM:

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      -d '{
        "name": "node_url",
        "value": "<WALLARM_NODE_URL>"
      }' \
      "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/keyvaluemaps/WallarmConfig/entries"
    

    Alternatively, you can add entries to a KVM at the API proxy or organization level.

    Entries must be created at the same level where the KVM itself was originally created.
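
    To confirm that the entries landed at the expected level, you can list them; a sketch for the environment-level KVM:

```shell
# List all entries of the environment-level WallarmConfig KVM
curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/keyvaluemaps/WallarmConfig/entries"
```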

    Add entries by creating a KeyValueMapOperations policy inside your API proxy:

    1. In Google Cloud Console → Proxy development → API proxies → select your API proxy → Policies, Add policy with the following XML:

      <KeyValueMapOperations async="false" continueOnError="false" enabled="true" name="SetKVMEntry" mapIdentifier="WallarmConfig">
          <Put>
            <Key>
              <Parameter>node_url</Parameter>
            </Key>
            <Value>WALLARM_NODE_URL</Value>
          </Put>
          <Scope>environment</Scope>
      </KeyValueMapOperations>
      

      Entries in WallarmConfig KVM for Apigee

    2. Attach the policy to Request PreFlow and Response PostFlow of the proxy endpoint:

      KeyValueMapOperations in PreFlow/PostFlow

      Relevant XML snippet for the proxy configuration
      ...
        <PreFlow name="PreFlow">
          <Request>
            <Step>
              <Name>SetKVMEntry</Name>
            </Step>
          </Request>
        </PreFlow>
      
        <PostFlow name="PostFlow">
          <Response>
            <Step>
              <Name>SetKVMEntry</Name>
            </Step>
          </Response>
        </PostFlow>
      ...
      
    3. Save and Deploy a new API proxy revision.

4. Deploy Wallarm shared flows

Each traffic analysis mode (synchronous or asynchronous) requires two shared flows: one for requests and one for responses.

  1. In Google Cloud Console → Proxy development → Shared flows, Upload bundle from Wallarm-Inline-Request-Flow.zip for synchronous mode or from Wallarm-OOB-Request-Flow.zip for asynchronous mode.

    Upload Wallarm shared flow bundle in the Google Console UI

  2. Deploy the uploaded flow. Verify that it shows a green "Ok" status for each target environment.

    Deploy Wallarm shared flow bundle in the Google Console UI

  3. In the same section, upload the corresponding response flow archive (Wallarm-Inline-Response-Flow.zip or Wallarm-OOB-Response-Flow.zip).

  4. Deploy the response shared flow.

Alternatively, you can automate this step using the Apigee API.
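
As a sketch, importing and deploying a shared flow bundle via the Apigee API could look like this (the flow name and the revision number 1 are assumptions; use the name you want the flow deployed under and the revision returned by the import call):

```shell
# Import the shared flow bundle (the API response includes
# the revision number of the newly created revision)
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -F "file=@Wallarm-Inline-Request-Flow.zip" \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/sharedflows?name=Wallarm-Inline-Request-Flow&action=import"

# Deploy the imported revision to the target environment
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/sharedflows/Wallarm-Inline-Request-Flow/revisions/1/deployments"
```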

5. Apply shared flows to your APIs

You can apply the Wallarm shared flows globally to all API proxies in an environment, or attach them only to specific API proxies.

To enable the connector for all proxies in an environment, attach the Wallarm flows as flow hooks:

  1. Proceed to Google Cloud Console → Management → Environments → select your environment → Flow hooks.
  2. Assign the deployed Wallarm flows:

    • Pre-proxy → Wallarm-Sync-Request-Flow for synchronous mode or Wallarm-Async-Request-Flow for asynchronous mode.

      Requests are forwarded (synchronous) or mirrored (asynchronous) to the Wallarm Node for inspection before reaching the API proxy.

    • Post-proxy → Wallarm-Sync-Response-Flow for synchronous mode or Wallarm-Async-Response-Flow for asynchronous mode.

      Responses from the target service are mirrored to the Wallarm Node for inspection.

Flow hooks for environment in Apigee

If you already use pre-proxy or post-proxy flow hooks for other policies, you can include the Wallarm flows by referencing them through a FlowCallout policy.

Alternatively, you can automate this step using the Apigee API.
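
A sketch of attaching the flows as flow hooks via the API, assuming the flow names as deployed in your environment (here, the synchronous pair):

```shell
# Attach the Wallarm request flow to the pre-proxy flow hook
curl -X PUT \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"sharedFlow": "Wallarm-Sync-Request-Flow"}' \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/flowhooks/PreProxyFlowHook"

# Attach the Wallarm response flow to the post-proxy flow hook
curl -X PUT \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"sharedFlow": "Wallarm-Sync-Response-Flow"}' \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/flowhooks/PostProxyFlowHook"
```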

You can attach the Wallarm shared flows only to specific API proxies using the Flow Callout policies:

  1. Proceed to Google Cloud Console → Proxy development → API proxies → select the API proxy to protect → Policies → Add policy.
  2. Create the request policy:

    • Policy type: Flow Callout
    • Name and Display name: FC-Wallarm-Node-Request
    • Sharedflow: Wallarm-Sync-Request-Flow for synchronous mode or Wallarm-Async-Request-Flow for asynchronous mode
  3. Create the response policy:

    • Policy type: Flow Callout
    • Name and Display name: FC-Wallarm-Node-Response
    • Sharedflow: Wallarm-Sync-Response-Flow for synchronous mode or Wallarm-Async-Response-Flow for asynchronous mode

    Flow callout for requests in Apigee

    The FC-Wallarm-Node-Request.xml and FC-Wallarm-Node-Response.xml policy files are also included in the Wallarm Apigee connector bundle.

  4. Attach the policies to the proxy flows:

    • Request → PreFlow → select FC-Wallarm-Node-Request
    • Response → PostFlow → select FC-Wallarm-Node-Response

    Flow callout for requests in Apigee, attach as preflow

    Relevant XML snippet for PreFlow and PostFlow
    ...
      <PreFlow name="PreFlow">
        <Request>
          <Step>
            <Name>FC-Wallarm-Node-Request</Name>
          </Step>
        </Request>
      </PreFlow>
      <PostFlow name="PostFlow">
        <Response>
          <Step>
            <Name>FC-Wallarm-Node-Response</Name>
          </Step>
        </Response>
      </PostFlow>
    ...
    
  5. Add FC-Wallarm-Node-Response with <AlwaysEnforce>true</AlwaysEnforce> to the default fault rule of your proxy.

    When a proxy returns 4xx/5xx, Apigee skips the PostFlow by default. Adding the policy to the fault rule ensures the response is still sent to the Wallarm Node.

    ...
      <FaultRules/>
      <DefaultFaultRule name="DefaultFaultRule">
        <AlwaysEnforce>true</AlwaysEnforce>
        <Step>
          <Name>FC-Wallarm-Node-Response</Name>
        </Step>
      </DefaultFaultRule>
    ...
    
  6. Save and Deploy a new API proxy revision.
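
Deploying a new proxy revision can also be done via the Apigee API; a sketch, where the proxy name and revision number are placeholders for your own values:

```shell
# Deploy a revision of the protected proxy, overriding
# the currently deployed revision in the environment
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/apis/<API_PROXY_NAME>/revisions/<REVISION>/deployments?override=true"
```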

Testing

Test the deployed connector with both legitimate and malicious traffic.

Legitimate traffic

  1. In Google Cloud Console → Proxy development → API proxies → select your API proxy → Debug → Start debug session.

  2. Send a legitimate request to the provided URL.

  3. Review the transactions in the debug session and confirm that the Wallarm flows are triggered:

    Apigee APIM debug legitimate request

    If flows are applied at the environment level, the debug view may differ slightly, but Wallarm-Sync-Request-Flow and Wallarm-Sync-Response-Flow (or their Async counterparts) must still appear.

  4. In Wallarm Console → API Sessions, verify that the legitimate request is displayed:

    Wallarm Console: legitimate request in API Sessions

Malicious traffic

  1. Send a request with a test SQLi attack by adding the query parameter x='+OR+1=1:

    curl "https://<API_URL>/?x='+OR+1=1"
    
    • Synchronous mode with blocking enabled: the request is blocked with 403.
    • Synchronous mode (monitoring): request reaches the API and is logged in Wallarm Console.
    • Asynchronous mode: request reaches the API and is logged in Wallarm Console.
  2. In Wallarm Console → API Sessions, verify that the malicious request is logged:

    Wallarm Console: malicious request in API Sessions

  3. In Wallarm Console → Attacks, confirm that the attack is listed:

    SQLi attacks in the interface (Apigee APIM connector for Wallarm)

Block page customization

If the Node is deployed in synchronous mode with blocking enabled, you can customize the block page returned for blocked malicious requests:

  1. Go to Google Cloud Console → Shared Flows → Wallarm-Sync-Request-Flow → Develop.

  2. Edit the RF-Wallarm-403 RaiseFault policy. This policy defines the error response from Wallarm.

    Update the content inside the <FaultResponse><Set><Payload> tag. Make sure the payload is wrapped in CDATA.

  3. Save and deploy a new flow revision.

Customizing Wallarm block page for Apigee APIM connector

Upgrading the policies

To upgrade the deployed Wallarm policies to a newer version:

  1. Download the updated Apigee connector code bundle from Wallarm.

  2. Import the new versions of the shared flows (Wallarm-Inline-* or Wallarm-OOB-*) into Apigee, as described in Step 4.

  3. Deploy the updated shared flows to the required environments.

  4. Update your API proxies or environment flow hooks to reference the new flow revisions, as described in Step 5.

  5. Test both legitimate and malicious traffic to verify the upgrade.

Policy upgrades may require a Wallarm Node upgrade, especially for major version updates. See the Native Node changelog for the self-hosted Node release notes. Regular node updates are recommended to avoid deprecation and simplify future upgrades.

Uninstalling the policies

To remove the Wallarm connector from Apigee API Management:

  1. In Google Cloud Console → Proxy development → API proxies, open the proxy where the connector is applied.

  2. Remove the FC-Wallarm-Node-Request and FC-Wallarm-Node-Response policies from PreFlow, PostFlow, and the Default Fault Rule.

  3. If deployed at the environment level, remove the Wallarm shared flows from Flow hooks.

  4. Delete the Wallarm shared flows (Wallarm-Sync-* or Wallarm-Async-*) from Shared flows.

  5. Remove the WallarmConfig KVM or its entries if no longer needed.

  6. Save and deploy a new revision of the proxy (or updated environment configuration).
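
If the WallarmConfig KVM was created at the environment level, its removal in step 5 can be sketched via the Apigee API as:

```shell
# Delete the environment-level WallarmConfig KVM
# together with all of its entries
curl -X DELETE \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://apigee.googleapis.com/v1/organizations/<APIGEE_ORG_ID>/environments/<APIGEE_ENV>/keyvaluemaps/WallarmConfig"
```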