<2023-05-12 Fri> tech

Rate limiting in Nginx

Very useful article on configuring rate limiting in Nginx. https://www.nginx.com/blog/rate-limiting-nginx/

There are a number of knobs in there worth being aware of, including the rate and zone size of limit_req_zone and the burst and nodelay parameters of limit_req.

One thing that the article does not cover is the response code "429 Too Many Requests". Apparently, this is the standard status code for a server that does rate limiting to return, informing clients to retry later.

Apparently you can use the "limit_req_status" and "limit_conn_status" directives to control what status code is sent back by Nginx.

One final point to note: a rate-limiting server can send a "Retry-After" header in the response. This can be a date or a delay in seconds. It is used with 429 responses, but apparently can also be sent with 503. See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After

Of course, what you do from the client side is up to you.
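For instance, here is a minimal client-side sketch in Python (standard library only; the function names and the retry budget are my own invention, not from any library) that honors Retry-After whether it arrives as a delay in seconds or as an HTTP-date:

```python
import time
import urllib.error
import urllib.request
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime


def retry_delay_seconds(retry_after, now=None):
    """Parse a Retry-After value: either an integer number of
    seconds or an HTTP-date, per the MDN page above."""
    if retry_after.strip().isdigit():
        return int(retry_after)
    when = parsedate_to_datetime(retry_after)
    now = now or datetime.now(timezone.utc)
    return max(0, int((when - now).total_seconds()))


def get_with_retry(url, attempts=3):
    """Fetch url, sleeping and retrying when the server answers 429."""
    for _ in range(attempts):
        try:
            return urllib.request.urlopen(url)
        except urllib.error.HTTPError as e:
            if e.code != 429:
                raise
            # fall back to a 1-second wait if no Retry-After was sent
            time.sleep(retry_delay_seconds(e.headers.get("Retry-After", "1")))
    raise RuntimeError("still rate limited after %d attempts" % attempts)
```

A fixed attempt budget is the simplest policy; exponential backoff on top of the server's hint is another common choice.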

Example Nginx configuration to rate limit, returning 429 responses.

events {
}

http {
    # 40 requests/second per client IP; state kept in a 10 MB shared zone
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=40r/s;
    # return 429 instead of the default 503 when a limit is exceeded
    limit_req_status 429;
    limit_conn_status 429;

    server {
        listen 8000;

        location / {
            # allow bursts of up to 40 queued requests above the rate
            limit_req zone=mylimit burst=40;

            proxy_pass http://localhost:9000;
        }
    }
}
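To get a feel for how a configuration like this classifies requests, here is a toy Python model of the leaky-bucket behaviour behind limit_req (the class name and structure are mine, not Nginx's code; it only classifies requests, and ignores the delaying of queued requests that happens without nodelay):

```python
class LeakyBucket:
    """Toy sketch of limit_req: rate requests/second with a burst
    allowance of queued excess requests. Illustrative only."""

    def __init__(self, rate, burst):
        self.rate = rate      # drain rate, requests per second
        self.burst = burst    # how much excess may queue up
        self.excess = 0.0     # current queue depth
        self.last = None      # timestamp of the previous request

    def allow(self, now):
        """Return the status code this request would get at time `now`."""
        if self.last is None:
            self.last = now
            return 200
        # drain the queue at `rate` requests per second since last time
        self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess + 1 > self.burst:
            return 429        # over rate + burst: rejected
        self.excess += 1
        return 200
```

With rate=40 and burst=40 as in the config above, an instantaneous flood gets one request through at the rate plus the 40-request burst; everything beyond that is rejected until the queue drains.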

When you look at something like Envoy in the context of service meshes, things are a bit more complicated. You can rate limit on the receiver side (https://istio.io/latest/docs/tasks/policy-enforcement/rate-limit/) or on the sender side (https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#config-route-v3-retrypolicy).