Rate Limiting API Reference

TL;DR: Let’s Encrypt enforces per-endpoint rate limits via token bucket algorithms with burst capacity, returning 503/429 responses with Retry-After headers. For example, /acme/new-order allows 300 req/sec with a burst of 200, while /acme/new-nonce permits 20 req/sec with a burst of 10. This reference documents endpoint-specific limits, response formats, and the retry logic your automation needs to respect these limits.
Overview
Let’s Encrypt implements rate limiting at the API endpoint level to protect infrastructure while enabling high-volume certificate automation. This API reference documents endpoint-specific limits, response formats, and integration patterns for production deployments. Operations teams building certificate automation must understand these limits to implement robust retry logic and avoid service disruptions.
The Boulder CA implementation uses token bucket algorithms that provide burst capacity for legitimate traffic patterns. Each endpoint has specific rate limits enforced at the load balancer level, with some limits based on IP address and others on ACME account. Understanding these distinctions helps teams design distributed architectures that maximize throughput while respecting rate constraints.
Production implementations require monitoring rate limit responses, implementing exponential backoff with jitter, and respecting Retry-After headers. Proxy and load balancer configurations affect rate limiting behavior, as all requests from a proxy count against the proxy’s IP address. Enterprise deployments benefit from multi-IP strategies and request distribution patterns.
Let’s Encrypt ACME API Rate Limits
Overall Request Limits by Endpoint
Let’s Encrypt implements per-endpoint request limits enforced at the load balancer level. These limits vary significantly by endpoint:
| Endpoint | Requests per IP (per second) | Burst Capacity |
|---|---|---|
| /acme/new-nonce | 20 | 10 |
| /acme/new-account | 5 | 15 |
| /acme/new-order | 300 | 200 |
| /acme/revoke-cert | 10 | 100 |
| /acme/renewal-info | 1000 | 100 |
| /acme/* (other endpoints) | 250 | 125 |
| /directory | 40 | 40 |
Response when limits exceeded: 503 Service Unavailable with a Retry-After header.
Token Bucket Algorithm
Let’s Encrypt uses a token bucket algorithm for rate limiting. This provides flexibility in how you use your allotted requests:
- Make requests in bursts up to the full limit
- Space out requests to avoid the risk of being limited
- Capacity refills gradually over time
```python
# Example: implementing token bucket awareness in Python
import time
from threading import Lock


class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate  # tokens per second
        self.tokens = float(capacity)
        self.last_refill = time.time()
        self.lock = Lock()

    def acquire(self) -> bool:
        with self.lock:
            now = time.time()
            # Refill tokens based on elapsed time
            elapsed = now - self.last_refill
            self.tokens = min(self.capacity,
                              self.tokens + elapsed * self.refill_rate)
            self.last_refill = now

            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False


# For the new-order endpoint: 300 requests/second, burst of 200
new_order_bucket = TokenBucket(capacity=200, refill_rate=300)
```

Proxy and Load Balancer Considerations
When using proxies or load balancers, all proxy requests count against the proxy’s IP address rather than the original client IP. This can lead to unexpected rate limiting behavior in enterprise environments.
```yaml
# Example: Kubernetes ingress configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # Preserve original client IP for rate limiting
    nginx.ingress.kubernetes.io/use-forwarded-headers: "true"
    nginx.ingress.kubernetes.io/real-ip-header: "X-Forwarded-For"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```

Rate Limiting Detection and Response
Section titled “Rate Limiting Detection and Response”Retry-After Header
Let’s Encrypt provides a Retry-After header in rate limit error responses, indicating how long your client should wait before retrying. Always parse and respect this header.
```python
import time
from datetime import datetime

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


class RateLimitAwareClient:
    def __init__(self, base_url):
        self.session = requests.Session()
        retry_strategy = Retry(
            total=3,
            backoff_factor=2,
            status_forcelist=[500, 502, 504],  # Note: don't auto-retry 429
            respect_retry_after_header=True,
        )
        adapter = HTTPAdapter(max_retries=retry_strategy)
        self.session.mount("http://", adapter)
        self.session.mount("https://", adapter)
        self.base_url = base_url

    def make_request(self, endpoint, **kwargs):
        url = f"{self.base_url}/{endpoint}"
        response = self.session.get(url, **kwargs)

        # Handle rate limiting with the Retry-After header
        if response.status_code == 429:
            retry_after = response.headers.get('Retry-After')
            if retry_after:
                # Retry-After can be a number of seconds or an HTTP-date
                try:
                    wait_seconds = int(retry_after)
                except ValueError:
                    retry_date = datetime.strptime(
                        retry_after, '%a, %d %b %Y %H:%M:%S %Z')
                    wait_seconds = (retry_date - datetime.utcnow()).total_seconds()
                print(f"Rate limited. Waiting {wait_seconds} seconds...")
                time.sleep(max(0, wait_seconds))
                return self.make_request(endpoint, **kwargs)

        # Handle 503 Service Unavailable (load balancer rate limiting)
        if response.status_code == 503:
            retry_after = response.headers.get('Retry-After', '60')
            print(f"Service unavailable. Waiting {retry_after} seconds...")
            time.sleep(int(retry_after))
            return self.make_request(endpoint, **kwargs)

        return response
```

Rate Limit Error Message Format
All Let’s Encrypt rate limit error messages follow a consistent format:
```
too many new registrations (10) from this IP address in the last 3h0m0s, retry after 1970-01-01 00:18:15 UTC
```

If your request exceeds multiple limits, the error message returned is for the limit that resets furthest in the future.
Certificate Issuance Rate Limits
These limits apply when requesting certificates via the new-order API endpoint:
| Limit | Threshold | Window | Refill Rate |
|---|---|---|---|
| New Orders per Account | 300 | 3 hours | 1 per 36 seconds |
| New Certificates per Registered Domain | 50 | 7 days | 1 per 202 minutes |
| New Certificates per Exact Set of Identifiers | 5 | 7 days | 1 per 34 hours |
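The refill rates in the table follow directly from dividing each window by its threshold, since the token bucket returns one token per interval. A quick sanity check of the arithmetic:

```python
# Refill rate = window / threshold (one token returned per interval)
SECONDS_PER_HOUR = 3600

# New Orders per Account: 300 per 3 hours -> one token every 36 seconds
assert (3 * SECONDS_PER_HOUR) / 300 == 36

# New Certificates per Registered Domain: 50 per 7 days -> ~202 minutes
minutes_per_cert = (7 * 24 * 60) / 50
assert round(minutes_per_cert) == 202  # 201.6 exactly

# New Certificates per Exact Set of Identifiers: 5 per 7 days -> ~34 hours
hours_per_cert = (7 * 24) / 5
assert round(hours_per_cert) == 34  # 33.6 exactly
```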
```yaml
# cert-manager ClusterIssuer with rate limiting awareness
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
```

Important: Rate limits are commonly encountered during cert-manager upgrade testing and other automated certificate management operations. Plan for these scenarios in your deployment pipelines by testing against the staging environment first.
Boulder Rate Limiting Implementation
Section titled “Boulder Rate Limiting Implementation”Understanding Boulder Architecture
The Boulder rate-limit implementation is the underlying system powering Let’s Encrypt’s rate limiting. It uses the token bucket algorithm with endpoint-specific configurations.
```go
// Example: Go client with Boulder-aware rate limiting
package main

import (
	"context"

	"golang.org/x/crypto/acme"
	"golang.org/x/time/rate"
)

type BoulderAwareClient struct {
	client *acme.Client
	// Endpoint-specific rate limiters
	newOrderLimiter *rate.Limiter
	newNonceLimiter *rate.Limiter
}

func NewBoulderAwareClient(directoryURL string) *BoulderAwareClient {
	client := &acme.Client{
		DirectoryURL: directoryURL,
	}

	return &BoulderAwareClient{
		client: client,
		// Match Let's Encrypt's actual limits
		newOrderLimiter: rate.NewLimiter(300, 200), // 300/sec, burst 200
		newNonceLimiter: rate.NewLimiter(20, 10),   // 20/sec, burst 10
	}
}

func (b *BoulderAwareClient) CreateOrder(ctx context.Context, identifiers []acme.AuthzID) (*acme.Order, error) {
	// Wait for the client-side rate limiter before hitting the API
	if err := b.newOrderLimiter.Wait(ctx); err != nil {
		return nil, err
	}
	return b.client.AuthorizeOrder(ctx, identifiers)
}
```

Enterprise Implementation Patterns
Section titled “Enterprise Implementation Patterns”Multi-IP Rate Limiting Strategy
For high-volume certificate operations, implement IP rotation and request distribution:
```yaml
# Docker Compose example for distributed certificate requests
version: '3.8'
services:
  cert-manager-1:
    image: certbot/certbot
    network_mode: "bridge"
    command: >
      sh -c "sleep $$(shuf -i 0-30 -n 1) &&
      certbot certonly --standalone -d app1.example.com
      --server https://acme-v02.api.letsencrypt.org/directory"

  cert-manager-2:
    image: certbot/certbot
    network_mode: "bridge"
    command: >
      sh -c "sleep $$(shuf -i 31-60 -n 1) &&
      certbot certonly --standalone -d app2.example.com
      --server https://acme-v02.api.letsencrypt.org/directory"
```

Monitoring and Alerting
Implement monitoring for rate limit scenarios based on actual Let’s Encrypt limits:
```yaml
# Prometheus alerting rules for rate limiting
groups:
- name: rate_limiting
  rules:
  - alert: ACMENewOrdersApproachingLimit
    expr: |
      increase(acme_client_new_orders_total[3h]) > 250
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Approaching new orders per account limit"
      description: "New orders in last 3 hours: {{ $value }}/300"

  - alert: ACMECertsPerDomainApproachingLimit
    expr: |
      increase(acme_client_certs_issued{domain!=""}[7d]) > 40
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Approaching certificates per domain limit"
      description: "Certificates issued for {{ $labels.domain }} in last 7 days: {{ $value }}/50"

  - alert: ACMERateLimitExceeded
    expr: |
      increase(acme_client_errors_total{error_type="rate_limit"}[5m]) > 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ACME rate limit exceeded"
      description: "Rate limit errors detected in ACME client"
```

ACME Renewal Info (ARI) Integration
Renewals coordinated by ARI are exempt from all rate limits. Implement ARI support for optimal renewal handling:
```python
import base64

import requests
from cryptography import x509


def get_ari_renewal_info(cert_pem: bytes, acme_directory: str) -> dict | None:
    """
    Query the ACME Renewal Info endpoint to determine the optimal renewal time.
    ARI-coordinated renewals are exempt from all rate limits.
    """
    # Parse the certificate
    cert = x509.load_pem_x509_certificate(cert_pem)

    # Build the ARI certificate ID from the Authority Key Identifier and serial
    aki = cert.extensions.get_extension_for_class(x509.AuthorityKeyIdentifier)
    aki_bytes = aki.value.key_identifier
    serial_bytes = cert.serial_number.to_bytes(
        (cert.serial_number.bit_length() + 7) // 8, 'big')

    # Base64url encode both parts, unpadded, joined by a period
    cert_id = base64.urlsafe_b64encode(aki_bytes).rstrip(b'=').decode()
    cert_id += '.' + base64.urlsafe_b64encode(serial_bytes).rstrip(b'=').decode()

    # Query the renewal-info endpoint advertised in the directory
    directory = requests.get(acme_directory).json()
    renewal_info_url = directory.get('renewalInfo')

    if renewal_info_url:
        response = requests.get(f"{renewal_info_url}/{cert_id}")
        if response.status_code == 200:
            return response.json()

    return None
```

Important Notes
- Revoking certificates does NOT reset rate limits - the resources used to issue those certificates have already been consumed
- A single certificate can include up to 100 identifiers (DNS names or IP addresses)
- For performance, use fewer identifiers per certificate when possible
- Use crt.sh or Censys to check certificates issued for your domain via Certificate Transparency logs
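Given the 100-identifier ceiling noted above, automation that manages a large domain inventory needs to split its SAN lists across multiple certificates. A minimal sketch (the limit of 100 comes from the notes above; the chunking helper itself is illustrative):

```python
def chunk_identifiers(identifiers: list[str],
                      max_per_cert: int = 100) -> list[list[str]]:
    """Split a list of DNS names / IP addresses into groups that each
    fit in one certificate (Let's Encrypt allows up to 100 identifiers)."""
    return [identifiers[i:i + max_per_cert]
            for i in range(0, len(identifiers), max_per_cert)]

names = [f"host{i}.example.com" for i in range(250)]
batches = chunk_identifiers(names)
# 250 names -> 3 certificates (100 + 100 + 50 identifiers)
```

Note that each resulting certificate still counts separately against the per-domain and per-identifier-set issuance limits, so fewer, fuller certificates reduce order volume while more, smaller ones reduce the blast radius of a single renewal failure.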
Related Documentation
- Rate Limiting Overview - Core concepts and quick reference
- Rate Limiting Commands - Command-line usage and monitoring
- Rate Limiting Troubleshooting - Error resolution and recovery
- Certificate Lifecycle Management - Automated renewal strategies
- ACME Protocol Standards - RFC 8555 ACME specification
- Certbot Installation - Installing and configuring Certbot