Containerd Rate Limits
Containerd's gRPC API is exposed locally over a Unix domain socket and (optionally) TCP by the containerd daemon. There is no built-in per-RPC rate limiting on the API surface itself; backpressure comes from gRPC concurrency settings, the runtime's resource ceilings, and the kubelet's CRI flow-control. Operators bound throughput through host resources (CPU, IO, fd / connection limits), the gRPC server's max-concurrent-streams setting, and registry-side image-pull rate limits.
Tags: Cloud Native · Container Runtime · gRPC · Kubernetes · Rate Limiting
Limits
gRPC Concurrency Bound — daemon
configured via the `[grpc]` section of the containerd config (`max_recv_message_size` / `max_send_message_size`) and host resource ceilings
Containerd does not impose per-RPC token-bucket limits; concurrency is governed by gRPC server settings and OS-level fd / cgroup limits.
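As a sketch, the gRPC server bounds live in the `[grpc]` table of `/etc/containerd/config.toml`; the values below are containerd's documented defaults (16 MiB message caps), shown for illustration rather than as a recommendation:

```toml
# /etc/containerd/config.toml — gRPC server bounds (defaults shown)
[grpc]
  address = "/run/containerd/containerd.sock"
  max_recv_message_size = 16777216  # 16 MiB cap on inbound gRPC messages
  max_send_message_size = 16777216  # 16 MiB cap on outbound gRPC messages
```

Connection- and stream-level concurrency is then bounded indirectly by host fd limits and cgroup settings rather than by a containerd-specific knob.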
CRI Image-Pull Concurrency — daemon
configured via `plugins."io.containerd.grpc.v1.cri".registry` and `max_concurrent_downloads`
CRI image pulls are flow-controlled; tune `max_concurrent_downloads` and per-registry mirrors.
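A minimal config fragment for the CRI plugin, assuming containerd 1.x config version 2 (the `config_path` value is the conventional location, not mandatory):

```toml
# /etc/containerd/config.toml — CRI image-pull flow control
[plugins."io.containerd.grpc.v1.cri"]
  max_concurrent_downloads = 3  # layers fetched in parallel per image pull (default 3)
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"  # per-registry host configs live here
```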
Registry-Side Limits — external
inherited from each upstream container registry (Docker Hub / GHCR / ECR)
Image pulls inherit the upstream registry's rate limiting (e.g. Docker Hub's anonymous / authenticated pull limits).
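One common mitigation is routing pulls through a mirror so that upstream pull quotas are consumed by a cache you control. A sketch of a per-registry `hosts.toml`, with `mirror.example.internal` as a hypothetical mirror host:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml — pull via a mirror first
server = "https://registry-1.docker.io"

[host."https://mirror.example.internal"]
  capabilities = ["pull", "resolve"]  # fall back to the upstream server if the mirror misses
```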
Policies
Operator-Owned Limits
Containerd does not impose API rate limits; operators tune gRPC server settings, host resource ceilings, and the kubelet's CRI flow-control.
Backoff Strategy
Clients should respect gRPC RESOURCE_EXHAUSTED / UNAVAILABLE with exponential backoff and jitter — the kubelet does this natively for image pulls and runtime calls.
Authentication & Socket Permissions
Access to the containerd socket is governed by file-system permissions and (optionally) mTLS for TCP exposure; there is no per-token quota.
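A sketch of what TCP exposure with mTLS looks like in the `[grpc]` table; the address and certificate paths are placeholders you would substitute for your own:

```toml
# /etc/containerd/config.toml — optional TCP listener with mTLS (paths are examples)
[grpc]
  tcp_address  = "0.0.0.0:8888"
  tcp_tls_ca   = "/etc/containerd/ca.pem"         # CA used to verify client certs
  tcp_tls_cert = "/etc/containerd/server.pem"
  tcp_tls_key  = "/etc/containerd/server-key.pem"
```

With no TCP listener configured, access control reduces to Unix permissions (and `uid` / `gid` in the same `[grpc]` table) on the socket path.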