Rook Rate Limits

Rook does not operate a hosted API and therefore imposes no project-level rate limits. Rook's CRDs are consumed through the Kubernetes API server of the operator's host cluster, so any throttling is whatever the cluster administrator configures on kube-apiserver (API Priority and Fairness flow control, --max-requests-inflight, client-go QPS/burst). The S3-compatible object endpoint that Rook can deploy on top of Ceph (via RGW) is rate-limited by Ceph RGW configuration, not by Rook itself.


Limits

Kubernetes API server (Rook CRDs)
  Scope: cluster
  Unit: requests_per_second
  Limit: cluster-administrator configured (kube-apiserver flow control)
  Notes: Rook CRDs (CephCluster, CephBlockPool, CephObjectStore, etc.) are served by the host cluster's kube-apiserver. Throughput follows whatever API Priority and Fairness configuration the cluster operator has set.

Ceph RGW (S3 endpoint)
  Scope: ceph_user
  Unit: requests_per_second
  Limit: configured per ObjectStore / RGW user
  Notes: When Rook deploys an Object Store (RGW), per-user and per-bucket throttling is controlled by Ceph RGW configuration (rgw_max_concurrent_requests, user/bucket quotas), not by Rook.
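Both limits above are expressed as requests_per_second, which clients typically enforce with a token bucket (this is also the semantics behind client-go's QPS/Burst settings). A minimal illustrative sketch in Python, not the actual client-go implementation:

```python
import time

class TokenBucket:
    """Illustrative token bucket mirroring client-go QPS/Burst semantics:
    tokens refill at `qps` per second up to `burst`; each request spends one."""

    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = TokenBucket(qps=5.0, burst=10)
# A fast loop exhausts the initial burst; further requests wait on the refill rate.
allowed = sum(1 for _ in range(20) if limiter.try_acquire())
print(allowed)
```

With `burst=10`, the first ten requests in a tight loop succeed and the rest are rejected, since almost no refill time elapses.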

Policies

Kubernetes-Native Throttling
All Rook control-plane traffic flows through the Kubernetes API server; clients should respect kube-apiserver 429 responses with Retry-After and configure client-go QPS and burst appropriately.
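One way a client can honor those 429 responses is to prefer the server's Retry-After value and fall back to capped exponential backoff with jitter when the header is absent. A sketch under that assumption; the helper name and defaults are ours, not a client-go API:

```python
import random

def backoff_delay(retry_after_header, attempt, base=0.5, cap=30.0):
    """Seconds to wait after a 429: prefer the server's Retry-After
    (delta-seconds form), else exponential backoff with jitter, capped."""
    if retry_after_header is not None:
        try:
            return max(0.0, float(retry_after_header))
        except ValueError:
            pass  # HTTP-date form not handled here; fall through to backoff
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)  # jitter in [delay/2, delay]
```

For example, `backoff_delay("7", attempt=0)` returns 7.0 seconds as instructed by the server, while `backoff_delay(None, attempt=3)` falls back to a jittered delay of at most 4 seconds.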
Ceph RGW Configuration
Object-storage rate, quota, and throttling policy is set on the Ceph cluster (radosgw config and user quotas) rather than at the Rook layer.
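One Rook-native way to pass such Ceph options through is the rook-config-override ConfigMap, which carries ceph.conf overrides into the cluster. The namespace and the value below are illustrative assumptions, not recommendations:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph   # assumed Rook operator namespace
data:
  config: |
    [global]
    # Illustrative value; tune for your RGW deployment.
    rgw_max_concurrent_requests = 1024
```

Per-user and per-bucket quotas are still administered on the Ceph side (e.g. via radosgw-admin), outside this ConfigMap.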
Self-Hosted Operator
Because Rook is self-hosted, there is no project-level SaaS quota or burst contract; operational limits are the operator's responsibility.
