
Amazon S3 Rate Limits

Amazon S3 enforces request limits per partitioned prefix within a bucket: 3,500 PUT/COPY/POST/DELETE requests per second and 5,500 GET/HEAD requests per second. Throughput scales nearly linearly when keys are sharded across additional prefixes. Bucket-level operations (CreateBucket, DeleteBucket, etc.) fall under the general AWS API throttling envelope. On SlowDown (HTTP 503) responses, retry with exponential backoff and jitter.
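For example, keys spread evenly across 10 partitioned prefixes can sustain roughly 10 × 3,500 = 35,000 write requests and 10 × 5,500 = 55,000 read requests per second.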

4 limits · Throttle response: HTTP 503 (SlowDown) · Quota-exceeded response: HTTP 429
Tags: Rate Limiting, Object Storage, S3

Limits

PUT/COPY/POST/DELETE per prefix
  Scope: bucket-prefix · Limit: 3,500 requests per second
  Per partitioned prefix; shard keys across prefixes to scale nearly linearly.

GET/HEAD per prefix
  Scope: bucket-prefix · Limit: 5,500 requests per second
  Per partitioned prefix.

Bucket count per account
  Scope: account · Limit: 10,000 buckets
  Soft limit; raisable via Service Quotas (the default has ranged from 100 to 10,000).

S3 Express One Zone directory bucket
  Scope: bucket · Limit: see the S3 Express One Zone docs (requests per second)
  Directory buckets support much higher per-bucket request rates (millions of requests per second).

Policies

Backoff on SlowDown
SlowDown (HTTP 503) responses indicate prefix-level throttling. Retry with exponential backoff and jitter; AWS SDKs default to standard retry mode.
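As a minimal sketch, the boto3 configuration below enables the SDK's standard retry mode, which backs off exponentially with jitter on throttling errors such as 503 SlowDown; the bucket and key names are placeholders:

    import boto3
    from botocore.config import Config

    # Standard retry mode retries throttling errors (including S3's 503 SlowDown)
    # with exponential backoff and jitter; "adaptive" adds client-side rate limiting.
    s3 = boto3.client(
        "s3",
        config=Config(retries={"max_attempts": 10, "mode": "standard"}),
    )

    # Placeholder bucket/key; a throttled call is retried automatically by the SDK.
    s3.put_object(Bucket="example-bucket", Key="logs/app.log", Body=b"payload")
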
Prefix sharding
For very high throughput, distribute keys across multiple prefixes (e.g. with short hash prefixes); the per-prefix request limits then apply to each prefix independently.
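A rough sketch of hash-based key sharding; the helper name and shard count are illustrative, not part of any AWS API:

    import hashlib

    def sharded_key(key: str, shards: int = 16) -> str:
        # Derive a short, deterministic prefix from the key so objects
        # spread evenly across S3 partitioned prefixes.
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        shard = int(digest[:4], 16) % shards
        return f"{shard:02x}/{key}"

    # e.g. "images/cat.jpg" might map to "0b/images/cat.jpg"; the prefix depends on the hash.
    print(sharded_key("images/cat.jpg"))
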
Multipart upload
For files larger than 100 MB, use multipart upload (parallel parts) to reduce wall-clock time and benefit from prefix-level concurrency.
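A minimal boto3 sketch using TransferConfig; the threshold mirrors the 100 MB guidance above, and the file, bucket, and key names are placeholders:

    import boto3
    from boto3.s3.transfer import TransferConfig

    # Switch to multipart upload above ~100 MB and upload parts in parallel.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,  # use multipart at 100 MB and above
        multipart_chunksize=16 * 1024 * 1024,   # 16 MB parts
        max_concurrency=8,                      # parallel part uploads
    )

    s3 = boto3.client("s3")
    s3.upload_file("backup.tar.gz", "example-bucket", "backups/backup.tar.gz", Config=config)
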
Quota increases
The bucket-count limit and certain other account-level limits are soft and can be raised via Service Quotas; per-prefix request rates are not adjustable quotas and scale by adding prefixes instead.
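A hedged sketch of inspecting and requesting S3 quota increases through the Service Quotas API; quota codes vary per quota, so they are listed rather than hard-coded:

    import boto3

    quotas = boto3.client("service-quotas")

    # List S3 quotas to find the code and current value of the one to raise.
    paginator = quotas.get_paginator("list_service_quotas")
    for page in paginator.paginate(ServiceCode="s3"):
        for quota in page["Quotas"]:
            print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

    # Request an increase with the quota code found above (illustrative values).
    # quotas.request_service_quota_increase(
    #     ServiceCode="s3", QuotaCode="L-XXXXXXXX", DesiredValue=500
    # )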
