
Backstage Rate Limits

Backstage is a self-hosted open-source application; there is no central provider-published rate limit. Rate limits in a Backstage deployment derive from (a) the operator's own ingress / API gateway, (b) the underlying integrations (GitHub, GitLab, Bitbucket, Jenkins, Kubernetes, etc.) that Backstage proxies, and (c) the database / queue backends. Commercial distributions (Spotify Portal, Roadie, Red Hat Developer Hub) may publish their own SaaS-tenant limits.

Six limits are cataloged below; throttled requests surface as HTTP 429.

Limits

| Limit | Scope | Value | Details |
|-------|-------|-------|---------|
| Operator-configured ingress rate limit | deployment | varies | See the operator's reverse proxy / API gateway configuration. |
| GitHub integration throttling | integration | varies | See GitHub REST/GraphQL rate limits (5,000 req/hour authenticated; 15,000 GraphQL points/hour). |
| GitLab integration throttling | integration | varies | See GitLab.com or self-hosted GitLab rate limit settings. |
| Bitbucket integration throttling | integration | varies | See Bitbucket API rate limits (1,000 req/hour anonymous; higher when authenticated). |
| Kubernetes API throttling | integration | varies | See kube-apiserver QPS / burst configuration on the target cluster. |
| TechDocs build queue | deployment | varies | Operator-configured. |

Policies

Configure backend rate limiting
Production Backstage deployments should add rate limiting at the reverse proxy / API gateway tier (e.g., NGINX limit_req, Envoy local_rate_limit) to protect the backend.
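As a sketch, an NGINX front end for the Backstage backend might apply `limit_req` like this (the zone name, rate, hostname, and upstream address are illustrative assumptions, not Backstage defaults):

```nginx
# Must live in the http{} context: track clients by IP,
# allow a sustained 10 requests/second per client.
limit_req_zone $binary_remote_addr zone=backstage:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name backstage.example.com;   # illustrative hostname

    location /api/ {
        # Permit short bursts of 20 extra requests, then reject.
        limit_req zone=backstage burst=20 nodelay;
        # Return 429 (not the default 503) so clients can back off.
        limit_req_status 429;
        proxy_pass http://backstage-backend:7007;
    }
}
```

Setting `limit_req_status 429` keeps the behavior consistent with the upstream integrations, which also signal throttling with HTTP 429.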
Cache integration calls
Backstage caches catalog data in its database; tune refresh intervals to avoid hammering upstream GitHub/GitLab/Jenkins APIs.
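With the GitHub catalog providers, for example, the discovery schedule in `app-config.yaml` controls how often Backstage polls GitHub; a longer `frequency` spends less of the upstream API quota (the provider id and organization name below are illustrative):

```yaml
catalog:
  providers:
    github:
      exampleProvider:              # illustrative provider id
        organization: 'example-org' # illustrative org
        schedule:
          # Poll GitHub every 30 minutes; longer intervals
          # reduce pressure on the upstream rate limit.
          frequency: { minutes: 30 }
          timeout: { minutes: 3 }
```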
Honor upstream Retry-After
Plugins that proxy upstream APIs should propagate Retry-After to consumers and back off when 429 is returned.
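A plugin doing this needs to parse `Retry-After`, which per RFC 9110 may be either delta-seconds or an HTTP date. A minimal helper, assuming the caller supplies the raw header value (the function name is illustrative):

```typescript
// Compute how long to wait (in ms) before retrying, given the raw
// Retry-After header from a 429 response. Returns 0 when the header
// is absent or unparseable, i.e. "retry at the caller's discretion".
export function retryAfterMs(
  header: string | null,
  nowMs: number = Date.now(),
): number {
  if (!header) return 0;
  // Form 1: delta-seconds, e.g. "120".
  const seconds = Number(header);
  if (!Number.isNaN(seconds)) return Math.max(0, seconds * 1000);
  // Form 2: HTTP date, e.g. "Wed, 21 Oct 2015 07:28:00 GMT".
  const dateMs = Date.parse(header);
  return Number.isNaN(dateMs) ? 0 : Math.max(0, dateMs - nowMs);
}
```

A proxying plugin can sleep for `retryAfterMs(...)` before retrying, and also copy the header onto its own 429 response so downstream consumers see the same signal.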
Use authenticated tokens
Always integrate with GitHub/GitLab using authenticated tokens (PAT or GitHub App) to get the higher rate-limit allotment.
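In `app-config.yaml`, a GitHub token is wired into the integration like this (the environment variable name is an assumption; Backstage substitutes `${...}` from the environment):

```yaml
integrations:
  github:
    - host: github.com
      # Authenticated requests get the 5,000 req/hour REST allotment
      # rather than the much lower anonymous limit.
      token: ${GITHUB_TOKEN}
```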
Distribute Scaffolder workers
For high template-execution volume, deploy multiple Scaffolder workers behind a queue to avoid synchronous bottlenecks.
