
Dataiku Rate Limits

Dataiku DSS is typically self-hosted (or hosted in a customer's cloud tenant), so there are no global vendor-imposed rate limits on the Public API or Internal API. Throughput is bounded by the DSS backend node CPU/memory, the API Node deployment for real-time scoring, and per-API-service quotas defined by the operator. The Cloud Trial exposes a fixed compute envelope (4 CPUs, 32 GB) which effectively caps trial throughput.


Limits

Cloud Trial compute envelope
  Scope: trial-tenant
  Unit: varies
  Value: 4 CPUs and 32 GB elastic compute
  Notes: Trial throughput is capped by allocated CPU/memory rather than a request-per-second ceiling.

DSS backend capacity
  Scope: deployment
  Unit: requests_per_second
  Value: operator-defined; bounded by backend node sizing
  Notes: Self-hosted; Public and Internal API throughput depends on DSS backend node sizing.

API Node service quotas
  Scope: per-service
  Unit: requests_per_second
  Value: operator-defined per API Service
  Notes: Real-time scoring services on the API Node can be configured with per-service rate limits and concurrency caps via the Administration API.
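Because DSS imposes no vendor-side request ceiling, staying within an operator-defined requests_per_second quota is the client's job. A minimal client-side token-bucket sketch (the class name and parameters are illustrative, not part of any Dataiku API):

```python
import time

class TokenBucket:
    """Client-side limiter for an operator-defined requests_per_second quota."""

    def __init__(self, rate_per_sec, burst=None):
        self.rate = float(rate_per_sec)
        # Allow short bursts up to `burst` requests; default to one second's worth.
        self.capacity = float(burst if burst is not None else rate_per_sec)
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the next token to accrue.
            time.sleep((1 - self.tokens) / self.rate)
```

Calling `bucket.acquire()` before each scoring request keeps sustained client throughput at or below the configured per-service quota.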

Policies

API Node Sizing
For real-time scoring, scale API Node replicas horizontally and use a load balancer; per-service concurrency is controlled in the API Service deployment configuration.
Backoff Strategy
Clients should retry on 429/503 with exponential backoff and jitter.
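The retry policy above can be sketched as follows; `call` stands in for any HTTP request to a DSS endpoint (it is abstracted here so the retry logic is self-contained, and the function name is illustrative):

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.5, max_delay=30.0,
                       retryable=(429, 503)):
    """Retry `call` (which returns a (status, body) pair) on 429/503
    using exponential backoff with full jitter."""
    for attempt in range(max_attempts):
        status, body = call()
        if status not in retryable:
            return status, body
        if attempt == max_attempts - 1:
            break
        # Full jitter: sleep a random duration in [0, min(max_delay, base * 2^attempt)].
        time.sleep(random.uniform(0, min(max_delay, base_delay * 2 ** attempt)))
    return status, body
```

Jitter spreads retries from many clients across time, avoiding the synchronized retry storms that a fixed backoff schedule produces.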
Public API vs Internal API
The Public API is the supported integration surface for Projects, Datasets, and Workflows. The Internal API is undocumented and may change without notice; use sparingly.
Bulk Operations
Prefer bulk endpoints (multi-row scoring, bulk dataset write) over per-row calls to reduce request volume.
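A simple batching helper illustrates the savings: N rows cost ceil(N / batch_size) requests instead of N. `score_batch` is a hypothetical placeholder for a bulk call such as multi-record scoring, not a Dataiku API:

```python
def chunked(rows, batch_size=100):
    """Yield fixed-size batches of rows."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def score_all(rows, score_batch, batch_size=100):
    """Score all rows via a bulk endpoint, one request per batch.

    `score_batch` is a hypothetical callable wrapping a multi-row
    scoring request; it takes a list of rows and returns a list of
    results in the same order.
    """
    results = []
    for batch in chunked(rows, batch_size):
        results.extend(score_batch(batch))
    return results
```

With batch_size=100, scoring 250 rows issues 3 requests rather than 250, which matters most when the operator has set tight per-service request quotas.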
