Zero-config start
Works out of the box with built-in limits for OpenAI, Anthropic, Google, Groq, Mistral, and Cohere models. No setup required.
Smart queuing, cost tracking, and budget enforcement for Vercel AI SDK and raw OpenAI/Anthropic clients. Zero required dependencies.
```sh
# npm
npm install ai-sdk-rate-limiter

# pnpm
pnpm add ai-sdk-rate-limiter

# yarn
yarn add ai-sdk-rate-limiter

# Deno
deno add jsr:@piyushgupta344/ai-sdk-rate-limiter

# Node.js via npx
npx jsr add @piyushgupta344/ai-sdk-rate-limiter
```

```ts
import { createRateLimiter } from 'ai-sdk-rate-limiter'
import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'

const limiter = createRateLimiter()
const model = limiter.wrap(openai('gpt-4o'))

const { text } = await generateText({ model, prompt: 'Hello!' })
```

That's it. The limiter automatically applies the built-in rate limits for gpt-4o, queues requests when the limit is reached, retries on 429s, and tracks cost.
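To make the behavior concrete, here is a minimal sketch of the general technique a limiter like this applies: a token bucket that gates requests, plus exponential-backoff retry on HTTP 429. This is illustrative only, not the library's actual internals; the names `TokenBucket` and `withRetry` are hypothetical.

```typescript
// Hypothetical sketch of rate limiting + 429 retry (not the library's real code).

// A token bucket: requests spend tokens, which refill over time.
class TokenBucket {
  private tokens: number
  private last = Date.now()
  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity
  }
  // Returns true if a request may proceed now, false if it should queue.
  tryTake(): boolean {
    const now = Date.now()
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSec,
    )
    this.last = now
    if (this.tokens >= 1) {
      this.tokens -= 1
      return true
    }
    return false
  }
}

// Retry a call on 429 responses with exponential backoff.
async function withRetry<T>(fn: () => Promise<T>, retries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err: any) {
      if (err?.status !== 429 || attempt >= retries) throw err
      await new Promise((r) => setTimeout(r, 2 ** attempt * 100)) // back off, then retry
    }
  }
}
```

The real limiter layers model-specific limits, queuing, and cost accounting on top of this pattern, so you never implement it by hand.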
→ Full guide