For big projects, it’s usually best to use tools like Cloudflare Rate Limiting or HAProxy. These are powerful, reliable, and take care of the heavy lifting for you.
But for smaller projects, or if you want to learn how things work, you can create your own rate limiter right in your code.
By the end of this guide, you’ll know how to build a basic throttler in TypeScript to protect your APIs from being overwhelmed: defining a Throttler class with escalating timeouts, using it to guard an endpoint, resetting it after a successful login, and cleaning up stale entries.
This guide is designed to be a practical starting point, perfect for developers who want to learn the basics without unnecessary complexity. But it is not production-ready.
Before starting, I want to give proper credit to Lucia's Rate Limiting section.
Let’s define the Throttler class:
```ts
export class Throttler {
  private storage = new Map<string, { index: number; updatedAt: number }>();

  constructor(private timeoutSeconds: number[]) {}
}
```
The Throttler constructor accepts a list of timeout durations (timeoutSeconds). Each time a user is blocked, the duration increases progressively based on this list. Eventually, when the final timeout is reached, you could even trigger a callback to permanently ban the user’s IP—though that’s beyond the scope of this guide.
Here’s an example of creating a throttler instance that blocks users for increasing intervals:
```ts
const throttler = new Throttler([1, 2, 4, 8, 16]);
```
This instance blocks a user for one second the first time, two seconds the second time, and so on.
We use a Map to store IP addresses and their corresponding data. A Map is ideal because it handles frequent additions and deletions efficiently.
Pro Tip: Use a Map for dynamic data that changes frequently. For static, unchanging data, an object is better. (Rabbit hole 1)
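To make the tip concrete, here is a minimal sketch (the IP is just a placeholder) of the Map operations the throttler relies on:

```ts
// Same counter shape the throttler stores per IP.
const storage = new Map<string, { index: number; updatedAt: number }>();

// Entries come and go as users appear and are cleaned up;
// frequent set/delete cycles are exactly what Map is built for.
storage.set('203.0.113.7', { index: 0, updatedAt: Date.now() });
storage.has('203.0.113.7'); // true
storage.delete('203.0.113.7'); // true - entry removed
```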
When your endpoint receives a request, it extracts the user's IP address and consults the Throttler to determine whether the request should be allowed.
Case A: New or Inactive User
If the IP is not found in the Throttler, it’s either the user’s first request or they’ve been inactive long enough. In this case, we store a fresh counter for that IP and allow the request.
Case B: Active User
If the IP is found, the user has made previous requests. In this case, we need to check whether enough time has passed since the last block. An index into timeoutSeconds tells us which timeout currently applies: if not enough time has passed, we simply reject the request; otherwise we allow it, update the timestamp, and move to the next timeout.
```ts
export class Throttler {
  // ...

  public consume(key: string): boolean {
    const counter = this.storage.get(key) ?? null;
    const now = Date.now();

    // Case A
    if (counter === null) {
      // At the next request, the key will be found.
      // Index 0 of [1, 2, 4, 8, 16] returns 1:
      // that's the number of seconds the user will have to wait.
      this.storage.set(key, { index: 0, updatedAt: now });
      return true; // allowed
    }

    // Case B
    const timeoutMs = this.timeoutSeconds[counter.index] * 1000;
    const allowed = now - counter.updatedAt >= timeoutMs;
    if (!allowed) {
      return false; // denied
    }

    // Allow the call, but increase the timeout for following requests.
    counter.updatedAt = now;
    counter.index = Math.min(counter.index + 1, this.timeoutSeconds.length - 1);
    this.storage.set(key, counter);
    return true; // allowed
  }
}
```
When updating the index, Math.min caps it at the last index of timeoutSeconds. Without it, counter.index + 1 would eventually run past the end of the array, and the next this.timeoutSeconds[counter.index] access would return undefined, breaking the timeout calculation.
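To make the behaviour concrete, here is a small worked example; the IP is just a placeholder:

```ts
const throttler = new Throttler([1, 2, 4, 8, 16]);

throttler.consume('203.0.113.7'); // true  - first request, 1s timeout armed
throttler.consume('203.0.113.7'); // false - still inside the 1s window

// ...after waiting at least one second...
throttler.consume('203.0.113.7'); // true - allowed again, timeout escalates to 2s

// After enough allowed requests the index stays capped at the 16s timeout.
```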
The following example shows how to use the Throttler to limit how often a user can call your API. If a user makes too many requests, they get an error instead of reaching the main logic.
```ts
// Assuming a SvelteKit endpoint; `error` is SvelteKit's HTTP error helper.
import { error } from '@sveltejs/kit';

const throttler = new Throttler([1, 2, 4, 8, 16, 30, 60, 300]);

export async function GET({ getClientAddress }) {
  const IP = getClientAddress();

  if (!throttler.consume(IP)) {
    throw error(429, { message: 'Too Many Requests' });
  }

  // Read from DB, call OpenAI - do the thing.
  return new Response(null, { status: 200 });
}
```
When combining rate limiting with a login system, keying on the IP alone can cause problems: several users may share the same IP address (for example behind a corporate network or NAT), so one person’s failed attempts can block everyone else, and a legitimate user can stay throttled even after they eventually sign in successfully.
To prevent this, use the user’s unique userID instead of their IP for rate limiting, and reset the throttler state after a successful login so the user isn’t blocked unnecessarily on later attempts.
Add a reset method to the Throttler class:
```ts
export class Throttler {
  // ...

  public reset(key: string): void {
    this.storage.delete(key);
  }
}
```
And use it after a successful login:
```ts
const user = db.get(email);

if (!throttler.consume(user.id)) {
  throw error(429);
}

const validPassword = verifyPassword(user.password, providedPassword);
if (!validPassword) {
  throw error(401);
}

throttler.reset(user.id); // Clear throttling for the user
```
As your throttler tracks IPs and rate limits, it's important to think about how and when to remove IP records that are no longer needed. Without a cleanup mechanism, your throttler will continue to store records in memory, potentially leading to performance issues over time as the data grows.
To prevent this, you can implement a cleanup function that periodically removes old records after a certain period of inactivity. Here's an example of how to add a simple cleanup method to remove stale entries from the throttler.
```ts
export class Throttler {
  // ...

  public cleanup(): void {
    const now = Date.now();

    // Capture the keys first to avoid issues during iteration (we use .delete)
    const keys = Array.from(this.storage.keys());

    for (const key of keys) {
      const counter = this.storage.get(key);
      if (!counter) {
        // Skip if the counter was already deleted (handles concurrent removals)
        continue;
      }

      // If the IP is at the first timeout, remove it from storage
      if (counter.index === 0) {
        this.storage.delete(key);
        continue;
      }

      // Otherwise, reduce the timeout index and update the timestamp
      counter.index -= 1;
      counter.updatedAt = now;
      this.storage.set(key, counter);
    }
  }
}
```
A very simple (but probably not the best) way to schedule the cleanup is with setInterval:
```ts
const throttler = new Throttler([1, 2, 4, 8, 16, 30, 60, 300]);

const oneMinute = 60_000;
setInterval(() => throttler.cleanup(), oneMinute);
```
This cleanup mechanism helps ensure that your throttler doesn't hold onto old records indefinitely, keeping your application efficient. While this approach is simple and easy to implement, it may need further refinement for more complex use cases (e.g., using more advanced scheduling or handling high concurrency).
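For instance, one small refinement (a sketch, assuming you control the server’s lifecycle) is to keep a handle on the interval so the cleanup job can be stopped during tests or a graceful shutdown:

```ts
const throttler = new Throttler([1, 2, 4, 8, 16, 30, 60, 300]);

// Keep a reference to the scheduled job instead of firing and forgetting it.
const cleanupJob = setInterval(() => throttler.cleanup(), 60_000);

// Later, e.g. on server shutdown or at the end of a test suite:
clearInterval(cleanupJob);
```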
With periodic cleanup, you prevent memory bloat and ensure that users who haven’t attempted to make requests in a while are no longer tracked - this is a first step toward making your rate-limiting system both scalable and resource-efficient.
If you’re feeling adventurous, you may be interested in reading about how object properties are allocated and how that changes over time. Also, why not, about VM optimizations like inline caches, which particularly benefit from monomorphism. Enjoy. ↩