Respecting Rate Limits

Wait and retry using response headers

When you call the HelloData API, every response includes rate limit headers. To avoid hitting the limiter (or to recover quickly when you do), read the RateLimit header and wait the number of seconds specified by its t parameter before retrying.

Which header to use

We use the IETF RateLimit header format (draft-ietf-httpapi-ratelimit-headers). Each response includes two headers:

  • RateLimit: includes remaining requests (r) and seconds until reset (t)
  • RateLimit-Policy: describes the policy (quota q and window w)

Example RateLimit header:

"default";r=0;t=12

In that example, you should wait 12 seconds before retrying.

Python example

This example retries a request when it receives a 429 Too Many Requests, waiting t seconds from the RateLimit header before retrying.

```python
import time
import re
import requests


def parse_ratelimit_t_seconds(rate_limit_header: str | None) -> int | None:
    """
    Parses the IETF RateLimit header and returns the `t` value (seconds until reset).
    Example header: '"default";r=0;t=12'
    """
    if not rate_limit_header:
        return None

    # Match ;t=<int> anywhere in the header value
    m = re.search(r"(?:^|;)t=(\d+)(?:;|$)", rate_limit_header)
    if not m:
        return None
    return int(m.group(1))


def get_with_rate_limit_handling(url: str, headers: dict[str, str], *, max_retries: int = 5):
    for attempt in range(max_retries + 1):
        resp = requests.get(url, headers=headers, timeout=30)

        if resp.status_code != 429:
            return resp

        wait_s = parse_ratelimit_t_seconds(resp.headers.get("RateLimit"))
        if wait_s is None:
            # If the header is missing/unparseable, fall back to a small backoff.
            wait_s = min(2 ** attempt, 30)

        time.sleep(wait_s)

    return resp  # last response (429)


# Example usage:
# API_KEY = "..."
# r = get_with_rate_limit_handling(
#     "https://api.hellodata.ai/v1/property/search",
#     headers={"Authorization": f"Bearer {API_KEY}"},
# )
# r.raise_for_status()
# data = r.json()
```

TypeScript example

This example does the same thing using fetch (Node 18+).

```typescript
function parseRateLimitTSeconds(rateLimitHeader: string | null): number | null {
  if (!rateLimitHeader) return null;

  // Example header: `"default";r=0;t=12`
  const match = rateLimitHeader.match(/(?:^|;)t=(\d+)(?:;|$)/);
  if (!match) return null;
  return Number(match[1]);
}

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

export async function fetchWithRateLimitHandling(
  url: string,
  init: RequestInit,
  { maxRetries = 5 }: { maxRetries?: number } = {}
): Promise<Response> {
  let lastResponse: Response | undefined;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, init);
    lastResponse = res;

    if (res.status !== 429) {
      return res;
    }

    const tSeconds = parseRateLimitTSeconds(res.headers.get("RateLimit"));
    const waitSeconds = tSeconds ?? Math.min(2 ** attempt, 30);

    await sleep(waitSeconds * 1000);
  }

  return lastResponse!;
}

// Example usage:
// const API_KEY = process.env.HELLODATA_API_KEY!;
// const res = await fetchWithRateLimitHandling("https://api.hellodata.ai/v1/property/search", {
//   method: "GET",
//   headers: { Authorization: `Bearer ${API_KEY}` },
// });
// if (!res.ok) throw new Error(await res.text());
// const json = await res.json();
```

Notes and best practices

  • Proactively slow down when r is low: if you’re near r=0, add a small delay between requests.
  • Prefer concurrency limits over retries: cap parallel requests (e.g. with a semaphore / queue) so you don’t burst over the minute window.
  • Always handle missing headers: fall back to a small exponential backoff if the RateLimit header is absent or unparseable.