
API Rate Limits, OAuth Tokens, and Other Integration Challenges (Solved)

Pepijn van Unen

The Five Problems You Will Hit

If you are connecting business systems via APIs, you will encounter the same set of problems regardless of which tools you are integrating. We have hit all of them across dozens of integration projects, and we have settled on reliable solutions for each. This post is the reference we wish we had when we started.

1. OAuth Token Expiry

The Problem

Most modern APIs use OAuth 2.0 for authentication. You get an access token and a refresh token. The access token expires -- sometimes in an hour, sometimes in 10 minutes. Exact Online, which is common in Dutch SMBs, expires access tokens every 10 minutes. If your integration does not handle this correctly, it fails silently. The API returns a 401, your sync stops, and nobody notices until a customer calls asking why their invoice never arrived.

The refresh token has its own expiry, often 30-60 days. Miss that window, and you need to re-authenticate manually through the browser-based OAuth flow. For a system that is supposed to run unattended, this is a serious problem.

The Solution

Proactive refresh. Do not wait for a 401 to refresh your token. Track the token expiry timestamp and refresh 60-90 seconds before it expires. This eliminates the window where your integration is running with an expired token.

# refresh ~90 seconds before expiry, not after the first 401
if token.expires_at - current_time() < 90:  # seconds
    refresh_token()

Refresh token rotation. Some APIs (including Exact Online) issue a new refresh token every time you use the current one. Store the new refresh token immediately and atomically -- if your storage write fails after the refresh call, you have lost your only valid refresh token and need manual re-authentication.
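To make the "atomic" part concrete, here is a minimal sketch of token storage using SQLite, where both tokens and the expiry land in a single transaction. The table name and the single-row upsert scheme are our own illustration, not any provider's API:

```python
import sqlite3
import time

def store_tokens_atomically(db_path, access_token, refresh_token, expires_in):
    """Persist both tokens in one transaction, so a crash mid-write can
    never leave us holding a refresh token already invalidated server-side."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute(
                """CREATE TABLE IF NOT EXISTS oauth_tokens
                   (id INTEGER PRIMARY KEY CHECK (id = 1),
                    access_token TEXT, refresh_token TEXT, expires_at REAL)"""
            )
            conn.execute(
                """INSERT INTO oauth_tokens (id, access_token, refresh_token, expires_at)
                   VALUES (1, ?, ?, ?)
                   ON CONFLICT(id) DO UPDATE SET
                     access_token = excluded.access_token,
                     refresh_token = excluded.refresh_token,
                     expires_at = excluded.expires_at""",
                (access_token, refresh_token, time.time() + expires_in),
            )
    finally:
        conn.close()
```

The same idea applies with any store that supports transactions; the point is that the new access token and new refresh token are never persisted separately.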

Alerting on auth failures. Even with proactive refresh, things go wrong. The OAuth provider has an outage. A refresh token gets invalidated server-side. Set up immediate alerts (email, Slack, SMS) when authentication fails, not just logging. The difference between "we noticed in 5 minutes" and "we noticed in 3 days" is significant.

Token storage security. Store tokens encrypted at rest. Never log full token values. Use a secrets manager or encrypted database, not a plain text config file. This is basic security, but we have seen it done wrong more often than right.

2. API Rate Limits

The Problem

Every API has rate limits. Exact Online allows 60 requests per minute per company division. Pipedrive allows 80 requests per 2 seconds on most plans. Shopify's REST Admin API uses a leaky bucket of 40 requests per app per store, refilling at 2 requests per second. Exceed these and you get a 429 (Too Many Requests) response, and many APIs will temporarily block you if you keep hitting the limit.

The naive approach -- "just add a delay between requests" -- works for simple syncs but falls apart when you have multiple integration processes sharing the same API quota, or when you need to do an initial bulk sync of thousands of records.

The Solution

Delta sync, not full sync. Instead of pulling all records every time, track the last sync timestamp and only request records modified since then. This reduces API calls by 90-99% during normal operation. Most APIs support modified_after or updated_since filters. Budget your rate limit for normal operation and you will rarely hit it.
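A delta sync loop fits in a few lines. In this sketch, `fetch_page` and `process` are hypothetical stand-ins for your API call and your write step, and `state` is any persistent key-value store; note that the watermark only advances after the whole batch succeeds:

```python
import time

def delta_sync(fetch_page, process, state):
    """Pull only records changed since the last successful sync."""
    last_sync = state.get("last_sync", 0)  # epoch seconds; 0 = first full sync
    started = time.time()                  # capture BEFORE fetching, so records
                                           # modified mid-sync are caught next run
    records = fetch_page(modified_after=last_sync)
    for record in records:
        process(record)
    # only advance the watermark once every record has been handled
    state["last_sync"] = started
    return len(records)
```

Capturing the timestamp before the fetch means an occasional record is processed twice, which is harmless if your writes are idempotent; the reverse ordering would silently skip records.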

Centralized rate tracking. If multiple processes call the same API, they need to share a rate limit counter. Use a shared semaphore, a Redis counter, or a simple database timestamp. Without this, Process A and Process B each think they have the full quota and collectively exceed it.
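The accounting behind a shared counter is a token bucket. This sketch shares one bucket between threads in a single process; for multiple processes you would back the same logic with a Redis counter or a database row, but the arithmetic is identical:

```python
import threading
import time

class SharedRateLimiter:
    """Token bucket shared by all workers, guarded by a lock."""

    def __init__(self, max_requests, per_seconds):
        self.capacity = max_requests
        self.refill_rate = max_requests / per_seconds  # tokens per second
        self.tokens = float(max_requests)
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def try_acquire(self):
        """Return True and consume one slot if the quota allows a request now."""
        with self.lock:
            now = time.monotonic()
            # refill proportionally to the time elapsed, capped at capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.refill_rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False
```

A caller that gets `False` waits and retries, instead of sending the request and eating a 429.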

Respect Retry-After headers. When you do get rate limited, the API usually tells you how long to wait via a Retry-After header. Use it. Do not guess.

Exponential backoff with jitter. For retries without a Retry-After header, use exponential backoff: wait 1 second, then 2, then 4, then 8, up to a maximum. Add random jitter (plus or minus 20%) to prevent thundering herd problems where multiple processes retry simultaneously.
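The schedule above (1, 2, 4, 8 seconds, plus or minus 20%, up to a cap) can be written as a small generator:

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=60.0, jitter=0.2):
    """Yield wait times: base * 2^attempt, capped, with +/-20% random jitter."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield delay * random.uniform(1 - jitter, 1 + jitter)
```

Typical use: loop over `backoff_delays()`, sleep for each yielded value between attempts, and give up (or dead-letter the request) when the generator is exhausted.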

Batch endpoints. Many APIs offer batch endpoints that let you create, update, or read multiple records in a single request. One batch request for 50 records uses one rate limit slot instead of 50. Always check the API documentation for batch capabilities before building record-by-record sync.

3. Handling API Downtime

The Problem

APIs go down. Scheduled maintenance, unexpected outages, network issues, DNS failures. If your integration treats a failed API call as a permanent failure, you lose data. If it retries indefinitely without backoff, you flood a recovering service with requests and make things worse.

The Solution

Classify errors. Not all failures are the same:

  • 4xx errors (except 429): Usually your fault. Bad request, missing field, invalid data. Do not retry automatically -- fix the request.
  • 429 errors: Rate limit. Retry after the specified delay.
  • 5xx errors: Server problem. Retry with exponential backoff.
  • Timeouts: Network issue. Retry once immediately, then back off.

Implement a dead letter queue. When a request fails after all retries (we typically cap at 5 attempts over 15 minutes), put it in a dead letter queue for manual inspection rather than dropping it. A dead letter queue is simply a holding area -- a database table or a message queue -- where failed operations wait for human review.

Circuit breaker pattern. If an API returns errors on 5 consecutive calls, stop calling it for a cooldown period (e.g., 5 minutes). This prevents your system from wasting resources hammering a dead endpoint and gives the API time to recover. After the cooldown, send a single test request. If it succeeds, resume normal operation.
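A minimal circuit breaker needs only a failure counter and a timestamp; this sketch uses the thresholds from the paragraph above (5 consecutive failures, 5-minute cooldown):

```python
import time

class CircuitBreaker:
    """Open after `threshold` consecutive failures; half-open after `cooldown`."""

    def __init__(self, threshold=5, cooldown=300):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow_request(self):
        if self.opened_at is None:
            return True  # circuit closed: normal operation
        if time.monotonic() - self.opened_at >= self.cooldown:
            return True  # half-open: let one test request through
        return False     # circuit open: skip the call entirely

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()
```

Wrap every call to the flaky API: check `allow_request()` first, then report the outcome with `record_success()` or `record_failure()`.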

Status page monitoring. Most SaaS APIs publish a status page. Monitor them. If Exact Online is reporting an outage, you can proactively pause syncs rather than accumulating thousands of failed requests.

4. Data Format Mismatches

The Problem

System A stores phone numbers as +31612345678. System B stores them as 06-12345678. System C stores them as (06) 1234 5678. They are all the same number, but your deduplication logic thinks they are three different contacts.

This problem extends to dates (ISO 8601 vs. DD/MM/YYYY vs. MM/DD/YYYY), currencies (cents vs. decimal euros), addresses (single field vs. structured fields), and virtually every other data type that has no universal standard.

The Solution

Normalize at the boundary. Define a canonical format for every data type in your integration layer. Phone numbers become E.164 format (+31612345678). Dates become ISO 8601. Amounts become integers in the smallest currency unit (cents). Normalize incoming data as the first step and denormalize outgoing data as the last step.
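For the phone number example, a deliberately simplified Dutch-number normalizer looks like this; production code should use a dedicated library such as `phonenumbers` rather than hand-rolled rules:

```python
import re

def normalize_nl_phone(raw):
    """Normalize common Dutch phone formats to E.164 (+31...).
    Simplified for illustration: assumes Dutch numbers only."""
    digits = re.sub(r"[^\d+]", "", raw)   # strip spaces, dashes, parentheses
    if digits.startswith("+31"):
        return digits
    if digits.startswith("0031"):
        return "+31" + digits[4:]
    if digits.startswith("0"):
        return "+31" + digits[1:]         # drop national trunk prefix
    return digits
```

With this in place, the three variants from the problem statement all collapse to the same key, and deduplication works.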

Map fields explicitly. Never assume fields will match between systems. Build and maintain a field mapping document that specifies: source field, destination field, transformation rules, and what to do when the source field is empty. This document becomes your single source of truth for data transformation.

Validate before writing. After transforming data, validate it against the destination system's requirements before sending the API call. Catching a missing required field or an invalid enum value before the request saves you a failed call and a confusing error.

We cover field mapping strategies and transformation patterns in more detail in our system integration best practices guide.

5. Webhook Reliability

The Problem

Webhooks are the foundation of real-time integrations. System A sends an HTTP POST to your endpoint when something happens. But webhooks are fire-and-forget: the sender makes one attempt (sometimes a few retries) and moves on. If your endpoint is down, overloaded, or returns an error, you miss the event. There is no built-in acknowledgment protocol beyond the HTTP response code.

The Solution

Acknowledge immediately, process later. Your webhook endpoint should return a 200 OK within 1-2 seconds and put the payload in a queue for processing. If you try to process the webhook synchronously -- calling other APIs, transforming data, writing to databases -- you risk timeouts that the sender interprets as failure.

Implement idempotency. Webhooks can fire more than once for the same event. Always check whether you have already processed an event before acting on it. Use the event ID or a hash of the payload as a deduplication key.
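The deduplication check is a few lines once you pick the key. This sketch keeps seen keys in an in-memory set for brevity; a real deployment would use a database table or a Redis set with a TTL:

```python
import hashlib
import json

processed = set()  # in production: a DB table or Redis set with an expiry

def handle_webhook(payload):
    """Process each event at most once, keyed on event ID or payload hash."""
    key = payload.get("event_id") or hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    if key in processed:
        return "duplicate"   # already handled: acknowledge and do nothing
    processed.add(key)
    # ... enqueue the payload for real processing here ...
    return "accepted"
```

Prefer the sender's event ID when the API provides one; the payload hash is a fallback that can miss duplicates if the sender includes a per-delivery timestamp in the body.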

Reconciliation jobs. Even with perfect webhook handling, run a periodic reconciliation job that compares records across systems. This catches anything that slipped through: missed webhooks, processing errors, race conditions. We typically run reconciliation every 6-12 hours depending on data sensitivity.

Monitor webhook lag. Track the time between when an event occurs in the source system and when your integration processes it. If this lag increases, you have a queue backup, a processing bottleneck, or a missed webhook accumulating downstream problems.

Deciding Whether to Build

If you are evaluating whether to build API integrations in-house or outsource them, these five problems are the real complexity. The happy path -- reading data from one API and writing to another -- is straightforward. The engineering effort lives in handling the edge cases: the expired token at 3 AM, the rate limit during a bulk import, the webhook that fires twice.

For a deeper look at the build vs. buy decision, including cost comparisons at different scales, see our build vs. buy analysis. And if you are using n8n as your integration platform, many of these patterns can be implemented within its workflow engine rather than from scratch.

The tools and patterns described here are well established. The challenge is implementing all of them consistently across every integration point. Miss one, and it becomes the failure mode that wakes you up on a Saturday morning.

Want results like this?

Book a free 30-minute call. We'll map your processes and tell you honestly which ones are worth automating.
