
How to Integrate with the Pipedrive API (2026 Engineering Guide)

A technical guide to building reliable Pipedrive API integrations in 2026, covering token-based rate limits, 40-character custom field hashes, OAuth 2.0, and pagination quirks.

Roopendra Talekar · 11 min read

If you're building a B2B SaaS product that touches sales data, your customers are going to ask for a Pipedrive integration. Answering that request requires understanding a few architectural realities: Pipedrive has fully migrated to a strict Token-Based Rate Limit (TBRL) system, their custom fields are represented by dynamic 40-character hashes, and their pagination methods are split across API versions.

This guide breaks down exactly how to handle these engineering challenges — authentication, the new token-based rate limits, the custom field mapping problem, and pagination quirks across v1 and v2 endpoints — without letting integration maintenance eat your product roadmap alive.

Why Pipedrive Is a Must-Have Integration in 2026

Pipedrive has surpassed 100,000 customers, operates across 170 countries, and employs over 900 people. It's no longer just a CRM for small startups; it's deeply entrenched in mid-market sales teams. It's the third or fourth CRM your enterprise buyers will ask for, right behind Salesforce and HubSpot.

When enterprise buyers evaluate your software, they look at your integration directory. As we noted in our breakdown of the most requested integrations for B2B sales tools, if your product doesn't plug into the tools sales reps already use, it gets abandoned after onboarding. Reps spend 70% of their week on administrative work and data entry. Every time a user has to switch out of your platform to manually update a Deal stage or log a Call in Pipedrive, you lose product stickiness.

Pipedrive's API has also been undergoing significant changes. A set of API v2 endpoints officially moved out of beta in March 2025, and Pipedrive is encouraging all developers to migrate to v2 to stay within their API token budget. If you're starting a new integration in 2026, you need to build for both v1 and v2 simultaneously — because not every resource has a v2 endpoint yet.

But building a Pipedrive integration is not as simple as throwing a few fetch requests at a REST endpoint. You're building a distributed system that must respect strict provider constraints.

Authentication: API Tokens vs. OAuth 2.0

Pipedrive offers two ways to authenticate: personal API tokens and OAuth 2.0.

| | API Token | OAuth 2.0 |
| --- | --- | --- |
| Use case | Internal scripts, prototyping | Multi-tenant SaaS products |
| Setup effort | Copy from Pipedrive settings | Register app in Developer Hub, implement auth flow |
| Token lifetime | Doesn't expire (until rotated) | Access token expires; refresh token required |
| Scopes | Full access to user's data | Fine-grained per-resource scopes |
| Marketplace listing | Not eligible | Required |

Personal API Tokens are static strings passed as a query parameter (e.g., ?api_token=YOUR_TOKEN). They're fine for internal scripts or single-tenant data exports, but entirely unacceptable for a multi-tenant B2B SaaS product. You can't ask your enterprise customers to copy-paste API keys into your UI. It's a security risk, and if the user who generated the token leaves the company, your integration breaks silently.

OAuth 2.0 is the required path for commercial integrations. Using OAuth 2.0 is necessary for developing apps available in the Pipedrive Marketplace, and it provides fine-grained access via scopes. The authorization endpoint lives at https://oauth.pipedrive.com/oauth/authorize, and you exchange the authorization code for tokens at https://oauth.pipedrive.com/oauth/token.

The token refresh trap

The access token has a limited lifetime. After the period returned in expires_in, it becomes invalid and you must use the refresh_token. In practice, access tokens expire after about 1 hour. This is where production integrations break.

If you wait until a request fails with a 401 Unauthorized to refresh the token, you risk race conditions where multiple concurrent requests attempt to refresh the same token simultaneously, resulting in invalid_grant errors. And if your refresh logic fails silently — say, a network timeout at 3 AM — every API call for that customer's account will return 401s until someone manually re-authenticates.

Your options for handling this reliably:

  1. Proactive refresh — Check the token's expiration timestamp before every API call. If it expires in less than 60 seconds, trigger a refresh. Don't wait for a 401.
  2. Retry on 401 — If a call fails with 401, attempt a refresh and replay the request exactly once.
  3. Alert on failure — If the refresh itself fails, mark the account as needing re-authentication and notify the customer immediately.
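The first two options can be sketched as a small token manager with a single-flight guard, which prevents the concurrent-refresh race described earlier. This is a minimal sketch, not Pipedrive SDK code: refreshFn is an assumed callback that performs the actual POST to https://oauth.pipedrive.com/oauth/token and resolves to an object with accessToken and expiresIn.

```javascript
// Refresh 60 seconds before expiry, per option 1 above.
const REFRESH_SKEW_MS = 60 * 1000;

function isExpiringSoon(expiresAtMs, nowMs, skewMs = REFRESH_SKEW_MS) {
  return nowMs >= expiresAtMs - skewMs;
}

class TokenManager {
  constructor(refreshFn) {
    this.refreshFn = refreshFn; // assumed: async () => ({ accessToken, expiresIn })
    this.accessToken = null;
    this.expiresAtMs = 0;
    this.inflight = null; // single-flight guard
  }

  async getToken(nowMs = Date.now()) {
    if (this.accessToken && !isExpiringSoon(this.expiresAtMs, nowMs)) {
      return this.accessToken;
    }
    // Deduplicate concurrent refreshes: every caller awaits the same promise,
    // so only one refresh request hits the token endpoint.
    if (!this.inflight) {
      this.inflight = this.refreshFn()
        .then(({ accessToken, expiresIn }) => {
          this.accessToken = accessToken;
          this.expiresAtMs = Date.now() + expiresIn * 1000;
          return accessToken;
        })
        .finally(() => {
          this.inflight = null;
        });
    }
    return this.inflight;
  }
}
```

If the refreshFn promise rejects, the single-flight guard clears itself, so the caller's error handler is the right place to implement option 3 (mark the account and notify the customer).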

For a deeper look at building this at scale across dozens of CRMs, see our guide on architecting a scalable OAuth token management system.

The Token-Based Rate Limit (TBRL) System

This is the biggest architectural change to the Pipedrive API in years, and it will break integrations that aren't designed for it.

For existing accounts, the new rate limits were gradually rolled out beginning March 1st, 2025, with the process scheduled to complete by December 31st, 2025. If you're reading this in 2026, they apply to every Pipedrive account you integrate with.

How the token budget works

Instead of counting raw HTTP requests (the old model of ~40 requests per 2 seconds), Pipedrive now assigns a "token cost" to each endpoint based on its computational expense.

Each company account gets a daily API token budget shared among all users, which applies exclusively to API traffic authenticated by API tokens or OAuth tokens.

The formula: 30,000 base tokens × subscription plan multiplier × number of seats (plus any purchased top-ups).

Each API endpoint has a specific token cost based on its complexity. Lightweight endpoints consume fewer tokens, while data-intensive endpoints cost more. A simple GET /persons/{id} is cheap, but a broad GET /deals with filters and heavy include parameters could burn through tokens fast.
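The budget formula above is simple enough to express directly. In this sketch, the plan multiplier and seat count are illustrative inputs, not Pipedrive's published values:

```javascript
// Daily budget = 30,000 base tokens × plan multiplier × seats (+ top-ups).
const BASE_TOKENS = 30000;

function dailyTokenBudget(planMultiplier, seats, topUpTokens = 0) {
  return BASE_TOKENS * planMultiplier * seats + topUpTokens;
}

// e.g. a hypothetical plan with multiplier 2 and 10 seats:
// dailyTokenBudget(2, 10) === 600000
```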

Burst limits: the 2-second window

Burst rate limits prevent a large number of tokens from being consumed in a short period and apply at the individual user level within each company account, operating on a rolling 2-second window.

This is the part that catches teams off guard. Even if you have plenty of daily tokens left, hammering the API in a tight loop will trigger burst limits and return 429 Too Many Requests. If your application tries to sync 1,000 contacts simultaneously, you'll immediately hit the wall. And if you keep ignoring those 429s, Pipedrive will escalate to a 403 response from their CDN layer, producing an HTML error page.
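A minimal guard for the rolling 2-second window looks like this. Note the simplification: the cap here counts requests, while Pipedrive's burst limits are expressed in tokens, so a production version would track per-endpoint token costs instead.

```javascript
// Rolling-window burst guard: allow at most maxPerWindow sends
// inside any 2-second span, per integrated account/user.
class BurstWindow {
  constructor(maxPerWindow, windowMs = 2000) {
    this.maxPerWindow = maxPerWindow;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true (and records the send) if a request may go out at nowMs;
  // false means the caller should delay and try again.
  tryAcquire(nowMs = Date.now()) {
    const cutoff = nowMs - this.windowMs;
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
    if (this.timestamps.length >= this.maxPerWindow) return false;
    this.timestamps.push(nowMs);
    return true;
  }
}
```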

Architecting for TBRL

You can't rely on naive setTimeout retries. You need a centralized, durable queue that regulates outbound concurrency per integrated account.

flowchart TD
    A[API Request] --> B{Check remaining<br>daily tokens}
    B -->|Sufficient| C{Check burst<br>window}
    B -->|Low| D[Queue and<br>delay request]
    C -->|Clear| E[Execute request]
    C -->|Near limit| F[Add 2s delay]
    F --> E
    E --> G{Response}
    G -->|200 OK| H[Deduct token cost]
    G -->|429| I[Exponential backoff<br>with jitter]
    I --> A
    G -->|403 CDN block| J[Stop all requests<br>Alert ops team]

When you receive a 429, the response headers won't explicitly tell you exactly how many milliseconds to wait before the 2-second window clears. Implement exponential backoff with jitter: start with a 2-second delay, then 4 seconds, then 8 seconds. Add random jitter (e.g., ±500ms) to prevent the "thundering herd" problem where multiple queued jobs wake up at the exact same millisecond and immediately trigger another 429.
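The schedule just described (2s, 4s, 8s, plus ±500ms of jitter) can be computed with a small helper. The 60-second cap is an arbitrary choice for the sketch:

```javascript
// Exponential backoff with uniform jitter for 429 retries.
// attempt is zero-based: 0 → ~2s, 1 → ~4s, 2 → ~8s, capped at capMs.
function backoffDelayMs(
  attempt,
  { baseMs = 2000, jitterMs = 500, capMs = 60000, rand = Math.random } = {}
) {
  const exponential = Math.min(baseMs * 2 ** attempt, capMs);
  const jitter = (rand() * 2 - 1) * jitterMs; // uniform in [-jitterMs, +jitterMs]
  return Math.max(0, Math.round(exponential + jitter));
}
```

Injecting rand makes the function deterministic in tests; in production the default Math.random spreads queued jobs apart so they don't all retry in the same instant.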

Practical advice beyond backoff:

  • Read the rate limit headers on every response. The headers provide information on remaining token counts and reset times.
  • Use webhooks instead of polling. Frequent API polling can quickly deplete the daily token budget and lead to rate limiting. Pipedrive webhooks are free and don't consume tokens.
  • Migrate to v2 endpoints. Newer API v2 endpoints are often more efficient in both response times and token costs.
  • Cache aggressively. A caching layer to store previously retrieved data can significantly reduce API requests and conserve the daily token budget.

For more patterns on rate limit handling across providers, see our guide on best practices for handling API rate limits and retries.

The Custom Field Nightmare: Handling 40-Character Hashes

If rate limits are the biggest architectural hurdle, custom fields are the biggest data mapping headache.

In most CRMs, a custom field has a human-readable internal key. In Salesforce, a "Lead Source" field might be Lead_Source__c. In HubSpot, it might be lead_source.

In Pipedrive, all custom fields are referenced as randomly generated 40-character hashes, for example, dcf558aac1ae4e8c4f849ba5e668430d8df9be12. Not a human-readable slug like annual_revenue. A random hex string.

And here's the real problem: custom fields cannot be duplicated across Pipedrive accounts. Even if two customers create a field called "Annual Revenue" with the same field type, they will have completely different 40-character hashes.

When you fetch a Deal from the Pipedrive API, the payload looks like this:

{
  "success": true,
  "data": {
    "id": 1045,
    "title": "Enterprise Software Contract",
    "value": 50000,
    "currency": "USD",
    "dcf558aac1ae4e8c4f849ba5e668430d8df9be12": "SaaS",
  "8f9b12dcf558aac1ae4e8c4f849ba5e668430d8f": "Q3"
  }
}

Notice how the custom data is mixed directly into the root object alongside standard fields like title and value. These 40-character custom fields are not shown in Pipedrive's API Reference since they differ for each account, which means your documentation and your code both need to treat custom fields as entirely dynamic.

The dynamic mapping pattern

You cannot hardcode field keys. You must build a per-account field discovery step into your integration:

// Step 1: Fetch the customer's custom fields for deals
const response = await fetch(
  'https://api.pipedrive.com/api/v2/deals/fields',
  { headers: { 'Authorization': `Bearer ${accessToken}` } }
);
const fields = await response.json();
 
// Step 2: Build a name-to-key lookup
const fieldMap = {};
for (const field of fields.data) {
  fieldMap[field.name.toLowerCase()] = field.key;
}
 
// Step 3: Use the lookup when reading or writing deal data
const annualRevenueKey = fieldMap['annual revenue'];
const deal = await getDeal(dealId);
const annualRevenue = deal.data[annualRevenueKey]; // e.g., deal.data['dcf558aa...']

You need to cache this mapping and refresh it periodically — or when a 422 error suggests a field was deleted or renamed. If a customer renames or removes a custom field in Pipedrive, your cached schema will be stale and API calls will fail silently or throw errors. For a deeper dive into this problem across CRMs, see how to normalize data models across different CRMs.
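A sketch of that caching layer, with a TTL and explicit invalidation: fetchFieldMap stands in for the discovery call shown above, and the 15-minute TTL is an arbitrary choice, not a Pipedrive recommendation.

```javascript
function isFresh(fetchedAtMs, nowMs, ttlMs) {
  return nowMs - fetchedAtMs < ttlMs;
}

class FieldMapCache {
  constructor(fetchFieldMap, ttlMs = 15 * 60 * 1000) {
    this.fetchFieldMap = fetchFieldMap; // assumed: async (accountId) => { name: key }
    this.ttlMs = ttlMs;
    this.entries = new Map(); // accountId -> { map, fetchedAtMs }
  }

  // Return the cached field map for an account, refetching when stale.
  async get(accountId, nowMs = Date.now()) {
    const entry = this.entries.get(accountId);
    if (entry && isFresh(entry.fetchedAtMs, nowMs, this.ttlMs)) return entry.map;
    const map = await this.fetchFieldMap(accountId);
    this.entries.set(accountId, { map, fetchedAtMs: nowMs });
    return map;
  }

  // Call this when a 422 suggests the account's schema changed,
  // so the next get() re-runs discovery.
  invalidate(accountId) {
    this.entries.delete(accountId);
  }
}
```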

Pagination: Cursor vs. Offset

Pipedrive is in the middle of a pagination migration, which means you need to handle two different strategies depending on the endpoint.

In API v2, offset-based pagination (start and limit) has been replaced with cursor-based pagination (cursor and limit), which makes iterating over large collections significantly faster. Cursor pagination is more resilient because it prevents data duplication if records are inserted or deleted while you're paginating through the list.

| API Version | Pagination Type | Parameters | Next Page Signal |
| --- | --- | --- | --- |
| v2 (all list endpoints) | Cursor | cursor, limit | additional_data.next_cursor |
| v1 (collection endpoints) | Cursor | cursor, limit | additional_data.next_cursor |
| v1 (standard endpoints) | Offset | start, limit | additional_data.pagination.next_start |

For v1 endpoints that haven't migrated yet, offset-based pagination uses start and limit parameters, with the response including a pagination object containing more_items_in_collection and next_start.

// V1 Offset Pagination Response
{
  "success": true,
  "data": [ "..." ],
  "additional_data": {
    "pagination": {
      "start": 0,
      "limit": 100,
      "more_items_in_collection": true,
      "next_start": 100
    }
  }
}
// V2 Cursor Pagination Response
{
  "success": true,
  "data": [ "..." ],
  "additional_data": {
    "next_cursor": "eyJpZCI6MTA0NSwibWFya2VyIjoiMjAyNi0wMS0xNVQxMjozMDowMFoifQ=="
  }
}

The maximum limit value is 500 for both cursor and offset endpoints.

To build a resilient integration, write generic wrapper functions that abstract these differences away from your core business logic. Your application should just ask for "the next page of deals" without caring whether the underlying mechanism is an offset integer or a base64-encoded cursor string. If you're building new, prefer v2 cursor endpoints wherever available — they're faster, more stable with large datasets, and consume fewer tokens under TBRL. For more on handling this mismatch, see normalizing pagination across 50+ APIs.
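The generic wrapper idea above can be sketched as a pure helper plus an async generator. Here fetchPage is an assumed function that takes the query parameters and returns the parsed response body in either of the two shapes shown earlier:

```javascript
// Derive the parameters for the next request from a response's
// additional_data, in either pagination style; null means done.
function nextPageParams(additionalData = {}) {
  if (additionalData.next_cursor) {
    return { cursor: additionalData.next_cursor }; // v2 / v1 collection endpoints
  }
  const p = additionalData.pagination;
  if (p && p.more_items_in_collection) {
    return { start: p.next_start }; // v1 standard (offset) endpoints
  }
  return null;
}

// Iterate every record across all pages, hiding the cursor/offset split.
async function* listAll(fetchPage, limit = 500) {
  let params = { limit };
  while (params) {
    const res = await fetchPage(params); // assumed: async (params) => response body
    yield* res.data;
    const next = nextPageParams(res.additional_data);
    params = next ? { ...next, limit } : null;
  }
}
```

Core business logic then consumes listAll without knowing which API version served the data.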

How Truto Eliminates Pipedrive Integration Overhead

Let's be honest about the trade-offs. If Pipedrive is the only CRM you'll ever integrate with, and you have a dedicated integrations engineer with weeks of bandwidth, building directly against their API is perfectly reasonable. You'll have full control over every edge case.

But most B2B SaaS teams don't stop at one CRM. After Pipedrive, your customers will ask for Salesforce, HubSpot, Dynamics 365, Zoho, and a long tail of niche CRMs. Each one has its own authentication flow, rate limiting strategy, custom field model, and pagination style. Building a Pipedrive integration from scratch means writing thousands of lines of code dedicated purely to infrastructure: token refresh jobs, rate limit queues, custom field hash mappers, and pagination normalizers. This is undifferentiated heavy lifting that doesn't add unique value to your core product.

Truto's Unified CRM API gives you a single endpoint — GET /unified/crm/contacts?integrated_account_id=abc — that works identically across all supported CRMs. For Pipedrive specifically:

  • TBRL handled automatically — Built-in queuing respects both daily token budgets and the 2-second burst window. When Pipedrive returns a 429, the platform absorbs it, applies exponential backoff with jitter, and retries transparently. Your application never sees the 429.
  • Custom field hashes abstracted — The mapping layer discovers each customer's unique field definitions and translates them to a consistent schema. You interact with human-readable keys, and Truto's transformation layer translates them into the required 40-character hashes on the fly.
  • OAuth tokens refreshed proactively — The platform refreshes tokens before expiry — 60 to 180 seconds ahead — so connections never lapse. If a refresh fails, Truto fires a webhook event so your application can prompt the end user to re-authenticate.
  • Pagination normalized — Whether the underlying call uses cursor-based or offset-based pagination, your application gets a consistent next_cursor on every list response across v1 and v2 endpoints.

The entire mapping between Truto's unified schema and Pipedrive's native API is defined as data — JSON configuration and transformation expressions — not code. Adding support for a new Pipedrive endpoint or handling a v1-to-v2 migration doesn't require a code deployment. It's a configuration update.

What to Build Next

If you're starting a Pipedrive integration today, here's the priority order:

  1. Implement OAuth 2.0 with proactive token refresh. Never ship with API tokens in a multi-tenant product.
  2. Build for TBRL from day one. Add rate limit header parsing, exponential backoff, and a request queue. Don't retrofit this later.
  3. Use v2 endpoints wherever available. They're cheaper on tokens and have cursor pagination built in.
  4. Build dynamic custom field discovery. Cache per-account field mappings and refresh them periodically.
  5. Evaluate whether to build or buy. If Pipedrive is one of 5+ CRMs on your roadmap, the engineering cost of maintaining each integration independently is substantial. A unified API can compress months of work into days.

FAQ

How do Pipedrive's new token-based rate limits work?
Each Pipedrive company account gets a daily token budget calculated as 30,000 base tokens × plan multiplier × number of seats. Every API call consumes a specific number of tokens based on endpoint complexity. Burst limits also apply on a rolling 2-second window per user, and exceeding them returns a 429 that can escalate to a 403 CDN block.
Why are Pipedrive custom fields so hard to map?
Pipedrive assigns each custom field a random 40-character hex hash as its API key. These hashes differ across every Pipedrive account, even for identically named fields. You must dynamically discover field mappings per customer using the fields endpoint and cache the results.
Should I use Pipedrive API v1 or v2?
Use v2 endpoints wherever available. They offer cursor-based pagination, stricter input validation, RFC 3339 timestamps, and lower token costs under the new TBRL system. Fall back to v1 only for resources not yet available in v2.
How do I authenticate with the Pipedrive API for a multi-tenant SaaS app?
You must use OAuth 2.0 Authorization Code flow. Register your app in the Pipedrive Developer Hub, implement the redirect-based auth flow, and build proactive token refresh logic since access tokens expire after about one hour. API tokens are only suitable for single-user internal scripts.
What is the maximum pagination limit in the Pipedrive API?
The maximum limit value is 500 items per page for both cursor-based (v2 and v1 collection endpoints) and offset-based (v1 standard endpoints) pagination.
