The Coda REST API lets external code — Python scripts, Node servers, Zapier, webhooks — read and write any table in your doc. Once you understand the ID system and authentication, the rest follows naturally.
Coda is built around tables. The REST API exposes those tables as standard HTTP endpoints — so any code that can make an HTTP request can read rows, create rows, update cells, and delete records in your Coda doc. This turns Coda into a lightweight, no-ops database that non-developers can manage in the UI while developers sync data to and from it programmatically.
External system fires a webhook → your code calls the Coda API → a new row appears in Coda. Non-technical teammates see results without touching code.
Script reads your production database and pushes records into Coda each night. Coda becomes the reviewable, filterable record your team actually works from.
Create or update thousands of rows from a script. Handle complex transforms, multi-step API calls, and scheduled jobs that Coda's built-in automations can't do alone.
The base URL for every endpoint is https://coda.io/apis/v1. All responses are JSON. Standard HTTP verbs apply: GET reads, POST creates, PUT updates, DELETE removes.
All Coda API requests require authentication via a personal API token. Generate one: Coda avatar → Account Settings → API Settings → Generate API token. Copy and store it securely — you won't see the value again after closing the dialog.
Pass the token as a Bearer token in the Authorization header of every request:
// HTTP header required on every API request
Authorization: Bearer YOUR_API_KEY_HERE

// curl example
curl -H "Authorization: Bearer abc123yourtoken" \
  https://coda.io/apis/v1/whoami

# Python (requests library)
import requests
headers = {"Authorization": f"Bearer {API_KEY}"}
response = requests.get("https://coda.io/apis/v1/whoami", headers=headers)
Your API token grants full read/write access to all docs in your account. Store it in an environment variable (CODA_API_KEY), never in code or version control.
Create separate tokens per integration for easy revocation. If one integration is compromised, revoke only that token without affecting others.
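Following the storage advice above, a minimal sketch of loading the token from the CODA_API_KEY environment variable (the helper name is illustrative):

```python
import os

def load_coda_headers(env_var="CODA_API_KEY"):
    """Build the auth header from an environment variable, failing fast if unset."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"Set the {env_var} environment variable before running.")
    return {"Authorization": f"Bearer {token}"}
```

Failing fast with a clear message beats a cryptic 401 from the API when the variable is missing.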
Every API call requires IDs for the doc, the table, and sometimes the column. The Coda API uses opaque string IDs, not names. Here's how to find each one:
// 1. Doc ID — extract from the URL
//    https://coda.io/d/My-Doc_dABCDEFGhij → Doc ID = "ABCDEFGhij"
//    The part after "_d" in the URL (before any slash)

// 2. List all tables → get table IDs
GET https://coda.io/apis/v1/docs/{docId}/tables
// Response: { "items": [{ "id": "grid-abc123", "name": "Tasks" }] }

// 3. List columns in a table → get column IDs
GET https://coda.io/apis/v1/docs/{docId}/tables/{tableId}/columns
// Response: { "items": [{ "id": "c-xyz789", "name": "Status" }] }
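The two list endpoints above can be wrapped in a small discovery script, assuming the requests library and the Bearer-token headers from the authentication section (function names are illustrative):

```python
import requests

BASE = "https://coda.io/apis/v1"

def tables_url(doc_id):
    return f"{BASE}/docs/{doc_id}/tables"

def columns_url(doc_id, table_id):
    return f"{BASE}/docs/{doc_id}/tables/{table_id}/columns"

def print_ids(doc_id, headers):
    """Print every table and its columns as name -> ID pairs,
    ready to paste into a script as constants."""
    tables = requests.get(tables_url(doc_id), headers=headers).json()["items"]
    for t in tables:
        print(f"{t['name']}: {t['id']}")
        cols = requests.get(columns_url(doc_id, t['id']), headers=headers).json()["items"]
        for c in cols:
            print(f"  {c['name']}: {c['id']}")
```

Run this once per doc, record the IDs, and hard-code them: IDs are stable even when someone renames a table or column in the UI.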
To read rows from a table, make a GET request to the rows endpoint. The response contains an array of row objects. Each row has an id, a name (the display column value), and a values object mapping column IDs to their cell values.
// Request
GET https://coda.io/apis/v1/docs/{docId}/tables/{tableId}/rows

// Response structure
{
  "items": [
    {
      "id": "i-row123",
      "name": "Write API documentation",
      "values": {
        "c-colStatus": "In Progress",
        "c-colDueDate": "2024-01-30",
        "c-colAssignee": "Alice"
      }
    }
  ],
  "nextPageToken": "eyJ..."  // present when more rows remain beyond this page
}
Add ?query=columnId:value to filter rows server-side — only matching rows are returned. String values must be wrapped in double quotes inside the query expression (and then URL-encoded). Add ?useColumnNames=true to reference column names instead of IDs in the query.
// Exact match on a column value (using column ID)
// Unencoded form: query=c-colStatus:"In Progress"
GET .../rows?query=c-colStatus:%22In%20Progress%22

// Use column names instead of IDs
GET .../rows?useColumnNames=true&query=Status:%22Done%22

// Pagination: max 500 rows per request
GET .../rows?limit=500&pageToken=eyJ0eXBlIjoicm93...
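In Python, the requests library handles the URL-encoding for you when the filter is passed via params. A sketch, with illustrative function names and the JSON-style quoting of string values applied in the query expression:

```python
import requests

BASE = "https://coda.io/apis/v1"

def build_row_params(column, value, limit=500, page_token=None):
    """Assemble query params for a filtered row fetch; string values
    are quoted inside the query expression."""
    params = {
        "query": f'{column}:"{value}"',
        "useColumnNames": "true",
        "limit": limit,
    }
    if page_token:
        params["pageToken"] = page_token
    return params

def fetch_matching_rows(doc_id, table_id, headers, column, value):
    url = f"{BASE}/docs/{doc_id}/tables/{table_id}/rows"
    r = requests.get(url, headers=headers,
                     params=build_row_params(column, value))
    r.raise_for_status()
    return r.json()["items"]
```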
To create a new row, POST to the rows endpoint with a JSON body. The body contains a rows array — you can create one or multiple rows in a single request. Each row is defined by its cells: an array of column/value pairs keyed by column ID.
// POST https://coda.io/apis/v1/docs/{docId}/tables/{tableId}/rows
// Request body — create one row
{
  "rows": [
    {
      "cells": [
        { "column": "c-colName", "value": "New task from API" },
        { "column": "c-colStatus", "value": "To Do" },
        { "column": "c-colDueDate", "value": "2024-02-15" }
      ]
    }
  ]
}

// Create multiple rows in one request — add more objects to rows[]
{
  "rows": [
    { "cells": [{ "column": "c-colName", "value": "Task A" }] },
    { "cells": [{ "column": "c-colName", "value": "Task B" }] },
    { "cells": [{ "column": "c-colName", "value": "Task C" }] }
  ]
}

// Response: 202 Accepted — row creation is asynchronous
// Response body contains IDs of the newly created rows
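The cells array is verbose to build by hand, so scripts usually generate it from plain dicts. A sketch in Python with the requests library (helper names are illustrative):

```python
import requests

BASE = "https://coda.io/apis/v1"

def rows_payload(rows):
    """Convert a list of {column: value} dicts into the API's rows/cells shape."""
    return {"rows": [
        {"cells": [{"column": col, "value": val} for col, val in row.items()]}
        for row in rows
    ]}

def create_rows(doc_id, table_id, headers, rows):
    url = f"{BASE}/docs/{doc_id}/tables/{table_id}/rows"
    r = requests.post(url, headers=headers, json=rows_payload(rows))
    r.raise_for_status()  # expect 202 Accepted
    return r.json()
```

For example, `create_rows(DOC_ID, TABLE_ID, HEADERS, [{"c-colName": "Task A"}, {"c-colName": "Task B"}])` creates two rows in one request.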
To update a row, send a PUT to that specific row's endpoint. Only include the cells you want to change — other cells are left untouched. The row ID comes from the GET rows response.
// PUT .../docs/{docId}/tables/{tableId}/rows/{rowId}
{
  "row": {
    "cells": [
      { "column": "c-colStatus", "value": "Done" },
      { "column": "c-colCompleted", "value": "2024-01-25" }
    ]
  }
}
// Response: 202 Accepted
An upsert creates a new row if it doesn't exist, or updates the existing row if it does — determined by a key column you designate. This avoids duplicates when syncing data repeatedly from an external source. It's idempotent and safe to run multiple times.
// Same endpoint as create; add keyColumns to the query string
POST .../rows?keyColumns=c-colExternalId
{
  "rows": [{
    "cells": [
      { "column": "c-colExternalId", "value": "CRM-12345" },
      { "column": "c-colName", "value": "Acme Corp" },
      { "column": "c-colStatus", "value": "Active" }
    ]
  }]
}
// Row with External ID "CRM-12345" exists → UPDATE
// No such row → CREATE. Safe to run repeatedly.
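The upsert call above can be sketched in Python with the requests library, mirroring the keyColumns query parameter from the example (function and helper names are illustrative):

```python
import requests

BASE = "https://coda.io/apis/v1"

def upsert_body(rows):
    """Same rows/cells shape as a plain create."""
    return {"rows": [
        {"cells": [{"column": col, "value": val} for col, val in row.items()]}
        for row in rows
    ]}

def upsert_rows(doc_id, table_id, headers, rows, key_column):
    """POST to the create endpoint with keyColumns so reruns update
    matching rows instead of creating duplicates."""
    url = f"{BASE}/docs/{doc_id}/tables/{table_id}/rows"
    r = requests.post(url, headers=headers,
                      params={"keyColumns": key_column},
                      json=upsert_body(rows))
    r.raise_for_status()  # expect 202 Accepted
    return r.json()
```

Because the key column decides matching, make sure it holds a value that is unique and stable in the source system (like the CRM ID above).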
// Delete one row — row ID in the URL, no body
DELETE .../docs/{docId}/tables/{tableId}/rows/{rowId}

// Delete multiple rows in one request — DELETE on the rows
// endpoint, with the row IDs in the body
DELETE .../docs/{docId}/tables/{tableId}/rows
{ "rowIds": ["i-row123", "i-row456", "i-row789"] }
Here's a complete working Python script that fetches all rows from a Tasks table — handling pagination — and prints any overdue tasks: rows where Status is not "Done" and Due Date is in the past.
import requests
from datetime import date

API_KEY = "your_api_key_here"  # use os.environ in production
DOC_ID = "your_doc_id_here"
TABLE_ID = "grid-your_table_id"

HEADERS = {"Authorization": f"Bearer {API_KEY}"}
BASE = "https://coda.io/apis/v1"

def get_all_rows(doc_id, table_id):
    rows, token = [], None
    while True:
        params = {"limit": 500, "useColumnNames": True}
        if token:
            params["pageToken"] = token
        r = requests.get(
            f"{BASE}/docs/{doc_id}/tables/{table_id}/rows",
            headers=HEADERS, params=params)
        r.raise_for_status()
        data = r.json()
        rows.extend(data["items"])
        token = data.get("nextPageToken")
        if not token:
            break
    return rows

rows = get_all_rows(DOC_ID, TABLE_ID)
today = date.today().isoformat()

print("Overdue tasks:")
for row in rows:
    vals = row["values"]
    status = vals.get("Status", "")
    due_date = vals.get("Due Date", "")
    if status != "Done" and due_date and due_date < today:
        print(f"  - {row['name']} (due {due_date})")
Passing useColumnNames=true in the request means the values object uses human-readable names as keys ("Status", "Due Date") instead of column IDs ("c-xyz123"). This makes scripts easier to read, but remember that column names can change — column IDs never do.
The Coda API enforces rate limits of 10 requests per second and 100 requests per minute. Exceeding these returns a 429 Too Many Requests response. Your code must handle 429s with exponential backoff — wait 2^n seconds before retrying, where n is the attempt number (starting at 0).
import time, requests

def api_request_with_retry(url, headers, params=None, max_retries=5):
    for attempt in range(max_retries):
        r = requests.get(url, headers=headers, params=params)
        if r.status_code == 200:
            return r.json()
        elif r.status_code == 429:
            wait = 2 ** attempt  # 1s → 2s → 4s → 8s → 16s
            print(f"Rate limited. Waiting {wait}s (attempt {attempt+1}/{max_retries})")
            time.sleep(wait)
        else:
            r.raise_for_status()
    raise Exception("Max retries exceeded")
Create up to 500 rows in a single POST. One API call instead of 500. Always batch write operations — it's the single biggest optimization available.
Fetch column IDs once and store them as constants. Don't re-fetch on every run — it wastes calls and slows scripts. Column IDs never change.
For any recurring sync job, always use upsert (keyColumns) rather than create. It's idempotent — safe to run multiple times without creating duplicates.
Never assume your table has fewer than 500 rows. Always implement the nextPageToken loop — even if the table is small today, it may grow.
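The batching advice above can be sketched with a small helper that splits a large row list into 500-row POSTs (the helper name is illustrative; one request per batch instead of one per row):

```python
def chunk_rows(rows, size=500):
    """Split a long row list into batches of at most `size`,
    matching the 500-rows-per-request cap."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

# Usage sketch: one POST per batch of up to 500 rows
# for batch in chunk_rows(all_rows):
#     requests.post(rows_url, headers=HEADERS, json={"rows": batch})
```

Writing 1,200 rows this way takes 3 API calls instead of 1,200, which also keeps you well under the rate limits discussed earlier.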