This walkthrough reads a CSV of contacts, converts it to the format Supersonic expects, and imports them using records.bulk_create. The tool accepts up to 100 records per call.

Prerequisites

You need a Contacts object type in your workspace. If you don’t have one, create it first:
npx supersonic-cli objects create \
  --name "Contacts" \
  --slug "contacts" \
  --fields '[
    {"name": "Name", "field_type": "text", "required": true},
    {"name": "Email", "field_type": "text"},
    {"name": "Company", "field_type": "text"},
    {"name": "Title", "field_type": "text"},
    {"name": "Phone", "field_type": "text"}
  ]'

Prepare your CSV

Your CSV should have headers that match your field names. Example contacts.csv:
Name,Email,Company,Title,Phone
Jane Doe,jane@acme.com,Acme Corp,VP Sales,+1-555-0101
Bob Smith,bob@globex.com,Globex Inc,CTO,+1-555-0102
Alice Chen,alice@initech.com,Initech,Head of Product,+1-555-0103
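Headers must match the field names on your object type exactly, including capitalization, so it can help to sanity-check them before importing. A minimal sketch (the helper name `check_headers` is illustrative, not part of the CLI or API):

```python
import csv

# Field names defined on the Contacts object type in the Prerequisites step.
EXPECTED_FIELDS = {"Name", "Email", "Company", "Title", "Phone"}

def check_headers(path: str) -> set[str]:
    """Return any CSV headers that don't exactly match a known field name."""
    with open(path, newline="", encoding="utf-8") as f:
        headers = set(next(csv.reader(f)))
    return headers - EXPECTED_FIELDS
```

For example, a lowercase `email` header would be reported as a mismatch, since the field on the object type is `Email`.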

Python import script

This script reads the CSV and sends batches of 100 to the API.
import csv
import json
import httpx

API_URL = "https://mcp.supersonic.cv/api/developers/mcp/call/"
API_KEY = "supersonic_live_YOUR_KEY"
OBJECT_TYPE_SLUG = "contacts"
BATCH_SIZE = 100

def load_csv(path: str) -> list[dict]:
    """Read the CSV into a list of {header: value} dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def import_batch(client: httpx.Client, records: list[dict]) -> dict:
    """Send one batch (at most 100 rows) to records.bulk_create."""
    resp = client.post(
        API_URL,
        json={
            "tool": "records.bulk_create",
            "params": {
                "object_type_slug": OBJECT_TYPE_SLUG,
                "records": [{"data": record} for record in records],
            },
        },
    )
    resp.raise_for_status()
    return resp.json()

def main():
    rows = load_csv("contacts.csv")
    print(f"Loaded {len(rows)} contacts from CSV")

    client = httpx.Client(
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        timeout=30.0,
    )

    imported = 0
    for i in range(0, len(rows), BATCH_SIZE):
        batch = rows[i : i + BATCH_SIZE]
        result = import_batch(client, batch)
        created = result.get("created", len(batch))
        imported += created
        print(f"Batch {i // BATCH_SIZE + 1}: imported {created} records")

    print(f"Done. {imported} total contacts imported.")

if __name__ == "__main__":
    main()
Run it:
pip install httpx
python import_contacts.py

CLI version

If your data is already in a JSON file, you can use the CLI directly.
1. Convert CSV to JSON

Create a file called contacts.json:
[
  {"data": {"Name": "Jane Doe", "Email": "jane@acme.com", "Company": "Acme Corp", "Title": "VP Sales", "Phone": "+1-555-0101"}},
  {"data": {"Name": "Bob Smith", "Email": "bob@globex.com", "Company": "Globex Inc", "Title": "CTO", "Phone": "+1-555-0102"}},
  {"data": {"Name": "Alice Chen", "Email": "alice@initech.com", "Company": "Initech", "Title": "Head of Product", "Phone": "+1-555-0103"}}
]
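Rather than writing contacts.json by hand, you can generate it from the same CSV. A short sketch (the helper name `csv_to_json` is illustrative) that wraps each row in the `{"data": {...}}` shape records.bulk_create expects:

```python
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> int:
    """Wrap each CSV row as {"data": {...}} and write a JSON array."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        records = [{"data": row} for row in csv.DictReader(f)]
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)
    return len(records)

# csv_to_json("contacts.csv", "contacts.json")
```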
2. Run the bulk create

npx supersonic-cli records bulk-create \
  --object-type-slug "contacts" \
  --file contacts.json
3. Verify the import

npx supersonic-cli records list --object-type-slug "contacts" --limit 10
Note: records.bulk_create accepts a maximum of 100 records per call. For larger imports, split into batches as the Python script does.

Note: Field names in your CSV must match the field names on your object type exactly, including capitalization. If your CSV has email but the field is Email, the data won't map.

Handling duplicates

records.bulk_create does not deduplicate: running the import twice creates duplicate records. To avoid this, query existing records first and skip any rows whose email already exists (this example fetches up to the first 1,000 existing records):
def get_existing_emails(client: httpx.Client) -> set[str]:
    """Collect the Email values of records already in the workspace."""
    resp = client.post(
        API_URL,
        json={
            "tool": "records.list",
            "params": {
                "object_type_slug": OBJECT_TYPE_SLUG,
                "limit": 1000,
            },
        },
    )
    resp.raise_for_status()
    records = resp.json().get("records", [])
    return {r["data"].get("Email") for r in records if r["data"].get("Email")}

# In main(), before importing:
existing = get_existing_emails(client)
rows = [r for r in rows if r.get("Email") not in existing]
print(f"{len(rows)} new contacts to import (skipped duplicates)")
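The check above only guards against records already in the workspace; two rows sharing an email within the CSV itself would still both be created. A small helper (the name `dedupe_rows` is illustrative) can drop in-file duplicates before importing:

```python
def dedupe_rows(rows: list[dict], key: str = "Email") -> list[dict]:
    """Keep only the first row for each distinct value of `key`."""
    seen: set[str] = set()
    unique = []
    for row in rows:
        value = row.get(key)
        if value in seen:
            continue  # a row with this email was already kept
        if value is not None:
            seen.add(value)
        unique.append(row)
    return unique
```

Run it on `rows` before the existing-records check, so both kinds of duplicates are filtered.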