This script calls the `analytics.pipeline` tool to get stage counts and total values, formats a readable summary, and posts it to a Slack channel via an incoming webhook. Run it daily with cron or as a scheduled GitHub Action.
## The script
```python
#!/usr/bin/env python3
"""
Daily pipeline digest — posts pipeline summary to Slack.
Reads SUPERSONIC_API_KEY, SLACK_WEBHOOK_URL, and PIPELINE_LIST_ID from env.
"""
import os

import httpx

API_URL = "https://mcp.supersonic.cv/api/developers/mcp/call/"
API_KEY = os.environ["SUPERSONIC_API_KEY"]
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]
PIPELINE_LIST_ID = os.environ["PIPELINE_LIST_ID"]


def get_pipeline_analytics() -> dict:
    resp = httpx.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "tool": "analytics.pipeline",
            "params": {"list_id": PIPELINE_LIST_ID},
        },
        timeout=15.0,
    )
    resp.raise_for_status()
    return resp.json()


def format_digest(data: dict) -> str:
    stages = data.get("stages", [])
    total_count = sum(s.get("count", 0) for s in stages)
    total_value = sum(s.get("total_value", 0) for s in stages)
    lines = [f"*Pipeline Digest* — {total_count} deals, ${total_value:,.0f} total\n"]
    for stage in stages:
        name = stage.get("name", "Unknown")
        count = stage.get("count", 0)
        value = stage.get("total_value", 0)
        lines.append(f"  {name}: {count} deals (${value:,.0f})")
    return "\n".join(lines)


def post_to_slack(message: str):
    resp = httpx.post(
        SLACK_WEBHOOK_URL,
        json={"text": message},
        timeout=10.0,
    )
    resp.raise_for_status()


def main():
    data = get_pipeline_analytics()
    message = format_digest(data)
    post_to_slack(message)
    print("Digest posted to Slack.")


if __name__ == "__main__":
    main()
```
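The formatting logic is easy to sanity-check offline. The snippet below runs the same aggregation as `format_digest` on an illustrative `analytics.pipeline` payload; the field names match what the function reads, but the values are invented for the example:

```python
# Illustrative analytics.pipeline payload: values are invented for the example.
sample = {
    "stages": [
        {"name": "Prospecting", "count": 12, "total_value": 48000},
        {"name": "Negotiation", "count": 4, "total_value": 91000},
        {"name": "Closed Won", "count": 2, "total_value": 30000},
    ]
}

# Same aggregation as format_digest() in the script above.
stages = sample["stages"]
total_count = sum(s["count"] for s in stages)
total_value = sum(s["total_value"] for s in stages)
header = f"*Pipeline Digest* — {total_count} deals, ${total_value:,.0f} total"
print(header)  # *Pipeline Digest* — 18 deals, $169,000 total
for s in stages:
    print(f"  {s['name']}: {s['count']} deals (${s['total_value']:,.0f})")
```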
## Setup

Set the environment variables:

```bash
export SUPERSONIC_API_KEY="supersonic_live_YOUR_KEY"
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/T00000/B00000/XXXX"
export PIPELINE_LIST_ID="your-pipeline-list-id"
```
To find your `PIPELINE_LIST_ID`, run:

```bash
npx supersonic-cli lists list
```
## Test it

```bash
python pipeline_digest.py
```

You should see the summary in your Slack channel.

## Schedule with cron

Run daily at 9 AM:

```bash
0 9 * * * SUPERSONIC_API_KEY=supersonic_live_YOUR_KEY SLACK_WEBHOOK_URL=https://hooks.slack.com/services/... PIPELINE_LIST_ID=your-id /usr/bin/python3 /path/to/pipeline_digest.py
```
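Inlining secrets in the crontab works, but they then show up in `ps` output and crontab backups. One common alternative (a sketch; the file path is a placeholder) is to keep them in a permission-restricted env file and source it from the cron entry:

```shell
# /etc/pipeline-digest.env (chmod 600, owned by the cron user; placeholder path)
export SUPERSONIC_API_KEY=supersonic_live_YOUR_KEY
export SLACK_WEBHOOK_URL=https://hooks.slack.com/services/T00000/B00000/XXXX
export PIPELINE_LIST_ID=your-pipeline-list-id

# crontab entry: source the env file, then run the script
# 0 9 * * * . /etc/pipeline-digest.env && /usr/bin/python3 /path/to/pipeline_digest.py
```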
## GitHub Actions version
```yaml
# .github/workflows/pipeline-digest.yml
name: Daily Pipeline Digest

on:
  schedule:
    - cron: '0 9 * * *'  # 9 AM UTC daily
  workflow_dispatch: {}

jobs:
  digest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install httpx
      - run: python pipeline_digest.py
        env:
          SUPERSONIC_API_KEY: ${{ secrets.SUPERSONIC_API_KEY }}
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          PIPELINE_LIST_ID: ${{ secrets.PIPELINE_LIST_ID }}
```
The analytics.pipeline tool returns aggregate data, not individual records. It counts toward the 1,000 calls/min rate limit like any other call.
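If the digest runs alongside other jobs that share that 1,000 calls/min budget, an occasional burst can hit the limit. A small generic retry helper with exponential backoff keeps the cron run from failing on a transient rejection. This is a sketch, not part of any Supersonic SDK; in practice you would catch `httpx.HTTPStatusError` and check for status 429 before retrying:

```python
import time


def call_with_retry(do_call, retryable=(Exception,), max_attempts=3, base_delay=1.0):
    """Call do_call(), retrying on retryable exceptions with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return do_call()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the error propagate
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...


# e.g. data = call_with_retry(get_pipeline_analytics,
#                             retryable=(httpx.HTTPStatusError,))
```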
## Extending the digest
Add more data by calling additional tools. For example, pull recently closed deals:
```python
def get_recent_closed() -> list:
    """Fetch deals currently in the Closed Won stage."""
    resp = httpx.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "tool": "lists.entries",
            "params": {
                "list_id": PIPELINE_LIST_ID,
                "filters": {"Stage": "Closed Won"},
            },
        },
        timeout=15.0,
    )
    resp.raise_for_status()
    return resp.json().get("entries", [])
```
Append the closed deals to the Slack message for a complete daily summary.
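The appending step can be sketched like this; the `name` and `value` field names on entries are assumptions, so adjust them to whatever `lists.entries` actually returns for your list:

```python
def format_closed(entries: list) -> str:
    """Render a 'Recently closed' section for the Slack message."""
    if not entries:
        return ""
    lines = ["\n*Recently closed:*"]
    for e in entries:
        # Field names here are assumptions about the lists.entries entry shape.
        name = e.get("name", "Unknown")
        value = e.get("value", 0)
        lines.append(f"  {name} (${value:,.0f})")
    return "\n".join(lines)


# In main(): message = format_digest(data) + format_closed(get_recent_closed())
```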