Celery Task Monitoring: Know When Your Workers Go Silent

Celery is great at running tasks. It's silent when they stop. Add monitoring in two lines and know immediately when something goes wrong — whether a task didn't run, took too long, or ran but processed nothing.

What to monitor in Celery

Flower and Prometheus/Grafana are great for real-time dashboards. But they don't alert you when a periodic task simply stops running. For that you need a dead man's switch: the task pings a URL when it completes, and you're alerted if the ping doesn't arrive.
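Conceptually, the server side of a dead man's switch is just a timestamp comparison. A minimal sketch of the idea (not DeadManCheck's actual implementation; the grace period is an assumed parameter):

```python
import time

def is_overdue(last_ping_ts, expected_interval_s, grace_s=60):
    # Alert when no ping has arrived within the expected interval
    # plus a small grace period for clock skew and slow runs.
    return time.time() - last_ping_ts > expected_interval_s + grace_s
```

The grace period matters in practice: without it, a task that finishes a few seconds late would trigger false alerts on every run.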

The three things worth monitoring per periodic task:

- That it ran at all (a heartbeat ping at the end of the task)
- How long it took (start/end pings around the work)
- Whether it actually processed anything (a count sent with the ping)

Integrating DeadManCheck with Celery

The simplest approach: ping at the end of each periodic task. Create one monitor per task at deadmancheck.io, set the expected interval to match your Beat schedule, and add the ping.

Basic ping (heartbeat)

# tasks.py
import requests
from celery import shared_task

@shared_task
def sync_orders():
    orders = fetch_orders_from_api()
    save_to_db(orders)

    # Ping DeadManCheck on success
    requests.get(
        "https://deadmancheck.io/ping/YOUR-TOKEN",
        params={"count": len(orders)},
        timeout=5
    )
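The matching Beat entry might look like this (the schedule name and the hourly crontab are illustrative). Whatever interval you choose here is the expected interval to configure on the DeadManCheck monitor:

```python
# celeryconfig.py
from celery.schedules import crontab

beat_schedule = {
    "sync-orders-hourly": {
        "task": "tasks.sync_orders",
        # Runs at the top of every hour — set the monitor's
        # expected interval to 1 hour to match.
        "schedule": crontab(minute=0),
    },
}
```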

With start/end and error handling

import os, requests
from celery import shared_task

TOKEN = os.environ["DEADMANCHECK_TOKEN"]
BASE = f"https://deadmancheck.io/ping/{TOKEN}"

def _ping(path="", count=None):
    try:
        params = {"count": count} if count is not None else {}
        requests.get(f"{BASE}{path}", params=params, timeout=5)
    except Exception:
        pass  # never let monitoring break the task

@shared_task(bind=True, max_retries=3)
def nightly_etl(self):
    _ping("/start")
    try:
        count = run_etl_pipeline()
        _ping(count=count)
    except Exception as exc:
        _ping("/fail")
        raise self.retry(exc=exc)

Reusable decorator for all periodic tasks

import functools, os, requests

def monitor(token):
    base = f"https://deadmancheck.io/ping/{token}"

    def _ping(path=""):
        try:
            requests.get(f"{base}{path}", timeout=5)
        except Exception:
            pass  # never let monitoring break the task

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            _ping("/start")
            try:
                result = fn(*args, **kwargs)
                _ping()
                return result
            except Exception:
                _ping("/fail")
                raise
        return wrapper
    return decorator

@shared_task  # keep @shared_task outermost so Celery registers the monitored wrapper
@monitor(os.environ["SYNC_TOKEN"])
def sync_orders():
    ...

Output assertions for Celery tasks

Celery reports task success/failure based on whether the function raised an exception. It has no concept of "this task ran but did nothing useful."

Send a count with your ping and configure an output assertion in DeadManCheck: "alert if count is 0". Now a Celery task that runs but processes zero records triggers an alert — even though Celery says SUCCESS.
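Conceptually, the assertion is a threshold check evaluated server-side when the ping arrives. A sketch of the idea (not DeadManCheck's actual code):

```python
def violates_assertion(count, min_count=1):
    # "alert if count is 0" — fire when the reported count
    # falls below the configured minimum. A ping with no count
    # skips the check entirely.
    return count is not None and count < min_count
```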

See how output assertions work →

Start monitoring free — no credit card needed

Free for 5 monitors. $12/mo for 100. See pricing →