Duration Monitoring

Your cron job didn't fail.
It just took 45 minutes instead of 2.

Most tools only check if your job ran. They don't check how long it took. DeadManCheck learns your job's normal runtime and alerts when it drifts — automatically, no thresholds to configure.

Start free →  ·  See quickstart
Standard monitoring
✓ Job ran at 02:00
✓ Exit code 0
✓ Ping received
Runtime: 4h 12m  ·  normally 90 seconds
Alert sent: no

DeadManCheck
✓ Job ran at 02:00
✓ Exit code 0
✓ Ping received
Runtime: 4h 12m  ·  baseline 90s
⚠ Duration anomaly — alert sent

The failure mode nobody talks about

Your nightly database backup normally takes 90 seconds. Last night it took 4 hours. It eventually finished — so exit code 0, ping sent, all monitoring dashboards green.

But something was clearly wrong. A slow disk. A table lock. A network bottleneck. The job degraded, silently, while every monitoring tool you have told you everything was fine.

By the time you noticed, three more backups had run equally slowly, your retention window had drifted, and you had no idea when the problem started.

Standard cron monitoring
✓ Job ran on schedule
✓ Exit code 0
✓ Ping received
Took 4 hours instead of 90 seconds? Silent.

DeadManCheck
✓ Job ran on schedule
✓ Exit code 0
✓ Ping received
⚠ Duration 4h — baseline is 90s → alert sent

How duration monitoring works

1. Wrap your job
Send a /start ping before your job runs and an /end ping when it completes.

2. Baseline is learned
DeadManCheck tracks your job's runtime over time and builds a rolling average — no manual config required.

3. Anomalies are flagged
When a run significantly exceeds the baseline, or your hard max duration, you get alerted immediately.

Setup — two extra curl calls

# Bash
curl -s -m 10 https://deadmancheck.io/ping/your-token/start
# your job runs here
your-backup-script.sh
curl -s -m 10 https://deadmancheck.io/ping/your-token/end

# Python
import requests
requests.get("https://deadmancheck.io/ping/your-token/start", timeout=10)
run_your_job()
requests.get("https://deadmancheck.io/ping/your-token/end", timeout=10)
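If your job can crash mid-run, it's worth sending the /end ping only on a clean exit — otherwise a failed run can look like a fast, healthy one. A minimal Python sketch of that pattern (the `monitored` helper and the token in the URL are illustrative, not part of the DeadManCheck API; the demo uses a recording stub instead of a real HTTP call):

```python
import contextlib
from typing import Callable, List

# Hypothetical placeholder — substitute your own monitor URL.
PING_BASE = "https://deadmancheck.io/ping/your-token"

@contextlib.contextmanager
def monitored(ping: Callable[[str], None]):
    """Ping /start before the job, and /end only on clean completion.

    If the job body raises, no /end is sent, so the run never
    "completes" and the usual missed-ping alerting can still fire.
    """
    ping(PING_BASE + "/start")
    yield
    ping(PING_BASE + "/end")  # skipped if the job raised

# Demo with a list-recording stub instead of a real HTTP call:
sent: List[str] = []
with monitored(sent.append):
    pass  # your job would run here
print(sent)  # start and end pings, in order
```

In real use you'd pass something like `lambda url: requests.get(url, timeout=10)` as the ping function.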

Catch cron jobs that are getting slower

Whether you're searching for "how to detect slow cron jobs", "alert when cron job duration increases", or "cron job taking too long to complete" — this is the page.

Runtime degradation is one of the hardest infrastructure problems to catch because it happens gradually. A job that used to take 30 seconds now takes 5 minutes. No crash, no error, no alert — just a slow creep that compounds over days until something downstream breaks.

DeadManCheck solves this by building a rolling baseline from your job's real execution history. When a run deviates significantly — whether it's a one-off spike or a gradual drift — you get alerted. No manual thresholds to set or maintain.
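The general idea behind a learned baseline can be illustrated in a few lines of Python — an exponential moving average plus a deviation factor. This is a toy sketch of the technique, not DeadManCheck's actual algorithm; `alpha` and `factor` are made-up parameters:

```python
class DurationBaseline:
    """Toy rolling-baseline anomaly detector (illustrative only)."""

    def __init__(self, alpha: float = 0.2, factor: float = 3.0):
        self.alpha = alpha      # EMA smoothing weight for new runs
        self.factor = factor    # alert when a run exceeds factor x baseline
        self.baseline = None    # learned "normal" runtime in seconds

    def observe(self, seconds: float) -> bool:
        """Record one run; return True if it's a duration anomaly."""
        anomaly = self.baseline is not None and seconds > self.factor * self.baseline
        if self.baseline is None:
            self.baseline = seconds
        elif not anomaly:
            # Fold only normal runs into the baseline, so a single
            # spike doesn't drag the average upward.
            self.baseline += self.alpha * (seconds - self.baseline)
        return anomaly

b = DurationBaseline()
for run in [90, 92, 88, 91]:
    b.observe(run)           # normal runs train the baseline
print(b.observe(14_400))     # a 4h run against a ~90s baseline → True
```

A real implementation would also need to handle variance (a job that normally swings between 1 and 10 minutes shouldn't alert at 9), which is why percentile- or standard-deviation-based bands are a common refinement.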

Jobs worth monitoring for duration

Database backups

A backup that takes 10× longer than normal means either the backup is failing or your database has grown unexpectedly. Either way — you should know.

ETL / data pipelines

A pipeline that suddenly takes twice as long is a sign of upstream data volume changes, slow queries, or memory pressure.

Report generation

A nightly report that normally takes 3 minutes but is now running for an hour will miss its delivery window. Duration alerts get there first.

Sync jobs

A sync that's slowly falling behind schedule shows up as a gradual runtime increase — invisible to standard ping monitoring.

Queue consumers

Background workers processing more data than usual or hitting slow external APIs will run long — alerting you before the queue backs up.

GitHub Actions workflows

A scheduled CI job that suddenly takes 3× longer may indicate a flaky test, infrastructure issue, or unexpected dependency slowdown.

Why existing tools miss this

Healthchecks.io
Supports a /start endpoint to measure max runtime, but it only alerts if the job doesn't send an /end ping within the window. There's no automatic baseline learning — it won't catch a job that finishes late but within your manual limit.
Cronitor
Tracks execution time per job, but duration alerts require manual threshold configuration — you set the limit, you maintain it. As your jobs evolve, your thresholds go stale.
DeadManCheck
Learns your job's normal runtime automatically. When a run exceeds the rolling average by a configurable margin, you get alerted — no setup, no maintenance.

Related guides

Feature · Cron job output monitoring — Alert when jobs run but produce bad output
Overview · Cron job monitoring — All three failure modes explained
Compare · vs Healthchecks.io — See how duration monitoring compares

Start catching duration anomalies

Free plan. 5 monitors. No card required. Takes 5 minutes to set up.

Start monitoring free →

Or self-host for free on GitHub