Duration Monitoring
Most tools only check if your job ran. They don't check how long it took. DeadManCheck learns your job's normal runtime and alerts when it drifts — automatically, no thresholds to configure.
Your nightly database backup normally takes 90 seconds. Last night it took 4 hours. It eventually finished — so exit code 0, ping sent, all monitoring dashboards green.
But something was clearly wrong. A slow disk. A table lock. A network bottleneck. The job degraded, silently, while every monitoring tool you have told you everything was fine.
By the time you noticed, three more backups had run equally slowly, your retention window had drifted, and you had no idea when the problem started.
Send a /start ping before your job runs and an /end ping when it completes.
DeadManCheck tracks your job's runtime over time and builds a rolling average — no manual config required.
When a run significantly exceeds the baseline, or your hard max duration, you get alerted immediately.
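In practice, the two pings just wrap your job. Here's a minimal sketch of that pattern in Python — the check URL is a hypothetical placeholder, and the `run_with_pings` helper is illustrative, not part of any SDK:

```python
import subprocess
import urllib.request


def ping(url: str) -> None:
    """Fire a monitoring ping; a failed ping must never break the job itself."""
    try:
        urllib.request.urlopen(url, timeout=10)
    except OSError:
        pass  # swallow network errors so monitoring can't take down the job


def run_with_pings(cmd, base_url, send=ping):
    """Send /start, run the job, send /end, and return the job's exit code."""
    send(base_url + "/start")          # marks the start of the run
    result = subprocess.run(cmd)       # the job being timed
    send(base_url + "/end")            # marks completion; duration = end - start
    return result.returncode


# Example (hypothetical check URL -- substitute your monitor's real one):
# run_with_pings(["/usr/local/bin/backup.sh"],
#                "https://deadmancheck.example/ping/abc123")
```

The same wrapper works from cron directly: `curl -fsS <url>/start` before the command and `curl -fsS <url>/end` after it achieve the identical effect in plain shell.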
Whether you're searching for how to detect slow cron jobs, how to alert when cron job duration increases, or why a cron job is taking too long to complete — this is the page.
Runtime degradation is one of the hardest infrastructure problems to catch because it happens gradually. A job that used to take 30 seconds now takes 5 minutes. No crash, no error, no alert — just a slow creep that compounds over days until something downstream breaks.
DeadManCheck solves this by building a rolling baseline from your job's real execution history. When a run deviates significantly — whether it's a one-off spike or a gradual drift — you get alerted. No manual thresholds to set or maintain.
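The idea behind baseline learning can be sketched roughly as a rolling mean with a deviation band — an illustrative stand-in, not DeadManCheck's actual algorithm, and all parameter names here are assumptions:

```python
from statistics import mean, stdev


def is_anomalous(history, runtime, min_samples=5, sigma=3.0, floor=5.0):
    """Flag a runtime that deviates significantly from the rolling baseline.

    history:     recent runtimes in seconds (the rolling window)
    min_samples: don't alert until the baseline has enough data points
    sigma:       how many standard deviations count as a real deviation
    floor:       minimum spread in seconds, so very stable jobs
                 don't alert on harmless one-second jitter
    """
    if len(history) < min_samples:
        return False                    # baseline not learned yet
    baseline = mean(history)
    spread = max(stdev(history), floor)
    return abs(runtime - baseline) > sigma * spread


# A backup that normally takes ~90 s:
history = [88, 91, 90, 89, 92]
is_anomalous(history, 95)      # normal jitter, no alert
is_anomalous(history, 14400)   # the 4-hour run clearly deviates
```

Because the window rolls forward, a legitimate change (say, the database genuinely doubled) eventually becomes the new baseline instead of alerting forever.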
A backup that takes 10× longer than normal means either the backup is failing or your database has grown unexpectedly. Either way — you should know.
A pipeline that suddenly takes twice as long is a sign of upstream data volume changes, slow queries, or memory pressure.
A nightly report that normally takes 3 minutes but is now running for an hour will miss its delivery window. Duration alerts get there first.
A sync that's slowly falling behind schedule shows up as a gradual runtime increase — invisible to standard ping monitoring.
Background workers processing more data than usual or hitting slow external APIs will run long — alerting you before the queue backs up.
A scheduled CI job that suddenly takes 3× longer may indicate a flaky test, infrastructure issue, or unexpected dependency slowdown.
Other monitoring tools offer a /start endpoint to measure max runtime, but they only alert if the job doesn't send an /end ping within the window. There's no automatic baseline learning, so they won't catch a job that finishes late but still within your manual limit.

Free plan. 5 monitors. No card required. Takes 5 minutes to set up.
Start monitoring free →