ETL Job Monitoring

Your ETL job completed.
It exported 0 rows.

Data pipelines fail in ways that look like success. Exit 0, ping sent, dashboard green — while your downstream tables sit empty. DeadManCheck catches it.

Start monitoring free → See quickstart
Standard monitoring
✓ ETL ran at 04:00
✓ Exit code 0
✓ Ping received
Rows inserted: 0  ·  upstream API token expired
Alert sent: no
DeadManCheck
✓ ETL ran at 04:00
✓ Exit code 0
✓ Ping received
rows_inserted: 0  ·  assertion: rows_inserted > 0
⚠ Assertion failed — alert sent

How ETL jobs fail silently

API token expired — upstream returned empty response
Job handled the empty response gracefully. Exited 0. Inserted nothing. Your tables are now a day behind. (See the sketch after this list.)
Schema change upstream broke parsing — 0 records matched
The source added a field. Your parser skipped every row. Exit 0. You won't notice until someone queries stale data.
10× more data than usual — job took 3 hours instead of 10 minutes
The pipeline succeeded. But it's now competing with your morning workload and your users are hitting stale reads.
Deduplication logic removed all rows — net inserts: 0
A logic change meant every incoming record was treated as a duplicate. The job ran fine. Nothing landed in your warehouse.
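
To make the first failure mode concrete, here is a minimal sketch with the upstream call simulated; every name in it is hypothetical.

# Python (illustrative)
def fetch_export():
    # Imagine the upstream API with an expired token: it returns an
    # empty payload, {"records": []}, instead of raising an error.
    return {"records": []}

records = fetch_export().get("records", [])
inserted = 0
for record in records:       # loop body never runs
    inserted += 1            # your warehouse insert would go here
print(f"inserted {inserted} rows")   # prints "inserted 0 rows"
# The script exits 0, so standard monitoring calls this a success.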

Monitor your ETL in 3 lines

Send row counts with your ping. Set assertions. Get alerted the moment your pipeline produces nothing.

# Python
import requests
requests.get("https://deadmancheck.io/ping/your-token/start")
result = run_etl_pipeline()
requests.post("https://deadmancheck.io/ping/your-token/end",
    json={
        "rows_inserted": result.inserted,
        "rows_failed": result.failed,
        "source": result.source_name
    })
# Bash
curl -s https://deadmancheck.io/ping/your-token/start
ROWS=$(python etl.py --count-only)
curl -s -X POST https://deadmancheck.io/ping/your-token/end \
  -H "Content-Type: application/json" \
  -d "{\"rows_inserted\": $ROWS}"
# Assertions
rows_inserted > 0
rows_failed == 0
source != "empty"

Set these assertions in the dashboard. If any fail, you get alerted immediately.
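
For example, assuming the end-ping endpoint and fields from the snippet above, a run that produces nothing would send a payload like this one, which violates rows_inserted > 0 and trips the alert (the source name is a placeholder):

# Python (illustrative)
import requests

requests.post("https://deadmancheck.io/ping/your-token/end",
    json={"rows_inserted": 0, "rows_failed": 0, "source": "users_api"})
# rows_inserted == 0 fails the first assertion, so the alert fires.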

Everything you need for ETL monitoring

Missing pipeline alerts
Know immediately if your ETL doesn't run on schedule — before downstream queries hit stale data.
Duration monitoring
Automatic baseline learning. Alert when your pipeline takes 10× longer than normal — before it impacts your users.
Did it actually insert any data?
Validate row counts, error counts, source names — anything your ETL produces. Alert the moment your pipeline returns empty.
Works with any pipeline stack
Airflow, dbt, Prefect, Luigi, custom Python/Bash scripts — anything that can make an HTTP request. See the wrapper sketch below.
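
One way to wire the pings into any of these stacks is a small wrapper around your ETL callable. This is a minimal sketch, assuming the run_etl_pipeline function and result fields from the example above; the token is a placeholder.

# Python (illustrative)
import requests

PING = "https://deadmancheck.io/ping/your-token"

def monitored(etl_fn):
    """Wrap any zero-argument ETL callable with start/end pings."""
    requests.get(f"{PING}/start")
    result = etl_fn()  # if this raises, the end ping is never sent,
                       # so the run shows up as incomplete
    requests.post(f"{PING}/end", json={
        "rows_inserted": result.inserted,
        "rows_failed": result.failed,
    })
    return result

# From Airflow, for example, pass it as a PythonOperator callable:
#   PythonOperator(task_id="etl", python_callable=lambda: monitored(run_etl_pipeline))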

Related guides

Feature · Cron job output monitoring · How output assertions work
Feature · Monitor long-running cron jobs · Catch pipelines that take far too long
Use case · Backup monitoring · Monitor backup jobs for silent failures

Stop trusting "exit 0" on your data pipeline

Free plan. 5 monitors. No card required. Set up in 5 minutes.

Start monitoring free →

Or self-host for free on GitHub