You deploy your app to Vercel. Everything works great. Then you need a small recurring task — send a weekly digest, clean up expired tokens, hit a third-party webhook on a schedule. "I'll just add a cron job," you think. Ten minutes, tops.
An hour later you're staring at logs trying to figure out why your function fired three times in a row, or why it ran fine in staging but silently skipped production last night.
If you've been there, you're not alone. Scheduled jobs are one of those things that feel simple until they aren't.
Why serverless makes this harder than it looks
Traditional servers run continuously. You start a cron daemon, and it sits there, watches the clock, and runs your task when it's time. Simple and reliable.
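On a traditional server, the whole contract fits in one crontab line (the script path below is a placeholder):

```
# m h dom mon dow  command
0 5 * * *  /usr/local/bin/cleanup.sh   # every day at 05:00
```

The daemon owns the clock, so the schedule and the execution live in the same place.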
Serverless functions work differently. They start when a request comes in, handle it, and shut down. There's no persistent process keeping time in the background.
Platforms like Vercel, Railway, and Render have added cron support to fill this gap. You configure a schedule, and they send an HTTP request to your function at the right time. Easy enough — until things go wrong:
- Your function times out halfway through, and the job silently fails with no record of what happened
- The platform fires the HTTP request twice — network retry, platform restart, a brief hiccup — and your job runs twice
- You have no visibility into whether last night's cleanup actually finished
- Your endpoint URL changes during a deployment and nobody notices for days
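The setup side really is easy. On Vercel, for instance, a schedule is declared in vercel.json (the path and schedule here are placeholders):

```json
{
  "crons": [
    { "path": "/api/send-digest", "schedule": "0 9 * * 1" }
  ]
}
```

Vercel then calls that path on that schedule, and every failure mode on the list above applies to that call.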
The duplicate execution problem
The double-fire issue is the most dangerous one. Cron-over-HTTP has no built-in exactly-once guarantee. If the request gets retried — and it will, eventually — your job runs twice.
For idempotent jobs like "check if anything is overdue," running twice is just wasteful. For "send the welcome email" or "process the subscription renewal," it's a real bug.
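One defense on the receiving side is an idempotency key: the handler records each execution ID it has seen and skips repeats. A minimal sketch in Python, with an in-memory set standing in for a durable store (all names here are illustrative):

```python
# `processed` stands in for durable storage — in production you'd use a
# database table with a unique constraint on the execution ID, so the
# check survives restarts and works across multiple workers.
processed = set()

def handle_job(execution_id: str, send_email) -> bool:
    """Run the side effect only if this execution hasn't been seen.

    Returns True if the side effect ran, False if this was a duplicate
    delivery (e.g. a network retry of the same scheduled execution).
    """
    if execution_id in processed:
        return False          # retry of an already-handled delivery
    processed.add(execution_id)
    send_email()
    return True
```

The key point is that the dedupe check and the side effect are keyed to the *execution*, not the request, so a retried request is recognized as the same job run.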
The standard fix is a queue with atomic dequeue semantics. When a job is due, the scheduler pushes exactly one message. Workers pull from the queue using a blocking atomic pop — whoever gets there first receives the message, and it's gone. There's nothing left for the other workers to pick up.
Redis's BRPOP is atomic: it blocks until a message arrives, then removes it and returns it to exactly one caller. Ten workers can be waiting on the same queue — only one of them will ever see any given job execution.
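The same guarantee can be seen in miniature with a thread-safe queue: `get()` hands each item to exactly one caller, just as BRPOP does across processes. A sketch, with Python threads standing in for distributed workers:

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()
received = []  # list.append is atomic in CPython, safe across threads

def worker(worker_id: int) -> None:
    try:
        # Block briefly waiting for a job — like BRPOP with a timeout.
        job = jobs.get(timeout=1)
        received.append((worker_id, job))
    except queue.Empty:
        pass  # this worker never saw the message

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()

jobs.put("job-42")  # the scheduler pushes exactly one message

for t in threads:
    t.join()
# Ten workers waited; exactly one of them holds the job.
```

In production the queue lives in Redis so workers on different machines share it, but the shape of the guarantee is identical: one message, one recipient.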
That's how Tickstem handles it. One message per execution, one worker per message. No coordination logic needed in your application code.
The visibility gap
The second problem is quieter but just as frustrating: you usually have no idea what your cron jobs are actually doing.
Most platform cron implementations fire and forget. They send the HTTP request. Whether it succeeded, how long it took, how your server responded — that's on you to log and correlate.
When a job fails, you want to know: when did it run, what did it get back, how long did it take, did it fail once or ten times in a row? Without that, debugging means digging through scattered logs and hoping you captured the right context at the right time.
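The fix is to treat every execution as a structured record rather than a log line. A sketch of the minimum worth capturing around each call (the field names are illustrative, not any real schema):

```python
import time
import urllib.request
from datetime import datetime, timezone

def run_job(url: str) -> dict:
    """POST to a job endpoint and capture enough context to debug later."""
    record = {
        "url": url,
        "started_at": datetime.now(timezone.utc).isoformat(),
    }
    t0 = time.monotonic()
    try:
        req = urllib.request.Request(url, data=b"", method="POST")
        with urllib.request.urlopen(req, timeout=30) as resp:
            record["status"] = resp.status
            # Keep a bounded slice of the body for later inspection.
            record["body"] = resp.read(4096).decode(errors="replace")
    except Exception as exc:
        record["error"] = repr(exc)  # timeouts and refusals are data too
    record["duration_ms"] = round((time.monotonic() - t0) * 1000)
    return record
```

With records like these persisted per execution, "did last night's cleanup finish?" becomes a query instead of an archaeology project.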
Every Tickstem execution records the full picture — request headers, response status, response body, duration, and whether it was scheduled on time or ran late. You can inspect any execution in the dashboard or query the history via API.
What we built
Tickstem is an HTTP job scheduler. You give it a URL and a cron schedule. It fires POST requests on time, logs every execution, and sends you an email alert if a job starts failing consistently — before your users notice.
It's not magic. It's the pattern most developers end up building themselves when the platform's built-in cron isn't reliable enough — except you don't have to build it.
It works on Vercel, Railway, Render, Fly.io, or anywhere else your backend lives. No SDK required — plain HTTP. If you want a Go-native integration, the SDK handles registration and execution history in a few lines.
Try Tickstem free
1,000 executions a month. No credit card required. Set up in two minutes.
Get started →