Systemd instead of Cron or a Simple Queue
My site displays running data from Strava’s API. Like any static site with dynamic data, it needs to fetch updates and rebuild periodically. Here’s how that requirement led me to a TIL about systemd features I didn’t know existed.
The Problem
My site shows my running stats, all pulled from Strava’s API. I have a script (let’s call it /usr/local/bin/strava_download) that fetches the data, and since I’m using a Static Site Generator, I need to rebuild the site (let’s call that one /usr/local/bin/build_site) after each update.
The challenge: keeping the data fresh without being wasteful. Rebuild too often and I’m burning CPU for nothing. Rebuild too rarely and my stats are stale.
First Try: Cron
My initial instinct was to reach for cron:
0 * * * * /usr/local/bin/strava_download && /usr/local/bin/build_site
Hourly updates. It works and is technically acceptable, but it felt crude: too brittle, too wasteful, and not very elegant.
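Part of the brittleness: plain cron gives you no overlap protection, so if one run hangs, the next starts on top of it. You can bolt that on yourself with flock (a sketch; the lock path here is arbitrary):

```
0 * * * * /usr/bin/flock -n /tmp/strava-site.lock -c '/usr/local/bin/strava_download && /usr/local/bin/build_site'
```

At that point you’re already reimplementing pieces of what systemd gives you for free.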
Second Try: Systemd Timers
Turns out systemd can do the same thing, but with better control and monitoring capabilities.
Created a service unit:
sudo nvim /etc/systemd/system/strava-site.service
[Unit]
Description=Run Strava sync and site deploy
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
User=fetsh
# Before starting, systemd creates /run/strava-site with the given mode, owned by User=. Removed when the service stops. We use this for the lock file.
RuntimeDirectory=strava-site
RuntimeDirectoryMode=0755
# Single, non-blocking lock for the entire sequence. flock -n exits immediately with a failure status if the lock is already held (no waiting). This prevents overlap if the job is triggered again while running.
ExecStart=/usr/bin/flock -n /run/strava-site/lock \
/bin/bash -c '/usr/local/bin/strava_download && /usr/local/bin/build_site'
Nice=10
IOSchedulingClass=best-effort
IOSchedulingPriority=7
PrivateTmp=true
ProtectSystem=full
NoNewPrivileges=true
[Install]
WantedBy=multi-user.target
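The flock -n behavior is easy to verify on its own, outside systemd. In this throwaway sketch, a background job holds the lock while a second attempt bails out immediately instead of waiting:

```shell
lock=$(mktemp)                  # throwaway lock file for the demo
flock -n "$lock" sleep 2 &      # first holder: keeps the lock for 2 seconds
sleep 0.2                       # give the background job time to grab it
if flock -n "$lock" true; then  # second attempt: -n fails fast, no waiting
  echo "acquired"
else
  echo "busy"                   # this is what an overlapping service run hits
fi
wait
```

Because the unit is Type=oneshot, a run that loses the lock simply fails fast, and the next trigger gets a clean attempt.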
And a timer unit:
sudo nvim /etc/systemd/system/strava-site.timer
For wall-clock scheduling:
[Unit]
Description=Hourly trigger for our site
[Timer]
OnCalendar=hourly
Persistent=true
AccuracySec=5m
RandomizedDelaySec=60
[Install]
WantedBy=timers.target
Or for relative scheduling (one hour after the service last became active):
[Unit]
Description=Hourly trigger for our site
[Timer]
OnBootSec=10min
OnUnitActiveSec=1h
# Note: Persistent= only applies to OnCalendar= timers, so it is omitted here
Unit=strava-site.service
[Install]
WantedBy=timers.target
Activate it:
sudo systemctl daemon-reload
sudo systemctl enable --now strava-site.timer
This was better than cron, but still fundamentally flawed. I was rebuilding my site 24 times a day when I typically only run once. Plus, Strava explicitly recommends using webhooks instead of polling their API.
Third Try: Systemd as a Message Queue
Here’s where it gets interesting. Strava webhooks expect an immediate response, with the actual processing happening asynchronously. Normally that means setting up a queueing system, which in my case usually means Sidekiq (backed by Redis). But I didn’t want to stand up all that infrastructure for such a simple task. What if systemd could be the queue?
I wrote a small Roda application to receive webhooks and save each update as a JSON file in /var/lib/strava/webhook_flags. Then I let systemd handle the rest.
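The receiver itself is ordinary Roda code, but the one detail that matters to systemd is how each file lands in the watched directory. A sketch of the idea in shell (the paths and payload here are made up): write to a temp file first, then rename into place, so the path unit never wakes up for a half-written file.

```shell
flags=/tmp/webhook_flags            # stand-in for /var/lib/strava/webhook_flags
mkdir -p "$flags"
payload='{"object_type":"activity","aspect_type":"update","object_id":123}'
tmp=$(mktemp "$flags/.tmp.XXXXXX")  # dotfile, so it never matches *-update.json
printf '%s\n' "$payload" > "$tmp"
mv "$tmp" "$flags/123-update.json"  # rename(2) is atomic within one filesystem
```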
The Path Unit (Watcher)
sudo nvim /etc/systemd/system/strava-webhook-update.path
[Unit]
Description=Trigger Strava webhook drain when new event files exist
[Path]
# Activates the unit whenever a matching file exists; the condition is re-checked after the service finishes, so leftover files trigger it again
PathExistsGlob=/var/lib/strava/webhook_flags/*-update.json
Unit=strava-webhook-update.service
[Install]
WantedBy=multi-user.target
The Service Unit (Processor)
This handles the actual processing:
sudo nvim /etc/systemd/system/strava-webhook-update.service
[Unit]
Description=Start Strava site pipeline on update flag
[Service]
Type=oneshot
ExecStartPre=/usr/bin/install -d -m 0755 -o fetsh -g strava-data /var/lib/strava/webhook_flags_done
# Start the main job (protected by flock in the existing unit)
ExecStart=/bin/systemctl start strava-site.service
# After successful start, move processed flags to archive
ExecStartPost=/bin/bash -c 'set -euo pipefail; shopt -s nullglob; for f in /var/lib/strava/webhook_flags/*-update.json; do mv "$f" /var/lib/strava/webhook_flags_done/; done'
The current setup moves webhook files to the "done" folder regardless of whether strava-site.service succeeds. For my use case, this is acceptable.
Activation and Cleanup
Enable the webhook system:
sudo systemctl daemon-reload
sudo systemctl enable --now strava-webhook-update.path
Kill the old timer:
sudo systemctl stop strava-site.timer
sudo systemctl disable strava-site.timer
sudo systemctl daemon-reload
Room for Improvement
Right now my strava-webhook-update.service just calls strava-site.service, because that service already existed. Technically, it isn’t really a queue; it’s a trigger-plus-worker setup. The .path unit only wakes a service when a new file appears, but doesn’t guarantee reliable delivery, ordering, or acknowledgment after success.
To make it behave like a real queue, we’d combine these into a single service that takes a lock, drains all pending files, processes them, and only after successful completion moves them to a "done" directory. That way, failures leave unprocessed messages in place for the next run: proper at-least-once delivery.
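That combined drain could be as small as one script. This is a sketch with stand-in paths and a stand-in processing step; the real version would process the flags and then run strava_download and build_site once after draining:

```shell
#!/usr/bin/env bash
# Sketch of a drain-style worker: lock, process pending flags, ack on success.
set -euo pipefail
shopt -s nullglob                        # empty glob expands to nothing
flags=/tmp/q/flags; archive=/tmp/q/done  # stand-ins for the real directories
mkdir -p "$flags" "$archive"
exec 9>"$flags/.lock"                    # dotfile, so the glob below ignores it
flock -n 9 || exit 0                     # another drain is already running
for f in "$flags"/*-update.json; do
  cat "$f" >/dev/null                    # stand-in for real per-event processing
  mv "$f" "$archive/"                    # ack only after this event succeeded
done
# a real version would run strava_download && build_site once here
```

Because a failure aborts the loop before the mv (thanks to set -e), the unprocessed flags stay in place and the next trigger picks them up.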
Also, PathExistsGlob might not be the best directive for the path unit. Maybe DirectoryNotEmpty would be more appropriate.
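For reference, that variant would only change the [Path] section (an untested sketch):

```
[Path]
# Fires whenever the directory contains at least one entry, regardless of name
DirectoryNotEmpty=/var/lib/strava/webhook_flags
Unit=strava-webhook-update.service
```

The trade-off: DirectoryNotEmpty= wakes the service for any file, while PathExistsGlob= lets stray files (like the lock) sit there without triggering anything.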
What I’ve learned
- Systemd timers are more capable than cron, with better integration and monitoring
- Path units can watch for filesystem changes, effectively turning directories into message queues
- Flock provides simple but effective concurrency control
I wouldn’t use this for high-volume or critical systems, of course. But for a personal site updating running stats, systemd is more than enough: it’s reliable, observable (journalctl, systemctl status), and needs no extra daemons or dependencies.