How to Automatically Monitor for Broken Links (Without Screaming Frog)
Most broken link guides focus on the audit: run a crawl, find the broken links, fix them. That’s the right starting point. But it treats broken links as a one-time problem rather than the continuous, accumulating issue they actually are.
Broken links don’t arrive in batches during scheduled audits. They appear the moment you change a URL slug or delete a page, or the moment an external resource goes offline. Between your last audit and your next one, broken links sit on your site - wasting crawl budget, leaking PageRank, and frustrating users - and you don’t know about any of it.
The case for automated monitoring isn’t about replacing audits. It’s about shrinking the window between a broken link appearing and you fixing it from months to hours.
Why Manual Audits Aren’t Enough
Screaming Frog is an excellent crawler and a standard tool in any SEO toolkit. But it has a fundamental limitation for ongoing monitoring: it runs when you run it.
The problem with on-demand-only crawls:
Consider a typical content team workflow: publishing 2-3 blog posts per week, occasional URL slug edits for SEO, periodic content consolidation, and regular link updates. Each of these activities can introduce broken links. Between a monthly Screaming Frog run and the next, dozens of broken links can accumulate.
With only monthly audits, a link that breaks at a random point in the cycle waits, on average, half the audit interval before anyone even notices: about 15 days.
During those 15 days:
- Googlebot may crawl the broken links multiple times, wasting crawl budget each time
- Any users following those links hit 404 pages
- If the broken page had backlinks from external sources, that equity is leaking
Screaming Frog’s specific gaps for monitoring:
- It’s a desktop application - it runs on-demand, not on a schedule
- It doesn’t send alerts; you have to go looking
- It can’t run in the background while your team works
- Large sites take significant time to crawl, so it often runs less frequently than needed
- It has no integrations with alerting tools unless you script around it
None of this makes Screaming Frog bad. It’s the right tool for deep one-time audits, pre-launch checks, and technical SEO investigations. It’s the wrong tool for continuous monitoring.
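To be fair, scheduling is possible if you script around it: the paid version of Screaming Frog ships a headless CLI that can be driven from cron. A configuration sketch follows - the schedule, paths, and domain are placeholders, and flag names should be verified against the CLI documentation for your installed version:

```shell
# Crontab sketch: headless Screaming Frog crawl every Monday at 06:00.
# Requires a paid licence for CLI mode; flags per the Screaming Frog
# user guide - verify against your installed version.
0 6 * * 1 screamingfrogseospider --crawl https://example.com --headless \
    --output-folder /var/crawls --overwrite \
    --export-tabs "Response Codes:Client Error (4xx)"
```

Even with this in place, you still have to build the diffing, alerting, and history tracking yourself - which is exactly the gap the rest of this article addresses.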
Method 1: Automated Site Monitoring with redCacti
The most complete solution for ongoing broken link monitoring is a cloud-based site crawler that runs on your schedule and alerts you when it finds problems.
Setting up redCacti monitoring:

1. Create a free account at redcacti.com/auth/register
2. Add your site
   - Click Add Site in the dashboard
   - Enter your domain
   - Select your sitemap URL (or let redCacti auto-discover it)
3. Configure your crawl schedule
   - For content sites publishing weekly: set weekly crawls
   - For high-volume sites or after a major change: set daily crawls
   - For stable, low-update sites: every two weeks is sufficient

What gets monitored automatically:
- Every URL in your sitemap (checked on each crawl)
- All internal links found during the crawl
- HTTP status for every destination URL (4xx, 5xx, redirects, timeouts)
- Changes in broken link count compared to the previous crawl (delta alerting)

Reading the broken links report:
- Source page - where the broken link lives on your site
- Broken URL - the destination returning an error
- Status code - 404, 500, redirect loop, etc.
- Anchor text - helps you find the link in the editor quickly
- First detected - when the broken link first appeared (helps you prioritize recent breakage over old)
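Under the hood, a scheduled crawl is conceptually simple: fetch the sitemap, check the HTTP status of each URL, and collect the failures. A minimal DIY sketch in Python's standard library (illustrative only - the function names and User-Agent string are made up for this example, and a real crawler also follows internal links, throttles requests, and respects robots.txt):

```python
# Minimal sketch of a sitemap-based link check (not a full crawler).
import urllib.error
import urllib.request
from xml.etree import ElementTree

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(xml_text):
    """Extract <loc> URLs from a sitemap XML string."""
    root = ElementTree.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def status_of(url, timeout=10):
    """Return the HTTP status code for url (0 on network failure)."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-check-sketch"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # 4xx/5xx raise, but carry the real status
    except urllib.error.URLError:
        return 0               # DNS failure, timeout, connection refused

def broken(statuses):
    """Filter (url, status) pairs down to errors: 4xx, 5xx, or unreachable."""
    return [(u, s) for u, s in statuses if s == 0 or s >= 400]

# Usage sketch:
#   urls = urls_from_sitemap(sitemap_xml)
#   report = broken([(u, status_of(u)) for u in urls])
```

Running something like this on a schedule, persisting results, and alerting on changes is precisely the work a hosted crawler does for you.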
Why cloud-based monitoring beats a local desktop tool for ongoing monitoring:
- Runs automatically on schedule - no one needs to remember to run it
- Crawls from external IPs - sees what Google and real users see, not what’s inside your network
- Persists history - you can see when a broken link first appeared and track trends
- Sends alerts - broken links are pushed to you rather than waiting for you to look
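The delta alerting mentioned above - notify only when something new breaks, rather than re-reporting known issues - comes down to a set comparison between two crawls. A hypothetical helper (not redCacti's actual code) makes the idea concrete:

```python
def crawl_delta(previous_broken, current_broken):
    """Compare broken-URL sets from two crawls.

    Returns (newly_broken, fixed) so an alert fires only when
    newly_broken is non-empty, instead of repeating known issues.
    """
    newly_broken = sorted(set(current_broken) - set(previous_broken))
    fixed = sorted(set(previous_broken) - set(current_broken))
    return newly_broken, fixed
```

This is also why persisted crawl history matters: without the previous crawl's results, every report is a full dump and every alert is noise.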
Method 2: Google Search Console Coverage Alerts
Google Search Console is free, already knows about broken links Googlebot has encountered, and can email you when new issues appear. It’s not a replacement for a dedicated crawler (it only shows what Google has already found), but it’s a zero-setup first layer of protection.
Enabling GSC email alerts:
- Open Google Search Console
- Click the bell icon (notifications) in the top-right
- Click Manage notifications
- Ensure Coverage issues (or Indexing issues) is enabled
- Set your notification email
GSC will now email you when new 404 errors appear in Googlebot’s crawl data. The delay is typically 1-7 days - it’s not instant, because GSC reports Googlebot’s crawl data rather than running its own checks in real time.
Monitoring GSC manually without waiting for alerts:
- Go to Indexing -> Pages
- Click Not indexed
- Filter by “404 (Not found)”
- Sort by “Discovered” date to see the newest 404s first
Make this a weekly 5-minute habit: check for new 404s every Monday and add them to your fix queue.
GSC’s limitations for monitoring:
- Only catches broken links that Googlebot has found - misses links on rarely-crawled pages
- Delay between a link breaking and appearing in GSC (1-7 days typically)
- No source-page data for 404s that Googlebot found through external links (only shows 404 URL, not which page it came from)
- Does not detect broken external links (links from your site to other domains)
Method 3: Pre-Publish Link Checks in Your Content Workflow
Automated monitoring catches broken links after they go live. Pre-publish checks prevent them from going live in the first place.
Adding link checking to your content publishing process:
This is especially important for teams where writers or editors create content without an SEO review step. A broken link in a new blog post will be discovered on the next scheduled crawl - but why wait when you can check before publishing?
Browser extension check (60 seconds per post):
Before any content editor hits “Publish”:
- Open the preview/draft URL in Chrome
- Run the Check My Links extension (free, Chrome Web Store)
- Any broken links highlight in red - fix them before publishing
- Takes under 60 seconds per post
For teams using Notion, Google Docs, or other content drafting tools:
External links added during drafting may have been valid when written but broken by publishing time - especially for links to news articles, case studies, or research papers that get moved or paywalled. A pre-publish check catches these.
Adding a broken link check to your content workflow SOP:
If your team has a content checklist (pre-publish review doc), add:
- Open the preview URL in Chrome
- Run Check My Links extension
- Fix any red-highlighted links before publishing
Simple, takes 60 seconds, and eliminates an entire class of new broken links.
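If your team prefers a scriptable check over a browser extension, the extraction half of the job fits in a few lines of Python's standard library. This sketch only collects the outbound links from a draft's HTML - pair it with a status check (as in the sitemap example earlier) before trusting the result:

```python
# Sketch: pull absolute link targets out of a draft's HTML.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect absolute href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            # Skip fragments and relative paths; check absolute URLs only.
            if name == "href" and value and value.startswith(("http://", "https://")):
                self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

For most content teams the extension is faster; the script variant is useful when drafts live in a repository and the check should run without a human in the loop.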
Method 4: CI/CD Integration for Developer Teams
For sites with a build/deploy pipeline - Jamstack, static site generators, or any site where content is committed to a repository - broken link checking can be automated at the deployment stage. This prevents broken links from ever reaching production.
Using lychee in GitHub Actions:
```yaml
name: Broken Link Check

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build site
        run: npm run build # or your build command

      - name: Check links
        uses: lycheeverse/lychee-action@v1
        with:
          args: >
            --verbose
            --no-progress
            --exclude "linkedin.com"
            --exclude "twitter.com"
            './dist/**/*.html'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
This checks all HTML files in the build output for broken links on every pull request and push to main. The --exclude flags skip social media sites that commonly block automated requests (causing false positives).
Using linkinator in an npm script:
```json
{
  "scripts": {
    "build": "astro build",
    "check-links": "linkinator ./dist --recurse --skip '^https://twitter' --skip '^https://linkedin'",
    "build-and-check": "npm run build && npm run check-links"
  }
}
```
Run npm run build-and-check before any deployment to catch broken links at build time.
What CI/CD link checking catches (and misses):
| Scenario | Caught by CI/CD? |
|---|---|
| Broken internal link in new content before deploy | ✅ Yes |
| Broken external link in new content before deploy | ✅ Yes |
| Link that breaks after deploy (external site goes offline) | ❌ No |
| Link that breaks when you delete a page post-deploy | ❌ No |
| Redirect chain introduced by a new redirect rule | Partial - depends on config |
CI/CD link checking is a gate that prevents new broken links from being deployed. It does not monitor for broken links that appear after deployment. For those, you need Methods 1 or 2.
Building a Broken Link Triage Process
Automated monitoring is only useful if the alerts result in action. Without a triage process, broken link notifications become noise that gets ignored.
Weekly triage workflow (15 minutes):
- Monday morning: Review the weekly broken links report (from redCacti or GSC)
- Categorize by urgency:
- Navigation/homepage links -> Fix today
- High-traffic page links -> Fix this week
- Low-traffic page links -> Fix this sprint
- External link errors -> Verify and batch-fix monthly
- Assign fixes: For team environments, assign broken links to the content owner of the source page
- Verify after fixing: After each fix, confirm the link resolves correctly (click it, or recheck with a tool)
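The urgency tiers above can be encoded so routing stays consistent rather than ad hoc. A sketch with illustrative rules - the traffic threshold and the tier labels are assumptions to tune for your site:

```python
def triage(in_navigation, monthly_visits, external_target):
    """Map a broken link to a fix-urgency tier per the workflow above."""
    if in_navigation:              # navigation/homepage links
        return "fix today"
    if external_target:            # links out to other domains
        return "verify and batch-fix monthly"
    if monthly_visits >= 1000:     # illustrative traffic threshold
        return "fix this week"
    return "fix this sprint"
```

Encoding the rules once means a new broken-link report can be sorted and assigned mechanically, which keeps the weekly triage inside its 15-minute budget.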
Routing alerts to the right people:
| Alert type | Who fixes it |
|---|---|
| Broken internal link on a blog post | Content writer / editor |
| Broken navigation link | Web developer or CMS admin |
| Broken link on a landing page | Marketing or growth team |
| Broken external link | Content writer / SEO |
| 500 server error | Developer / infrastructure |
A broken link report that goes to a single inbox where nobody feels ownership gets ignored. Route alerts to the team that owns the source page.
Comparison: Monitoring Methods Side-by-Side
| Method | Frequency | Setup Time | Cost | What It Misses |
|---|---|---|---|---|
| redCacti automated crawl | Weekly / Daily | 10 min | Free tier available | Breakage between scheduled crawls |
| Google Search Console alerts | ~1-7 day delay | 5 min | Free | External broken links, slow-crawled pages |
| Pre-publish browser extension | Per page, pre-publish | 2 min | Free | Post-publish breakage |
| CI/CD link checking | Per deployment | 1-2 hours | Free | Post-deploy breakage |
| Screaming Frog (manual) | When you run it | 30 min | Free (≤500 URLs) or paid | Everything between runs |
The recommended stack for most teams: redCacti weekly crawl (primary) + GSC alerts (free secondary) + pre-publish extension (prevents new broken links). Add CI/CD if you have a build pipeline.
Summary Checklist
Set up automated monitoring:
- Register at redcacti.com and add your site
- Configure weekly crawl schedule
- Enable email alerts for new broken links
Add Google Search Console monitoring:
- Enable Coverage issue notifications in GSC
- Add weekly manual GSC check to calendar (5 minutes, Mondays)
Add pre-publish checks:
- Install Check My Links Chrome extension
- Add link check step to content publishing SOP
- Brief content team on the 60-second pre-publish check
For developer teams:
- Add lychee or linkinator to CI/CD pipeline
- Configure exclusions for social media sites (prevent false positives)
- Set up PR check so broken links block merges to main
Build a triage process:
- Define fix urgency tiers (navigation > high-traffic > low-traffic > external)
- Assign ownership by page type
- Set weekly triage cadence (15 minutes, Monday)
The difference between a site that manages broken links well and one that doesn’t isn’t how often they run audits - it’s whether they’ve made broken link monitoring automatic.
Manual audits are reactive. Automated monitoring is proactive. When a broken link appears on your site, you want to know within hours or days - not when a user complains or when your rankings start to slip.
The setup described above takes under 30 minutes and runs quietly in the background from that point on.
Set up automated broken link monitoring ->
Free to start. Add your site, configure a crawl schedule, and your first broken link report arrives automatically.
Also in this series: How to Find Broken Links on Your Website · How to Audit All Links Before a Website Redesign