
How Do I Automate a Weekly SEO Audit That Emails Me the Wins and Losses?

Jake McCluskey · Intermediate · 45 min read

Every Monday morning for three years, I opened Google Search Console, then Ahrefs, then a spreadsheet, then a Notion doc. Two hours later I had a weekly SEO report. Most of those two hours were me copying numbers between tabs and noticing the same things I'd noticed last week.

Then I wired it up to Claude. Now a cron job runs at 6am Monday, pulls the data, writes the report, and emails it to me before I'm out of bed. I glance at it with coffee. The two hours are gone. Here's how to build it.

Why this matters

A weekly SEO audit is valuable because it catches decay. Rankings slip, CTR drops, a competitor launches a page that eats your top position. Nobody sees that in daily dashboards — the signal is weekly, at best. Everyone knows they should run weekly audits. Almost nobody does, because the task is boring and slow.

Automation fixes that. Claude isn't replacing the strategic part — it's replacing the "copy numbers from tab to tab" part. You get the signal, you make the call.

For in-house SEOs, agencies, and founders doing their own SEO, this is the single highest-ROI Claude automation I've built. Maybe forty-five minutes of setup for a few hours saved every week, forever.

Before you start

You need:

  • A Google Search Console property for the site you're auditing.
  • An Ahrefs plan with API access. If you don't have Ahrefs, swap in Semrush, SerpApi, or skip that step — GSC alone still produces a useful report.
  • Claude Code installed — if not, see our macOS setup guide.
  • A way to send email — Resend, Postmark, or a Gmail SMTP app password. Resend is the easiest; free tier sends 100 emails/day.
  • About 45 minutes.

Step 1: Get the GSC data pulling

You're going to run a weekly script that hits the Google Search Console API, pulls the last 7 and 14 days of data, and saves both to disk. The 14-day window lets Claude diff one week against the previous.

First, create a Google Cloud service account and download its JSON key. Enable the Search Console API. Grant the service account email "Full" access to your GSC property (Search Console → Settings → Users and permissions).
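If you'd rather do that from the CLI than the Cloud Console UI, something like this works — assuming gcloud is installed and pointed at your project, with PROJECT_ID as a placeholder (the GSC permission grant still happens in the Search Console UI):

bash
gcloud services enable searchconsole.googleapis.com
gcloud iam service-accounts create seo-audit --display-name="SEO audit"
gcloud iam service-accounts keys create gsc-credentials.json \
  --iam-account=seo-audit@PROJECT_ID.iam.gserviceaccount.com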

Create a project folder:

bash
mkdir -p ~/code/seo-audit && cd ~/code/seo-audit
uv init
uv add google-auth google-api-python-client anthropic resend python-dotenv

Put the service-account JSON in the folder as gsc-credentials.json. Make sure it's in .gitignore.
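If the folder doesn't have a .gitignore yet, this covers both secrets the guide touches (the .env file arrives in Step 2):

bash
echo "gsc-credentials.json" >> .gitignore
echo ".env" >> .gitignore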

Create pull_gsc.py:

python
import json
from datetime import datetime, timedelta
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE = "https://yoursite.com/"  # must match GSC property URL exactly
CREDS = "gsc-credentials.json"
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]


def get_data(days_ago_start: int, days_ago_end: int) -> list[dict]:
    creds = service_account.Credentials.from_service_account_file(
        CREDS, scopes=SCOPES
    )
    service = build("searchconsole", "v1", credentials=creds)

    end = datetime.now() - timedelta(days=days_ago_end)
    start = datetime.now() - timedelta(days=days_ago_start)
    request = {
        "startDate": start.strftime("%Y-%m-%d"),
        "endDate": end.strftime("%Y-%m-%d"),
        "dimensions": ["query", "page"],
        "rowLimit": 1000,
    }
    response = service.searchanalytics().query(
        siteUrl=SITE, body=request
    ).execute()
    return response.get("rows", [])


if __name__ == "__main__":
    current = get_data(days_ago_start=7, days_ago_end=0)
    previous = get_data(days_ago_start=14, days_ago_end=7)
    with open("gsc-current.json", "w") as f:
        json.dump(current, f, indent=2)
    with open("gsc-previous.json", "w") as f:
        json.dump(previous, f, indent=2)
    print(f"Current week: {len(current)} rows")
    print(f"Previous week: {len(previous)} rows")

Run it:

bash
uv run python pull_gsc.py

You should see two files appear: gsc-current.json and gsc-previous.json. If you get a 403, the service account doesn't have access to the GSC property — re-check the permissions step.
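Each row in those files has the same shape — keys follows the dimension order from the request (query first, then page). The numbers here are invented:

json
{
  "keys": ["best seo tools", "https://yoursite.com/blog/seo-tools"],
  "clicks": 42,
  "impressions": 1180,
  "ctr": 0.0356,
  "position": 4.2
}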

Step 2: Pull Ahrefs data (optional)

If you have Ahrefs, they expose a REST API with site-level metrics. Create pull_ahrefs.py:

python
import json
import os
import requests
from dotenv import load_dotenv

load_dotenv()
TOKEN = os.environ["AHREFS_API_KEY"]
TARGET = "yoursite.com"

headers = {"Authorization": f"Bearer {TOKEN}"}

def get_domain_rating():
    r = requests.get(
        "https://api.ahrefs.com/v3/site-explorer/domain-rating",
        headers=headers,
        params={"target": TARGET, "date": "today"},
    )
    r.raise_for_status()  # fail loudly (e.g. on a 429) so a bad run writes no file
    return r.json()

def get_organic_keywords():
    r = requests.get(
        "https://api.ahrefs.com/v3/site-explorer/organic-keywords",
        headers=headers,
        params={
            "target": TARGET,
            "limit": 100,
            "order_by": "traffic:desc",
        },
    )
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    data = {
        "domain_rating": get_domain_rating(),
        "top_keywords": get_organic_keywords(),
    }
    with open("ahrefs.json", "w") as f:
        json.dump(data, f, indent=2)
    print("Ahrefs data saved.")

Put the API key in a .env file (also gitignored). Run it.
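By the end of the guide the .env holds three keys — the names match the os.environ lookups in the scripts:

text
AHREFS_API_KEY=your-ahrefs-token
ANTHROPIC_API_KEY=sk-ant-...
RESEND_API_KEY=re_...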

If you skipped Ahrefs, just don't create this file. The next step handles either case.

Step 3: Feed Claude the data and ask for the report

This is the orchestrator. Create write_report.py:

python
import json
import os
from datetime import datetime
from pathlib import Path
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])


def load_json(path: str) -> str:
    p = Path(path)
    if not p.exists():
        return "not available"
    return p.read_text()


PROMPT_TEMPLATE = """You are writing a weekly SEO report for yoursite.com.
Data sources:

# GSC — current 7 days
{gsc_current}

# GSC — previous 7 days (for comparison)
{gsc_previous}

# Ahrefs
{ahrefs}

Write a Markdown report with this structure:

## Headline
One sentence. The single most important change week-over-week.

## Wins
Up to 5 bullets. Each must reference a specific page or query
and a specific number (impressions, clicks, position).

## Losses
Up to 5 bullets. Same format. Losses are anything that dropped
20%+ in clicks or 3+ positions in ranking.

## What I'd do this week
3 concrete actions, one sentence each. No "monitor" or "review" —
actions must be specific (e.g. "rewrite the H1 on /blog/foo-bar
since it ranks 8 for a query it's better suited to target").

## Cold data table
A Markdown table with the top 10 movers (by absolute change in
clicks), columns: Query, Page, Clicks Δ, Position Δ.

Rules:
- No hype words. No "optimization opportunities." No "leverage."
- Specific numbers. If you don't have the number, don't make one up.
- If a data source is missing, say so at the top and continue.
"""


def main():
    prompt = PROMPT_TEMPLATE.format(
        gsc_current=load_json("gsc-current.json"),
        gsc_previous=load_json("gsc-previous.json"),
        ahrefs=load_json("ahrefs.json"),
    )

    resp = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )

    report = resp.content[0].text
    today = datetime.now().strftime("%Y-%m-%d")
    out = Path(f"reports/seo-{today}.md")
    out.parent.mkdir(exist_ok=True)
    out.write_text(report)
    print(f"Report saved: {out}")


if __name__ == "__main__":
    main()

Run it. You'll get a Markdown file in reports/. Read it. If the voice is off or the structure is wrong, tweak the prompt template and re-run. Two or three iterations dial it in.

Step 4: Email the report

Create send_email.py:

python
import os
from datetime import datetime
from pathlib import Path
import resend
from dotenv import load_dotenv

load_dotenv()
resend.api_key = os.environ["RESEND_API_KEY"]

today = datetime.now().strftime("%Y-%m-%d")
report = Path(f"reports/seo-{today}.md").read_text()

# Very basic Markdown-to-HTML conversion. For a nicer email, pipe
# through a real Markdown library (markdown-it-py, markdown2).
html = "<pre style='font-family: Menlo, monospace; white-space: pre-wrap;'>" \
       + report.replace("&", "&amp;").replace("<", "&lt;") \
       + "</pre>"

resend.Emails.send({
    "from": "[email protected]",
    "to": "[email protected]",
    "subject": f"SEO weekly — {today}",
    "html": html,
    "text": report,
})
print("Email sent.")
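If you want the nicer version the comment mentions, markdown2 is a two-line swap (uv add markdown2 first); the tables extra matters because the report ends with a Markdown table:

python
import markdown2

html = markdown2.markdown(report, extras=["tables"])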

Test it manually:

bash
uv run python send_email.py

Check your inbox.

Step 5: Wire it to cron

Now chain the four scripts and schedule them. Create run_weekly.sh:

bash
#!/bin/bash
cd /Users/you/code/seo-audit
uv run python pull_gsc.py || exit 1
uv run python pull_ahrefs.py || true   # optional source
uv run python write_report.py || exit 1
uv run python send_email.py || exit 1

Make it executable:

bash
chmod +x run_weekly.sh

Add to crontab with crontab -e. For 6am every Monday:

text
0 6 * * 1 /Users/you/code/seo-audit/run_weekly.sh > /Users/you/code/seo-audit/cron.log 2>&1

Run the script once manually to confirm the whole chain works end to end; the first real test of cron itself is the next scheduled Monday run.

Verify it worked

You have three checkpoints.

After Step 3: a Markdown report on disk. Read it. Does it look like something you'd send?

After Step 4: an email in your inbox. Readable? Not spam-filtered?

After Step 5: the next Monday morning, the email arrives without you doing anything, and cron.log shows all four scripts completing.

Where this breaks

  • GSC data lag. Search Console typically has a 2–3 day delay. "Last 7 days" really means days minus-3 through minus-10. Don't debug "why didn't my launch from yesterday show up" — wait a few days.
  • API quota limits. GSC has per-day quotas. One site's audit stays well under them; ten audits may hit them. Batch or space them out.
  • Cron working directory. The cd line in run_weekly.sh is non-negotiable. Cron runs from the user's home directory by default; without cd, your script can't find the credentials file and fails silently. Cron's PATH is nearly empty too — that's what the export PATH line at the top of the script is for.
  • Email going to spam. Sending from a free gmail account at 6am with Markdown inline gets filtered. Use Resend or Postmark with a verified domain.
  • The prompt drifting off over months. Every time Claude produces a report that's hypey or misses a pattern, add that behavior to the prompt rules. This is a living document. Version the prompt in git.
  • Not handling missing data. On weeks when a client site has a GSC outage or you hit a 429 on Ahrefs, one of the JSON files won't exist. The load_json helper returns "not available" for missing files; the prompt rules tell Claude to carry on. Don't rip that out.


Want this built for you instead?

Let's talk about your AI + SEO stack

If you'd rather skip the how-to and have it shipped for you, that's what I do. Start a conversation and we'll figure out the fastest path to results.


Frequently asked

Can I do this with only GSC and no Ahrefs?

Yes. The script handles missing Ahrefs data gracefully — the prompt template tells Claude to note the missing source and continue. You lose backlink and competitor signals, but weekly rank and click trends alone are still valuable.

How do I stop the report from sounding like generic AI slop?

Tight prompt rules: ban hype words, require specific pages and numbers, forbid vague verbs like 'monitor' or 'review.' Every time the output drifts back into jargon, add the offending phrase to the prompt's don't-do list. Treat the prompt as a living document and version it in git.

What if GSC data is delayed and shows a big fake drop?

GSC has a 2–3 day lag, so 'last 7 days' really means days minus-3 through minus-10. Tell Claude this in the prompt, or pull days minus-3 through minus-10 explicitly so the delta comparison is apples to apples. Either works.
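With the get_data helper from Step 1, the explicit version is a two-line change in the __main__ block:

python
# shift both windows back 3 days so GSC's reporting lag
# doesn't read as a traffic drop
current = get_data(days_ago_start=10, days_ago_end=3)
previous = get_data(days_ago_start=17, days_ago_end=10)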

Does this work for multiple sites?

Yes — loop over a list of site URLs and save each report separately. Batch the email at the end so you get one email with all sites, instead of ten. For agency use, combining this with the Batch API halves the Anthropic bill, too.
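A rough sketch of the batched version — build_prompt here is a hypothetical helper that wraps the Step 3 template for one site; batch requests run asynchronously and are billed at half the standard rate:

python
from anthropic import Anthropic

client = Anthropic()
sites = ["site-a.com", "site-b.com", "site-c.com"]

batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": site,  # used to match results back to sites
            "params": {
                "model": "claude-sonnet-4-5-20250929",
                "max_tokens": 4000,
                # build_prompt(site): hypothetical wrapper around the
                # Step 3 template, one prompt per site
                "messages": [{"role": "user", "content": build_prompt(site)}],
            },
        }
        for site in sites
    ]
)
# Poll client.messages.batches.retrieve(batch.id) until it reports "ended",
# then pull each report from client.messages.batches.results(batch.id).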

What if cron doesn't fire at all?

Common cause: the working directory. Cron runs from the user's home folder, so without the explicit `cd` line at the top of the shell script, your credentials file is never found. Always redirect cron output to a log file and check it after the first scheduled run.
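A quick way to reproduce cron's stripped-down environment without waiting for Monday — env -i clears everything except what you explicitly pass (fair warning: a successful run sends the real email):

bash
env -i HOME="$HOME" /bin/bash /Users/you/code/seo-audit/run_weekly.sh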