
How We Built a Blog Automation System That Costs 5 Euros Per Month

Pepijn van Unen

Why We Built This

Content marketing works, but it is expensive. A decent freelance writer charges EUR 150-300 per blog post. An agency charges EUR 300-800. If you want 8 posts per month, you are looking at EUR 1,200 to EUR 6,400 monthly before you factor in topic research, editing, and publishing.

We wanted to see if we could build a system that produces draft-quality blog posts -- properly researched, SEO-targeted, and ready for human review -- at a fraction of that cost. The answer turned out to be yes, at EUR 4.70 per month for 8 posts.

This post walks through the architecture, the actual costs, and the quality control measures that make it work. We are not sharing client details, but the system runs in production today.

The Architecture

The pipeline has five stages, each handled by a separate component. Everything runs on Google Apps Script (free) with AI API calls for the intelligence layer.

Stage 1: Topic Discovery

The system pulls potential topics from six sources:

  • Google Search Console queries (what people already find the site for)
  • Competitor blog RSS feeds (what topics are getting traction in the niche)
  • Industry news aggregators (trending themes)
  • Keyword research API data (search volume and difficulty scores)
  • Internal content gap analysis (topics the site should cover but does not)
  • Seasonal and calendar-based triggers (industry events, annual cycles)

Each source feeds into a master topic list with metadata: estimated search volume, keyword difficulty, relevance score, and whether the topic has already been covered.
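The merge step can be sketched as a small dedupe-and-enrich function. This is illustrative, not the production schema: the record shape (`topic`, `searchVolume`, `difficulty`, `source`) and the idea of bumping relevance when multiple sources surface the same topic are assumptions for the example.

```javascript
// Sketch of the Stage 1 merge: several source arrays in, one
// deduplicated master list out. Field names are illustrative.
function buildMasterTopicList(sources, coveredTopics) {
  const seen = new Map();
  for (const record of sources.flat()) {
    const key = record.topic.trim().toLowerCase();
    const existing = seen.get(key);
    if (existing) {
      // The same topic appearing in several sources is a signal:
      // raise its relevance and remember where it came from.
      existing.relevance += 1;
      existing.sources.push(record.source);
    } else {
      seen.set(key, {
        topic: record.topic,
        searchVolume: record.searchVolume ?? 0,
        difficulty: record.difficulty ?? null,
        relevance: 1,
        sources: [record.source],
        alreadyCovered: coveredTopics.has(key),
      });
    }
  }
  return [...seen.values()];
}
```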

Stage 2: AI Filtering and Prioritization

Not every topic is worth writing about. An AI model reviews the master list and filters based on:

  • Search intent alignment (is someone searching this likely to become a customer?)
  • Content gap opportunity (can we say something the top 5 results do not?)
  • Effort vs. impact (high volume + low difficulty = priority)
  • Topical authority fit (does this strengthen the overall site theme?)

This step reduces a list of 50-100 potential topics down to 8-12 per month. A human reviews the shortlist before generation begins -- this is the one manual checkpoint in the pipeline.
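To make the four criteria concrete, here is a hypothetical scoring function. The weights and the formula are illustrative only: in the real pipeline an AI model makes this judgment, and the `intentAlignment` and `contentGap` fields are assumed to be model-produced scores between 0 and 1.

```javascript
// Hypothetical priority score combining the four filter criteria.
// The production system delegates this to an AI model; a fixed
// formula like this is just a sketch of the trade-off.
function priorityScore(topic) {
  const volumeScore = Math.log10(1 + topic.searchVolume); // reward demand
  const easeScore = (100 - topic.difficulty) / 100;       // reward low difficulty
  return volumeScore * easeScore * (topic.intentAlignment + topic.contentGap);
}

// Sort descending by score and keep the top n for human review.
function shortlist(topics, n) {
  return [...topics]
    .sort((a, b) => priorityScore(b) - priorityScore(a))
    .slice(0, n);
}
```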

Stage 3: Content Generation

This is the most technically interesting stage. The AI generates each post with a detailed system prompt that enforces specific rules:

Anti-AI-pattern rules. The system prompt explicitly bans phrases like "In today's fast-paced world," "Let's dive in," "game-changer," "leverage," and about 40 other patterns that signal AI-generated content to readers and to Google's spam policies. The prompt instructs the model to write like a practitioner sharing experience, not a marketer filling a word count.
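Beyond instructing the model, rules like these can be enforced mechanically with a lint pass over the draft. The sketch below uses the four phrases named above; the full production list has around 40 entries, and the function shape is an assumption for illustration.

```javascript
// Minimal lint pass for banned AI-signal phrases. A draft that
// trips any pattern can be flagged or regenerated.
const BANNED_PATTERNS = [
  /in today's fast-paced world/i,
  /let's dive in/i,
  /game-?changer/i,
  /\bleverage\b/i,
];

// Returns the source strings of every pattern found in the draft.
function findAiPatterns(draft) {
  return BANNED_PATTERNS.filter((re) => re.test(draft)).map((re) => re.source);
}
```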

Structure requirements. Each post follows a brief that specifies: target keyword, secondary keywords, required sections, internal linking targets, word count range, and tone guidelines. The brief is generated programmatically from the topic metadata.

Source grounding. The system pulls top-ranking content for the target keyword and includes it as reference context. This means the AI can cite real statistics, reference actual tools, and avoid making claims that contradict established information.

Stage 4: WordPress Draft Publishing

Generated posts are pushed to WordPress as drafts via the REST API. They include:

  • Formatted content with proper heading hierarchy
  • SEO metadata (title tag, meta description, slug)
  • Suggested featured image prompts (actual image generation is separate)
  • Internal links to existing content
  • Category and tag assignments

Posts are never published automatically. They sit in draft status for human review.
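The draft-only rule is easiest to enforce by hardcoding it into the payload builder. Field names below follow the public WordPress REST schema for `POST /wp-json/wp/v2/posts`; the input object shape is an assumption for this sketch, and the actual HTTP call (via `UrlFetchApp` in Apps Script) is omitted.

```javascript
// Build the body for a WordPress REST API draft. "status: draft"
// is what enforces the human review gate -- the code path that
// sets "publish" simply does not exist.
function buildDraftPayload(post) {
  return {
    title: post.title,
    content: post.html,
    excerpt: post.metaDescription,
    slug: post.slug,
    status: "draft", // never "publish"; a human flips this in wp-admin
    categories: post.categoryIds,
    tags: post.tagIds,
  };
}
```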

Stage 5: Performance Tracking

After a post is published, the system monitors its performance via Google Search Console:

  • Impressions and clicks for target keywords
  • Average position over time
  • Click-through rate compared to position benchmarks

This data feeds back into Stage 1. If a published post ranks for unexpected keywords, those keywords become candidates for new content. If a post underperforms expectations, the system flags it for optimization.
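The feedback step can be sketched as a pure function over Search Console rows. The row shape (`query`, `clicks`, `impressions`, `position`) mirrors the Search Analytics API; the thresholds for "unexpected keyword" are illustrative, not the production values.

```javascript
// Sketch of the Stage 5 feedback loop: split GSC rows into
// performance on target keywords vs. unexpected keywords worth
// feeding back into topic discovery. Thresholds are illustrative.
function analyzeSearchData(rows, targetKeywords) {
  const targets = new Set(targetKeywords.map((k) => k.toLowerCase()));
  const newCandidates = [];
  let targetClicks = 0;
  for (const row of rows) {
    if (targets.has(row.query.toLowerCase())) {
      targetClicks += row.clicks;
    } else if (row.impressions >= 100 && row.position <= 20) {
      // Ranking for a keyword we never targeted: a Stage 1 candidate.
      newCandidates.push(row.query);
    }
  }
  return { newCandidates, underperforming: targetClicks === 0 };
}
```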

The Actual Cost Breakdown

Here is what this costs to run for 8 posts per month:

| Component | Monthly Cost |
|---|---|
| AI API calls (topic filtering) | EUR 0.30 |
| AI API calls (content generation, ~1,200 words x 8) | EUR 3.20 |
| AI API calls (SEO metadata generation) | EUR 0.15 |
| Google Apps Script hosting | EUR 0.00 |
| Google Search Console API | EUR 0.00 |
| WordPress REST API | EUR 0.00 |
| Keyword research API (free tier) | EUR 0.00 |
| Total | EUR 3.65 - 4.70 |

The range depends on article length and how many revision passes the generation step needs. Some topics require more context, which means longer prompts and higher token costs.
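The dependence on prompt and article length is simple token arithmetic. The sketch below uses illustrative per-million-token prices, not any provider's actual rate card, and the parameter names are assumptions for the example.

```javascript
// Back-of-the-envelope generation cost: tokens in and out per post,
// times per-million-token prices. All numbers fed in are illustrative.
function monthlyGenerationCost(posts, opts) {
  const inputCost = (posts * opts.promptTokens * opts.inputPricePerMTok) / 1e6;
  const outputCost = (posts * opts.outputTokensPerPost * opts.outputPricePerMTok) / 1e6;
  return inputCost + outputCost;
}
```

Longer source-grounding context raises `promptTokens`, which is exactly why the monthly total moves within a range rather than sitting at a fixed number.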

Compare this to alternatives:

| Approach | Monthly Cost (8 posts) |
|---|---|
| Freelance writers (mid-range) | EUR 1,200 - 2,400 |
| Content agency | EUR 2,400 - 6,400 |
| In-house writer (partial FTE) | EUR 1,500 - 2,500 |
| This system | EUR 3.65 - 4.70 |

The comparison is not entirely fair -- a skilled freelancer produces publish-ready content, while this system produces reviewed drafts. But the review step takes 15-20 minutes per post, not the 3-4 hours it takes to write one from scratch. For ROI calculations on automation projects like this, the numbers are hard to argue with.

Quality Control

Automating content generation without quality control produces garbage. Here is what keeps the output usable:

Human review gate. Nothing publishes without a person reading it. The system is a draft factory, not a publishing bot.

Factual grounding. By feeding real source material into the generation prompt, the system avoids the hallucination problem that plagues zero-context AI writing. Claims are anchored to existing, rankable content.

Readability scoring. Each draft is scored for readability (Flesch-Kincaid) and flagged if it falls outside the target range. Overly complex or suspiciously simple content gets regenerated.
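A Flesch Reading Ease gate fits in a few lines. The syllable counter below is a crude vowel-group heuristic (production code would use a proper library), and the target range bounds are illustrative.

```javascript
// Crude syllable heuristic: count groups of consecutive vowels.
function countSyllables(word) {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 0);
}

// Standard Flesch Reading Ease formula:
// 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
function fleschReadingEase(text) {
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim()).length || 1;
  const words = text.match(/[a-zA-Z']+/g) || [];
  const syllables = words.reduce((n, w) => n + countSyllables(w), 0);
  return 206.835
    - 1.015 * (words.length / sentences)
    - 84.6 * (syllables / Math.max(1, words.length));
}

// Drafts outside the band get flagged for regeneration.
// Bounds here are illustrative, not the production values.
function inTargetRange(score, lo = 50, hi = 75) {
  return score >= lo && score <= hi;
}
```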

Duplicate content checks. Before generating, the system checks if the target keyword overlaps significantly with existing site content. This prevents keyword cannibalization, where multiple pages compete for the same query.
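One simple way to implement that check is word-set overlap between the candidate keyword and keywords the site already targets. Jaccard similarity and the 0.6 threshold below are stand-ins for whatever the real check uses.

```javascript
// Jaccard overlap between the word sets of two keywords: shared
// words divided by total distinct words.
function keywordOverlap(a, b) {
  const wa = new Set(a.toLowerCase().split(/\s+/));
  const wb = new Set(b.toLowerCase().split(/\s+/));
  const shared = [...wa].filter((w) => wb.has(w)).length;
  return shared / new Set([...wa, ...wb]).size;
}

// True if the candidate overlaps too heavily with anything the
// site already targets -- the cannibalization signal.
function cannibalizes(candidate, existingKeywords, threshold = 0.6) {
  return existingKeywords.some((k) => keywordOverlap(candidate, k) >= threshold);
}
```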

Style enforcement. The anti-AI-pattern rules are not optional. The generation prompt is the most carefully engineered component of the system, and it gets updated every time we spot a new pattern that reads as obviously machine-generated.

What We Learned

The prompt is the product. 80% of the quality comes from how the generation prompt is engineered. The hard part was not the generation architecture but the prompt itself, which took weeks of iteration to get right.

Topic selection matters more than writing quality. A mediocre post on a topic with genuine search demand outperforms a brilliant post that nobody is searching for. The filtering stage is where most of the value is created.

Monitoring closes the loop. Without performance tracking feeding back into topic discovery, you are flying blind. The system gets smarter over time because it learns which topics actually drive traffic.

This is a clear case where building a custom solution outperformed off-the-shelf tools. There are SaaS products that do parts of this pipeline, but none that combine topic discovery, generation, publishing, and tracking at this cost. The total build took about three weeks, and it paid for itself in the first month.

Want results like this?

Book a free 30-minute call. We'll map your processes and tell you honestly which ones are worth automating.
