MICROSIVE
Automation·Mar 28, 2026·12 min

n8n vs Zapier vs Make: real numbers from 23 client projects.

Cost, reliability, dev velocity, and the one factor nobody talks about: who maintains it after you leave.

We've built automations on n8n, Zapier, and Make across 23 client projects over the last three years.

This post is not a feature comparison. There are dozens of those. This post is about what we actually saw in production: what broke, what scaled, what the clients could maintain after we handed it over, and what it cost.

The tl;dr: the right tool depends almost entirely on who's going to own it after you build it. Everything else is secondary.

The projects

23 projects across three years. Roughly:

9 on Zapier → 8 on n8n (self-hosted or cloud) → 6 on Make (formerly Integromat)

Ranging from simple three-step automations (lead capture to CRM) to complex multi-day pipelines (document processing, reconciliation, multi-system orchestration).

Most of these projects also involved significant custom code at some step. We are not purely no-code/low-code shops — we use these tools where they add speed and readability, and write custom workers where they don't.

Zapier

What it's actually for: Teams that will own and modify the automation themselves, without a developer present.

Zapier's interface is the most readable of the three. Non-technical operators can look at a Zap and understand what it does. They can add steps, change the mapping, troubleshoot an error without help. This is a genuinely rare capability and it's why Zapier's pricing, which is the highest of the three, is often justified.

Where it breaks down:

Conditional logic. Zapier's built-in branching is limited. Logic with more than two branches gets unwieldy quickly. We've had clients try to build what was effectively a decision tree in Zapier and end up with eight separate Zaps that have implicit dependencies on each other and no obvious way to see the relationships between them.

Volume. Zapier is priced per task. At 10,000 tasks a month, Zapier is fine. At 500,000 tasks a month, Zapier's pricing becomes significant and the conversation about switching tools becomes urgent. Several clients discovered this only after the automation was built, when usage ran much higher than anticipated.

Error handling. Zapier's error reporting is getting better but it's still not great. When a step fails, the error is logged, you get an email, and you have to go find the failed task. On high-volume automations, this creates noise. We've had clients wake up to 200 failure notification emails because one mapping broke overnight.

Real numbers from our Zapier projects:

Average monthly task volume across 9 projects: ~85,000 tasks. Average monthly Zapier cost: ~$240. Average number of automation errors per month that required human intervention: 4. Average time-to-fix for a broken Zap: 45 minutes (including the time for a non-technical operator to diagnose and escalate).

Make

What it's actually for: Complex logic built by someone who's comfortable with visual programming, where the client has at least one moderately technical operator.

Make's visual editor shows the full flow as a connected graph. Unlike Zapier's list-of-steps format, Make lets you see branches, routers, and iterators as actual shapes. For flows with real complexity — conditional routing, looping, error handling branches — Make is significantly more readable than Zapier.

Make is also substantially cheaper than Zapier for equivalent volume.
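To make the pricing gap concrete, here's a small sketch. The per-unit rates are not vendor list prices — they're the effective rates implied by our own project averages quoted in this post (~$240 at ~85,000 Zapier tasks, ~$95 at ~320,000 Make operations), so treat them as illustrative assumptions only:

```javascript
// Illustrative cost comparison. Rates are effective averages from our own
// projects, NOT official vendor pricing — tiers and minimums will differ.
const effectiveRatePerUnit = {
  zapier: 240 / 85_000,  // ≈ $0.0028 per task
  make: 95 / 320_000,    // ≈ $0.0003 per operation
};

function monthlyCost(tool, unitsPerMonth) {
  return unitsPerMonth * effectiveRatePerUnit[tool];
}

for (const volume of [10_000, 100_000, 500_000]) {
  console.log(
    `${volume} units/mo -> Zapier ~$${monthlyCost('zapier', volume).toFixed(0)}, ` +
      `Make ~$${monthlyCost('make', volume).toFixed(0)}`
  );
}
```

At 500,000 units a month, the same sketch puts Zapier around $1,400 and Make around $150 — which is why the switching conversation gets urgent at that volume.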

Where it breaks down:

The learning curve is real. Make's interface is more powerful but harder to learn. Non-technical operators who can maintain a Zapier workflow often can't maintain an equivalent Make scenario without training. If the client's internal team is going to own this, they need to commit to learning Make's concepts. Some clients won't or can't do that.

The module library. Make has a large library of pre-built integrations, but it's less mature than Zapier's in several areas. We've hit gaps with legacy Indian enterprise software integrations that required us to use Make's HTTP module and build the integration manually. That's not a dealbreaker for a developer, but it adds time.

Error recovery. Make's error handling is better than Zapier's but still requires explicit construction. You have to build error routes and decide what happens when a step fails. That's the right design, but it means error handling is work you have to plan for, not a feature you get automatically.

Real numbers from our Make projects:

Average monthly operations across 6 projects: ~320,000 operations. Average monthly cost: ~$95. Average number of scenario errors per month requiring intervention: 7. (Higher than Zapier, but the errors were more specific and easier to diagnose.) Average time-to-fix: 25 minutes.

n8n

What it's actually for: Complex, high-volume, or sensitive automations where you want full control, self-hosting, and a developer to own and maintain it.

n8n is not a no-code tool despite what the marketing might suggest. It's a low-code tool with a visual interface. The visual interface is good. But the people who build and maintain n8n workflows at scale are developers or technical operators who are comfortable with JSON, regular expressions, and writing JavaScript when the visual nodes aren't enough.

The case for n8n comes down to a few things:

Self-hosting means your data doesn't transit someone else's infrastructure. For clients in healthcare or finance with data residency requirements, this matters. → No per-task pricing. You pay for the server. Volume is free. → Custom code nodes let you drop into JavaScript for the things that no-code can't express cleanly. → Credentials are stored in your own infrastructure, not a third-party SaaS.
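When the visual nodes run out, an n8n Code node takes plain JavaScript. A minimal sketch of the kind of transform we drop into one — the field names and the normalisation rule here are hypothetical; inside n8n the items arrive via `$input.all()` and you return an array of `{ json }` objects, so the logic is wrapped in a plain function to keep the example self-contained:

```javascript
// Sketch of Code-node logic: strip non-digits from a phone field before
// the next node. Field names ("phone") are illustrative, not from a real
// client workflow. In an actual n8n Code node, `items` would come from
// $input.all() and the function body would be the node's return value.
function normalisePhones(items) {
  return items.map((item) => {
    const digits = String(item.json.phone ?? '').replace(/\D/g, '');
    return { json: { ...item.json, phone: digits } };
  });
}

// Input shaped like n8n items:
const items = [{ json: { name: 'A', phone: '+91 (22) 555-0101' } }];
console.log(normalisePhones(items)[0].json.phone); // "91225550101"
```

This is exactly the category of logic that's awkward to express in pure visual nodes and trivial in a code node — and it's also why the maintainer needs to be comfortable reading JavaScript.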

Where it breaks down:

Operational overhead. Self-hosted n8n requires someone to maintain the server, monitor uptime, handle upgrades, and manage backups. This is not a significant burden for a team with a DevOps function, but it's real. n8n Cloud removes this cost but reintroduces per-execution pricing at a lower rate than Zapier.
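For a sense of scale, the baseline footprint is small. A sketch using n8n's published Docker image (image path, port, and data directory per n8n's docs at the time of writing) — the real operational cost lives in everything this sketch omits: TLS, authentication, a proper database backend, upgrades, and backups:

```shell
# Minimal self-hosted n8n sketch — not production-ready.
# Persist workflow data and credentials in a named volume.
docker volume create n8n_data
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```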

Debugging at scale. n8n's execution logs are good, but filtering and searching executions on a high-volume workflow is slower than it should be. We've had clients where finding a specific failed execution in a log of 50,000 is genuinely painful.

Onboarding non-technical operators. We've had two clients try to hand n8n workflows to non-technical team members after we built them. In both cases, the operators could run the workflows but couldn't meaningfully modify them. With Zapier, they could. This is a real limitation if the client's team doesn't include anyone technical.

Real numbers from our n8n projects:

Average monthly execution volume across 8 projects: ~1.8 million executions. Average monthly infrastructure cost: ~$45 (self-hosted on Railway). Average number of workflow errors per month requiring intervention: 11. Average time-to-fix: 35 minutes (faster diagnosis from execution logs, longer fix time due to code nodes).

The factor nobody talks about: maintenance ownership

Here's the thing the feature comparison tables don't capture.

We've built automations that break six months after we hand them over because the client's team couldn't maintain them.

We've built automations that the client's team modified after we left, broke them, couldn't diagnose the problem, and blamed the tool.

We've built automations that the client's team improved after we left, adding steps and integrations we hadn't built, because the tool was legible enough for them to understand.

The tool choice determines which of these outcomes is likely.

Our current process: before we recommend a tool, we ask two questions.

First: who in your team will own this after we hand it over? Get us a name.

Second: sit that person down with our project manager for thirty minutes and have them look at a sample workflow in each tool. Not to build anything. Just to look and respond. Which one do they understand? Which one makes them nervous?

The answer to those two questions determines the tool recommendation more than any feature comparison.

The actual decision framework

Use Zapier if: → The client's team will own and modify the automation → The team has non-technical operators → Volume is under ~200,000 tasks/month → The integration library covers everything you need → Budget for ongoing SaaS cost is available

Use Make if: → The logic is complex (multi-branch, iterative, conditional) → The client has at least one technically literate operator → Volume is high but not massive → Budget is a constraint and Zapier's pricing doesn't fit

Use n8n if: → A developer will own the workflow permanently → Data residency or privacy requirements favour self-hosting → Volume is very high and per-task pricing would be significant → Custom code nodes are needed for logic that's genuinely complex → The client is comfortable with infrastructure ownership

Use custom code (not these tools) if: → The automation logic is complex enough that the visual editor is hiding the complexity rather than reducing it → Latency requirements are strict (these tools add overhead) → The automation is core to the product, not peripheral to it

One more thing

None of these tools is a bad choice if you pick the right one for the right situation.

All of them are a bad choice if you pick based on what you know rather than what the client needs. We know n8n well. It's tempting to use it everywhere. The times we've done that have produced automations that work perfectly and that the client can't maintain.

The tool serves the client. Not the other way around.

Written by
Microsive Studio