I eliminated my dependency on GitLab's CI/CD free tier minutes by deploying self-hosted runners on a Docker Swarm cluster. Then, three hours later, Contabo had an outage. The universe has a sense of humor.
The problem with free tiers
GitLab gives you 400 CI/CD minutes per month on the free tier. That sounds generous until you have 6 groups with active projects, each running lint, test, and build stages on every push. With a test suite that takes 7 minutes and an image build that takes 4, you can blow through those 400 minutes in a week of active development.
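Back-of-envelope, using the figures above (lint time left out, so this is optimistic):

```shell
# Rough burn-rate math: test ~7 min + build ~4 min per push
per_push=$((7 + 4))          # minutes consumed per push, ignoring lint
quota=400                    # free-tier CI/CD minutes per month
pushes=$((quota / per_push))
echo "~${pushes} pushes exhaust the monthly quota"
```

Thirty-odd pushes a month across six active groups is nothing. A single busy week gets there.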
The options are predictable: pay for Premium ($29/user/month), buy extra CI minute packs, or — the one nobody talks about — bring your own runners.
I chose option three. Not because I'm cheap (okay, partly), but because I already had the infrastructure sitting there, underutilized.
The setup: 20 minutes to freedom
I run a 3-node Docker Swarm cluster on Contabo. The leader has 8 CPUs and 32GB of RAM, the workers have 4 CPUs and 8GB each. Most of the time, they're running web apps that barely touch 20% CPU. That's a lot of idle compute — perfect for CI jobs.
The entire setup took about 20 minutes:
1. Deploy the runner as a Swarm service
Instead of installing the GitLab runner directly on the host, I deployed it as a Docker service. This gives me automatic restarts, easy updates, and the ability to scale replicas if needed.
docker service create \
--name gitlab-runner \
--mount type=bind,source=/opt/gitlab-runner/config,target=/etc/gitlab-runner \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--constraint 'node.role==manager' \
gitlab/gitlab-runner:latest
The key detail: mounting the Docker socket. This allows the runner to spawn Docker-in-Docker (DinD) containers for each job, which means every CI job runs in isolation.
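"Easy updates" and "scale replicas" are literal one-liners with Swarm (service name taken from the create command above):

```shell
# Roll the runner to a newer image; Swarm replaces the container for you
docker service update --image gitlab/gitlab-runner:latest gitlab-runner

# Add a second runner container if one can't keep up
# (rarely needed -- tune `concurrent` in config.toml first)
docker service scale gitlab-runner=2
```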
2. Register runners for each group
GitLab lets you create runners at three levels: instance, group, or project. Group-level is the sweet spot — one runner serves all projects within a group, but different groups stay isolated.
I registered 6 runners via the GitLab API, one per group. Each got its own authentication token. The config ended up looking like this:
concurrent = 8
check_interval = 0

[[runners]]
  name = "swarm-runner-zczoft-odoo"
  executor = "docker"
  [runners.docker]
    image = "python:3.12"
    privileged = true
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
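The registration loop looked roughly like this — a sketch, not my exact script. The group ID and `$GITLAB_TOKEN` (a personal access token with the `create_runner` scope) are placeholders; the `POST /user/runners` endpoint returns the per-runner authentication token:

```shell
GROUP_ID=42   # placeholder: your group's numeric ID

# Create a group-level runner via the GitLab API; the response contains its token
RUNNER_TOKEN=$(curl --silent --request POST \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --data "runner_type=group_type&group_id=${GROUP_ID}" \
  "https://gitlab.com/api/v4/user/runners" | jq -r '.token')

# Register inside the running container; this appends a [[runners]] block to config.toml
docker exec "$(docker ps -q -f name=gitlab-runner)" gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com" \
  --token "${RUNNER_TOKEN}" \
  --executor "docker" \
  --docker-image "python:3.12" \
  --docker-privileged
```

Repeat once per group and you end up with one `[[runners]]` block per group, each with its own token.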
The privileged = true flag is necessary for Docker-in-Docker. Without it, any job that builds Docker images will fail. Yes, it's a security trade-off — but on infrastructure you control, it's a reasonable one.
3. Disable shared runners
This is the step people forget. If you don't disable GitLab's shared runners at the group level, your jobs might still run on their infrastructure and consume your free minutes. I set each group to disabled_and_unoverridable — meaning no project within the group can accidentally re-enable them.
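This is a single API call per group (same `$GITLAB_TOKEN` and group ID placeholders as before):

```shell
# Disable shared runners for the group and forbid projects from re-enabling them
curl --request PUT \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "https://gitlab.com/api/v4/groups/${GROUP_ID}?shared_runners_setting=disabled_and_unoverridable"
```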
The first pipeline
The test run was the zczoft-odoo/accounting project — an Odoo module with a 3-stage pipeline: lint, test, and build.
- Lint: 105 seconds — Ruff checking Python code
- Test: 426 seconds — Full test suite with Docker-in-Docker (Odoo + PostgreSQL containers spun up inside the job)
- Build: 219 seconds — Docker image built and pushed to GitLab's container registry
All three stages passed on the first try. The runner picked up the job within seconds. Total pipeline time: about 12 minutes. On GitLab's shared runners, the same pipeline used to take 18-20 minutes because of queue wait times.
That's the hidden benefit: not just unlimited minutes, but faster pipelines. No queue. Your runner is always available because it's yours.
Then the server went down
Three hours after deploying the runner, feeling quite pleased with my infrastructure resilience, Contabo's EU datacenter had an outage. About 45 minutes of downtime. No mention on their status page, naturally.
My runner, my web apps, my Docker Swarm — all of it, gone. For 45 minutes, I had zero CI capability. With GitLab's shared runners, I would have had exactly the same CI capability as before: limited, but available.
The irony wasn't lost on me. I'd spent the morning writing about how self-hosting gives you control and resilience. The universe decided to run a live QA test on that claim.
What the outage actually taught me
Here's the thing, though: the outage didn't change the math. Let me explain.
With shared runners, a 45-minute Contabo outage would have affected my web apps but not my CI. With self-hosted runners, it affected both. That's a real trade-off.
But consider the full picture:
- Contabo outages in the last 6 months: 2 (totaling ~90 minutes)
- Times I hit the 400-minute CI limit and had to wait: every month
- Cost of GitLab Premium for one user: $29/month = $348/year
- Cost of the Contabo VPS that's already running: €0 additional
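The same comparison in shell arithmetic, figures straight from the list above:

```shell
premium_yearly=$((29 * 12))   # one Premium seat: $348/year, every year
outage_minutes=90             # observed Contabo downtime over ~6 months
echo "Premium: \$${premium_yearly}/year vs. self-hosted: ~${outage_minutes} min downtime per half-year, \$0 extra"
```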
I'll take 90 minutes of downtime per half-year over a recurring $348/year bill. Especially when those 90 minutes happened during off-hours and affected zero production deployments.
The real cost analysis
Let's talk about what GitLab's free tier actually limits and what it doesn't:
What the free tier gives you (that you keep)
- Unlimited private repositories
- 5GB storage per project
- Container registry (10GB)
- Issue tracking, merge requests, wiki
- CI/CD pipelines (just not the minutes to run them)
What Premium adds (that you probably don't need)
- 10,000 CI/CD minutes — replaced by self-hosted runner
- Code owners — nice, but not essential for small teams
- Protected environments — you can gate deploys other ways
- Merge request approvals — useful, but livable without
The only reason most solo developers or small teams upgrade to Premium is CI minutes. Remove that bottleneck, and the free tier becomes genuinely complete.
What I'd do differently
If I were starting from scratch, here's the setup I'd recommend:
For solo developers
A single €4-5/month VPS (Contabo, Hetzner, Netcup — stay EU for GDPR) running the GitLab runner as a Docker container. No Swarm needed. Set concurrent = 2 so two jobs can run in parallel, and you're set.
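A minimal config for that setup might look like this (runner name and token are placeholders; `concurrent = 2` is the only line that matters):

```shell
# Sketch of a single-VPS config.toml -- replace the token with the one
# returned at registration time
cat > config.toml <<'EOF'
concurrent = 2
check_interval = 0

[[runners]]
  name = "solo-vps-runner"
  url = "https://gitlab.com"
  token = "REPLACE_WITH_RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "python:3.12"
    volumes = ["/cache"]
EOF
```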
For small teams (2-5 devs)
Docker Swarm with 2-3 nodes. Runner on the manager node, concurrent = 4-8 depending on hardware. Consider a second runner on a different provider for redundancy — my Contabo outage would have been a non-issue with a backup runner on Hetzner.
For everyone
- Use group-level runners, not project-level — less config, same isolation
- Always disable shared runners after registering yours — prevent accidental minute consumption
- Set privileged = true only if you need DinD — if you're just running tests without building images, you don't need it
- Mount a cache volume — dramatically speeds up repeated builds
The takeaway
Self-hosted runners aren't about being anti-cloud or anti-GitLab. They're about recognizing that CI/CD minutes are the one artificial scarcity in an otherwise generous free tier. If you already have compute sitting around — a VPS, a home server, a Raspberry Pi even — there's no reason to let a monthly minute counter dictate your development pace.
Yes, you take on the responsibility of uptime. Yes, the universe might humble you with an outage three hours after you brag about it. But the math works out. And the speed improvement — no queue, instant job pickup — is something you don't get back once you've tasted it.
My 6 runners across 6 groups have been humming along ever since. Total additional cost: zero. Total CI minutes used this month on GitLab's free tier: also zero.
The free tier, it turns out, is more than enough. You just have to stop playing by the rules they set for it.