Non-Stop Action on a Budget: GitHub Actions on Oracle Free Tier

Running CI/CD on someone else’s computer is convenient until it isn’t. Between minute pricing, queuing during busy periods, and limited customization, GitHub-hosted runners hit limits fast. This post breaks down how I run a self-hosted GitHub Actions fleet on an Oracle Cloud Ampere instance (4 cores, 24GB RAM, 150GB disk) managed entirely through GitOps. The goal: stop paying for CI and run it for free instead, with room to scale to Hetzner Cloud if the workload outgrows the free tier.

The Hardware

The entire setup lives on a single Oracle Cloud Infrastructure (OCI) Always Free Ampere A1 instance:

| Resource | Spec                   |
|----------|------------------------|
| CPU      | 4 Arm cores            |
| RAM      | 24 GB                  |
| Disk     | 150 GB                 |
| Cost     | $0 (Always Free tier)  |

Yes, actually free. Oracle’s free tier Arm instances are genuinely usable for real workloads, though you do trade x86 compatibility for Arm builds. For my use case (mostly Node.js and Rust containers), the performance is excellent.

The Architecture

The Oracle VM doesn’t run a full Kubernetes cluster. It runs a kubelet as a self-managed node. The control plane is managed by CloudFleet in their free tier. The VM just executes workloads; CloudFleet handles the API server, etcd, scheduler, and all the control-plane heavy lifting.

This gives me a hybrid setup:

| Component     | Provider                              | Role                                           |
|---------------|---------------------------------------|------------------------------------------------|
| Control Plane | CloudFleet (managed, free tier)       | API server, etcd, scheduler                    |
| Worker Node   | Oracle Cloud Ampere A1 (self-managed) | Runs CI pods via kubelet                       |
| GitOps        | Flux CD (self-managed)                | Reconciles manifests via HelmRelease resources |

┌─────────────────────────────────────────┐
│           CloudFleet (Free Tier)        │
│  ┌─────────────┐  ┌─────────────────┐   │
│  │  API Server │  │  etcd/Scheduler │   │
│  └──────┬──────┘  └─────────────────┘   │
└─────────┼───────────────────────────────┘
          │ Tailscale mesh
┌─────────┴───────────────────────────────┐
│      Oracle Cloud Ampere A1             │
│  ┌─────────────────────────────────┐    │
│  │         kubelet                 │    │
│  │  ┌────────────┐  ┌────────────┐ │    │
│  │  │ ARC        │  │ Runner Pods│ │    │
│  │  │ Controller │  │ (dind)     │ │    │
│  │  └────────────┘  └────────────┘ │    │
│  └─────────────────────────────────┘    │
│         Ubuntu LTS                      │
└─────────┬───────────────────────────────┘
          │ HTTPS (job polling)
┌─────────┴───────────────────────────────┐
│         GitHub (Actions Jobs)           │
└─────────────────────────────────────────┘

Why This Model?

CloudFleet abstracts away control-plane operations. I get a managed Kubernetes experience without paying for a managed control plane. The Oracle VM provides the compute, CloudFleet provides the orchestration. If the VM dies, I reprovision the node; if the control plane has issues, CloudFleet handles it.

The Runner Meta Chart: Managing Multiple Repos

The official ARC Helm chart deploys one AutoscalingRunnerSet per repository. Maintaining separate HelmRelease manifests for each repository is tedious and error-prone.

I wrote a small custom Helm chart (gha-runner-scale-set-meta) that generates ARC scale sets from a simple repo list. Unspecified values fall back to sensible defaults (minRunners: 0, maxRunners: 3):

repos:
  - name: web-frontend
  - name: api-service
  - name: cli-tool
  - name: docs-site
    minRunners: 0
    maxRunners: 1
  - name: integration-tests
    minRunners: 0
    maxRunners: 2
  - name: terraform-modules
    minRunners: 0
    maxRunners: 1
  - name: container-images
    minRunners: 0
    maxRunners: 1
  - name: shared-libs
    minRunners: 0
    maxRunners: 1

The chart template loops over this list and emits one HelmRelease per repo. Here is a simplified version of the template:

# templates/helmreleases.yaml
{{- range .Values.repos }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: {{ .name }}-runner
  namespace: arc-runners
spec:
  interval: 10m
  chart:
    spec:
      chart: gha-runner-scale-set
      version: "0.9.0"
      sourceRef:
        kind: HelmRepository
        name: actions-runner-controller
        namespace: flux-system
  values:
    githubConfigUrl: "https://github.com/your-org/{{ .name }}"
    githubConfigSecret: arc-github-token  # pre-created secret with GitHub credentials (name illustrative)
    minRunners: {{ .minRunners | default 0 }}
    maxRunners: {{ .maxRunners | default 3 }}
    runnerGroup: "default"
    containerMode:
      type: dind
    template:
      spec:
        containers:
          - name: runner
            image: ghcr.io/actions/actions-runner:latest
            resources:
              limits:
                cpu: "2"
                memory: "3Gi"
{{- end }}

This template is intentionally simple but designed to grow with your needs. The githubConfigUrl can be parameterized to support different users or organizations, not just a single hardcoded owner. You can also expand the values schema to expose per-repo runnerGroup assignments, custom resource requests and limits, or even machine-shape annotations for workload-aware scheduling. The meta chart starts as a thin wrapper, but it can evolve into a full configuration layer as your requirements get more specific.
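For completeness, the meta chart itself is also deployed through Flux rather than by hand. A sketch of the wiring, assuming the chart lives under ./charts in the same infra repo (the chart path and release name are illustrative):

```yaml
# Illustrative: HelmRelease deploying the meta chart from the infra repo.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: gha-runner-scale-set-meta
  namespace: arc-runners
spec:
  interval: 10m
  chart:
    spec:
      chart: ./charts/gha-runner-scale-set-meta  # path within the repo (assumed)
      sourceRef:
        kind: GitRepository
        name: github-actions
        namespace: flux-system
  values:
    repos:
      - name: web-frontend
      - name: docs-site
        maxRunners: 1
```

Flux renders the chart, which in turn emits one HelmRelease per repo, and Flux reconciles those too.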

Each generated release points at the official gha-runner-scale-set chart with consistent configuration:

  • Docker-in-Docker (dind) mode for true container builds
  • Resource limits: 2 CPU / 3Gi RAM per runner (fits ~2 concurrent runners on the 4c/24GB node)
  • Scale to zero: minRunners: 0 means idle repos spawn no runner pods, so they consume no extra compute beyond the baseline node

Why individual repo runners? ARC supports organization-level runners, which would let you manage a single runner pool for every repo in an organization. The catch: organization-level runners on private repos require a paid GitHub plan. Since most of my repos are private and the whole point of this setup is to avoid recurring costs, paying GitHub for runner access would defeat the purpose. Individual repo runners let me stay on the free plan while still getting self-hosted CI across all repositories. The meta chart is the compromise: a little more YAML, zero subscription fees.

Resource Math

With 4 Arm cores and 24GB RAM, the sizing works out cleanly. The node comfortably handles 2 concurrent standard runners with headroom for system components:

| Component                  | CPU     | RAM      | Notes                       |
|----------------------------|---------|----------|-----------------------------|
| ARC controller + listeners | ~500m   | ~256Mi   | Lightweight control plane   |
| Standard runner (×2)       | 2 each  | 3Gi each | Concurrent job limit        |
| System overhead            | ~1 core | ~2Gi     | kubelet, Ubuntu, monitoring |

For bursty workloads, maxRunners: 5 queues additional jobs until capacity frees up.
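As a sanity check, the limit totals from the table work out like this (a back-of-envelope script, not part of the deployment). Note that CPU limits are modestly overcommitted against the 4 physical cores; that is fine because limits are ceilings, not reservations, and two CI jobs rarely peg 2 full cores each at the same moment:

```python
# Back-of-envelope check of the resource table above (limits, not requests).
node_cpu, node_ram_gi = 4.0, 24.0

runners = 2
total_cpu = runners * 2.0 + 0.5 + 1.0    # runners + ARC controller + system
total_ram = runners * 3.0 + 0.25 + 2.0   # 3Gi each + ~256Mi + ~2Gi

print(f"CPU limits: {total_cpu} of {node_cpu} cores "
      f"({total_cpu / node_cpu:.0%} committed)")
print(f"RAM limits: {total_ram} of {node_ram_gi} Gi "
      f"({total_ram / node_ram_gi:.0%} committed)")
```

RAM is the comfortable dimension here; CPU is the practical ceiling on concurrency.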

GitOps Everything

All manifests are managed by Flux CD. The CI namespace reconciles via a GitRepository source and Kustomization resources.

The Flux configuration is straightforward:

# flux-system/github-actions.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: github-actions
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/your-org/infra-repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: github-actions
  namespace: flux-system
spec:
  interval: 10m
  path: ./path/to/manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: github-actions
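The Kustomization above expects a kustomization.yaml at the target path; a minimal sketch (file names are illustrative):

```yaml
# ./path/to/manifests/kustomization.yaml — illustrative layout
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespaces.yaml       # arc-systems and arc-runners
  - arc-controller.yaml   # HelmRelease for the ARC controller
  - runner-meta.yaml      # HelmRelease for the gha-runner-scale-set-meta chart
```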

Changes to runner configuration follow the standard GitOps flow:

  1. Edit the meta chart values or repo list
  2. Commit and push
  3. Flux detects the change within 1 minute
  4. ARC controller creates/updates/deletes runner scale sets automatically

No kubectl apply, no drift, no “works on my machine.”

What Works Well

  • Zero-cost compute: The Oracle free tier is genuinely usable for this workload
  • Scale-to-zero: Repos with infrequent activity consume no resources
  • Meta chart: Adding a new repo is one line of YAML
  • Flux + ARC: GitOps-native runner management with automatic updates
  • Renovate on self-hosted runners: Dependency updates (including the ARC chart itself) run daily on this same fleet, using itself as the execution environment
  • Arm performance: Node.js and Rust builds are surprisingly fast on Ampere

What’s Tricky

  • Arm compatibility: Not all images have Arm variants. I maintain a few custom runners with pre-installed tools.
  • Single node: No high availability. If the OCI instance goes down, CI stops until it recovers.
  • Disk pressure: 150GB fills up fast with Docker layers and build cache. Automated cleanup is essential.
  • GitHub API limits: ARC polls GitHub for jobs. With many repos, you can hit rate limits; the meta chart helps by centralizing configuration but doesn’t reduce API calls.

The Migration Reality Check

I already had a free tier Oracle instance running when I decided to add it to CloudFleet. It hosted Uptime Kuma, Cloudflare Tunnel, and Tailscale. I used it as a reverse proxy for various services. The plan was simple: reuse the existing machine, install the CloudFleet agent, and start scheduling CI workloads alongside my existing services.

It did not work. CloudFleet apparently runs its own Tailscale mesh internally for node connectivity and management. My existing Tailscale installation conflicted with theirs. Rather than debug the networking overlap, I decided to separate concerns completely.

I migrated the existing services off the machine, destroyed the original instance, and provisioned a fresh Oracle VM dedicated solely to CloudFleet workloads. CloudFleet connected cleanly to the new machine and began scheduling pods immediately.

The lesson: if you already run networking overlays or VPNs on a machine you want to add to CloudFleet, expect conflicts. It is cleaner to start fresh with a dedicated node.

This also reinforced that this is a self-managed node. CloudFleet handles the Kubernetes control plane and scheduling, but I am still responsible for the VM itself. OS updates, security patches, and disk cleanup are on me. The node does not manage itself.

Security on Self-Hosted Runners

Self-hosting CI shifts the security model entirely onto you. GitHub-hosted runners are ephemeral, isolated, and discarded after each job. Your infrastructure is none of those things by default.

The Privilege Problem

ARC lets you replicate the official GitHub-hosted runner environment closely, including the ability to run privileged containers and access host resources. This is powerful for Docker-in-Docker builds, but it is also dangerous. A compromised runner with privileged: true can escape its container, access the host node, and potentially pivot to other namespaces or connected repositories. Do not mount host paths or secrets into runner pods unless absolutely necessary.

Containment and Trust Boundaries

Even within the same CloudFleet-managed cluster, CI workloads run in dedicated namespaces (arc-systems, arc-runners) with network policies and RBAC restrictions. CI workloads are inherently untrusted: they execute arbitrary code from pull requests, and a malicious or compromised dependency can exfiltrate secrets, modify source code, or attack the build environment. On self-hosted runners, that attack surface includes your node and potentially your cluster.

Namespace isolation helps contain the blast radius, but it is not a guarantee. I layer several mitigations on top:

  • Network policies blocking egress from runner pods to internal services
  • No persistent volumes mounted into runner pods
  • Regular node rotation via VM recreation
  • Separate GitHub tokens with minimal scope for ARC registration
  • RBAC restrictions and the principle of least privilege
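The first bullet, egress filtering, is plain Kubernetes NetworkPolicy. A sketch of the idea (the CIDRs are illustrative; adjust them to your cluster's internal ranges, and note this only works with a CNI that enforces NetworkPolicy):

```yaml
# Illustrative: allow DNS and the public internet, block internal ranges.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: runner-egress
  namespace: arc-runners
spec:
  podSelector: {}          # applies to all runner pods in the namespace
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution.
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Allow the public internet, but block RFC 1918 internal ranges.
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
```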

Scaling Beyond the Free Tier

The Oracle free tier handles my current workload, but it is a single point of failure. When the time comes to scale, CloudFleet supports attaching nodes from multiple cloud providers to the same cluster. You can add Hetzner Cloud or EC2 instances as additional worker nodes without changing your GitOps configuration. CloudFleet handles the lifecycle of these machines, creating and deleting them as needed.

CloudFleet organizes additional capacity into fleets (similar to scaling groups), which define the maximum compute capacity you want available. Nodes within a fleet can have different machine shapes, from a budget CX23 (2 vCPU, 4GB RAM) to a larger CX33 (4 vCPU, 8GB RAM) or dedicated CPU instances. The fleet acts as a ceiling on total compute, not a uniform node pool. This means you can go from a single free Oracle node to a multi-cloud, multi-node CI fleet without changing your manifests. The cluster simply gains more capacity to schedule pods.

Note that CloudFleet’s free tier caps total compute at 24 vCPU. The Oracle Ampere A1 instance already consumes 4 of those, leaving 20 vCPU of headroom within the free tier before you need to upgrade.

The Cost Math: GitHub Actions vs. Hetzner

Right now, the entire CI fleet runs on a single free Oracle node. The question is: when does it make financial sense to add paid capacity?

Compare a Hetzner CX33 (4 vCPU, 8GB, ~$7.99/month after the April 2026 price increase) to a GitHub-hosted Linux 2-core runner at $0.36/hour. The break-even is roughly 22 hours of CI per month. If your repos burn through the included 2,000 GitHub minutes (33 hours) and you need more runtime, a Hetzner node becomes cheaper.
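The break-even arithmetic, using the numbers above:

```python
# Break-even: flat-rate Hetzner node vs. metered GitHub-hosted runner.
hetzner_per_month = 7.99  # CX33, ~$/month (post-increase price from above)
github_per_hour = 0.36    # 2-core hosted Linux runner, $/hour

break_even_hours = hetzner_per_month / github_per_hour
print(f"Break-even: {break_even_hours:.1f} CI hours/month")

included_hours = 2000 / 60  # free-plan included minutes, in hours
print(f"GitHub free minutes cover: {included_hours:.1f} hours")
```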

And unlike GitHub’s per-minute billing, the Hetzner node is yours 24/7. You can run multiple concurrent jobs, keep build caches warm, and schedule Renovate or other automation without worrying about metered costs.

One catch: Hetzner bills in full-hour increments. If you spin up a machine for 10 minutes, you pay for the full hour. For cost effectiveness, you should recycle machines rather than spin up fresh ones for short jobs. This makes Hetzner ideal for a persistent runner pool (like ARC’s scale sets) rather than truly ephemeral per-job instances.

Workload-Aware Scheduling

CloudFleet doesn’t just add generic capacity. You can annotate workloads to request specific machine shapes. A lightweight microservice might get a CX23, while a heavy integration test suite gets a CX33 or dedicated CPU instance. If you don’t specify an annotation, CloudFleet falls back to your resource requests and limits to pick an appropriate shape. If neither is provided, it chooses the cheapest option. This means you don’t have to manually manage node pools or taints. Just declare what you need, and CloudFleet provisions the right hardware.
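The requests-and-limits fallback is standard Kubernetes, so sizing a heavy runner is just a per-repo override. A sketch of values that would steer CloudFleet toward a 4 vCPU / 8 GB class machine (CloudFleet's explicit shape annotation key is not shown here; check their docs for it — only the resource sizing below is vanilla Kubernetes):

```yaml
# Illustrative per-repo override in the meta chart values: a runner sized so
# the requests/limits fallback lands on a CX33-class (4 vCPU / 8 GB) machine.
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        resources:
          requests:
            cpu: "3"
            memory: "6Gi"
          limits:
            cpu: "4"
            memory: "8Gi"
```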

The Hybrid Workload Split

The real architectural win is splitting persistent baseline services from bursty CI workloads. The Oracle node runs 24/7 and hosts the always-on infrastructure:

  • ARC controller and listeners (must poll GitHub continuously for job events)
  • Test databases and caches (PostgreSQL, Redis, or MinIO for integration tests)
  • Build artifact caches (sccache or registry proxies to speed up repeat builds)
  • Flux CD and monitoring (the GitOps agents and lightweight Prometheus stack)

These are lightweight, persistent services that cost nothing extra on the Oracle node but would burn through metered GitHub minutes if they ran on hosted runners.

When a heavy CI job fires (e.g., a full Rust release build, Docker image compilation, or parallel test matrix), CloudFleet can spin up Hetzner nodes to handle the burst. The job schedules on the fresh capacity, runs, and the node can be recycled. You get a free baseline with elastic burst capacity, rather than paying for compute that sits idle 90% of the time.

The Single Node Risk

The Oracle free tier is genuinely free, but it’s a single node. If it goes down, your persistent services and baseline CI capacity disappear. Adding a paid Hetzner node pool to the same CloudFleet cluster provides redundancy and scaling without giving up the free Oracle node. The cluster can tolerate losing either provider and still schedule workloads. This is the difference between a toy homelab and a production-ready setup.

The Config

If you want to replicate this setup, the key pieces are all shown above: the Flux GitRepository and Kustomization pair, the gha-runner-scale-set-meta values file, and the HelmRelease template it generates.

Conclusion

Self-hosting CI doesn’t have to mean babysitting Jenkins. With GitHub ARC, Flux CD, and a small custom chart, I get elastic, GitOps-managed runners on free hardware.

If you’re running Oracle’s free tier and want CI that scales to zero, this pattern works.


Hardware: Oracle Cloud Ampere A1 (4c/24GB/150GB) | Control Plane: CloudFleet (free tier) | OS: Ubuntu LTS | GitOps: Flux CD | CI: GitHub Actions (ARC)