<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Oliver Conzen</title><description>Personal website and blog about technology, design, and everything in between.</description><link>https://oliverconzen.de/</link><language>en</language><item><title>Non-Stop Action on a Budget: GitHub Actions on Oracle Free Tier</title><link>https://oliverconzen.de/blog/github-actions-oracle-free-tier/</link><guid isPermaLink="true">https://oliverconzen.de/blog/github-actions-oracle-free-tier/</guid><description>Self-hosted GitHub Actions on Oracle Cloud Free Tier with GitOps. Scale to Hetzner when you outgrow the free tier.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Running CI/CD on someone else&apos;s computer is convenient until it isn&apos;t. Between minute pricing, queuing during busy periods, and limited customization, GitHub-hosted runners hit limits fast. This post breaks down how I run a self-hosted GitHub Actions fleet on an Oracle Cloud Ampere instance (4 cores, 24GB RAM, 150GB disk) managed entirely through GitOps. The goal: stop paying for CI and run it for free instead, with room to scale to Hetzner Cloud if the workload outgrows the free tier.&lt;/p&gt;
&lt;h2&gt;The Hardware&lt;/h2&gt;
&lt;p&gt;The entire setup lives on a single Oracle Cloud Infrastructure (OCI) Always Free Ampere A1 instance:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource&lt;/th&gt;
&lt;th&gt;Spec&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;4 Arm cores&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;24 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disk&lt;/td&gt;
&lt;td&gt;150 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;$0 (Always Free tier)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Yes, actually free. Oracle&apos;s free tier Arm instances are genuinely usable for real workloads, though you do trade x86 compatibility for Arm builds. For my use case (mostly Node.js and Rust containers), the performance is excellent.&lt;/p&gt;
&lt;h2&gt;The Architecture&lt;/h2&gt;
&lt;p&gt;The Oracle VM doesn&apos;t run a full Kubernetes cluster. It runs a &lt;strong&gt;kubelet&lt;/strong&gt; as a self-managed node. The control plane is managed by &lt;a href=&quot;https://cloudfleet.ai&quot;&gt;CloudFleet&lt;/a&gt; on its free tier. The VM just executes workloads; CloudFleet handles the API server, etcd, scheduler, and all the control-plane heavy lifting.&lt;/p&gt;
&lt;p&gt;This gives me a hybrid setup:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Control Plane&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CloudFleet (managed, free tier)&lt;/td&gt;
&lt;td&gt;API server, etcd, scheduler&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worker Node&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Oracle Cloud Ampere A1 (self-managed)&lt;/td&gt;
&lt;td&gt;Runs CI pods via kubelet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitOps&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://fluxcd.io&quot;&gt;Flux CD&lt;/a&gt; (self-managed)&lt;/td&gt;
&lt;td&gt;Reconciles manifests via &lt;a href=&quot;https://fluxcd.io/flux/components/helm/helmreleases/&quot;&gt;HelmRelease&lt;/a&gt; resources&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;pre&gt;&lt;code&gt;┌─────────────────────────────────────────┐
│           CloudFleet (Free Tier)        │
│  ┌─────────────┐  ┌─────────────────┐   │
│  │  API Server │  │  etcd/Scheduler │   │
│  └──────┬──────┘  └─────────────────┘   │
└─────────┼───────────────────────────────┘
          │ Tailscale mesh
          ▼
┌─────────────────────────────────────────┐
│      Oracle Cloud Ampere A1             │
│  ┌─────────────────────────────────┐    │
│  │         kubelet                 │    │
│  │  ┌────────────┐ ┌────────────┐  │    │
│  │  │ ARC        │ │ Runner Pods│  │    │
│  │  │ Controller │ │ (dind)     │  │    │
│  │  └────────────┘ └────────────┘  │    │
│  └─────────────────────────────────┘    │
│         Ubuntu LTS                      │
└─────────────────────────────────────────┘
          │
          ▼
┌─────────────────────────────────────────┐
│         GitHub (Actions Jobs)           │
└─────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Why This Model?&lt;/h3&gt;
&lt;p&gt;CloudFleet abstracts away control-plane operations. I get a managed Kubernetes experience without paying for a managed control plane. The Oracle VM provides the compute, CloudFleet provides the orchestration. If the VM dies, I reprovision the node; if the control plane has issues, CloudFleet handles it.&lt;/p&gt;
&lt;h2&gt;The Runner Meta Chart: Managing Multiple Repos&lt;/h2&gt;
&lt;p&gt;The official &lt;a href=&quot;https://github.com/actions/actions-runner-controller/tree/master/charts&quot;&gt;ARC Helm chart&lt;/a&gt; deploys one &lt;code&gt;AutoscalingRunnerSet&lt;/code&gt; per repository. Maintaining separate &lt;a href=&quot;https://fluxcd.io/flux/components/helm/helmreleases/&quot;&gt;HelmRelease&lt;/a&gt; manifests for each repository is tedious and error-prone.&lt;/p&gt;
&lt;p&gt;I wrote a small custom Helm chart (&lt;code&gt;gha-runner-scale-set-meta&lt;/code&gt;) that generates ARC scale sets from a simple repo list. Unspecified values fall back to sensible defaults (&lt;code&gt;minRunners: 0&lt;/code&gt;, &lt;code&gt;maxRunners: 3&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;repos:
  - name: web-frontend
  - name: api-service
  - name: cli-tool
  - name: docs-site
    minRunners: 0
    maxRunners: 1
  - name: integration-tests
    minRunners: 0
    maxRunners: 2
  - name: terraform-modules
    minRunners: 0
    maxRunners: 1
  - name: container-images
    minRunners: 0
    maxRunners: 1
  - name: shared-libs
    minRunners: 0
    maxRunners: 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The chart template loops over this list and emits one &lt;code&gt;HelmRelease&lt;/code&gt; per repo. Here is a simplified version of the template:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# templates/helmreleases.yaml
{{- range .Values.repos }}
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: {{ .name }}-runner
  namespace: arc-runners
spec:
  interval: 10m
  chart:
    spec:
      chart: gha-runner-scale-set
      version: &quot;0.9.0&quot;
      sourceRef:
        kind: HelmRepository
        name: actions-runner-controller
        namespace: flux-system
  values:
    githubConfigUrl: &quot;https://github.com/your-org/{{ .name }}&quot;
    minRunners: {{ .minRunners | default 0 }}
    maxRunners: {{ .maxRunners | default 3 }}
    runnerGroup: &quot;default&quot;
    containerMode:
      type: dind
    template:
      spec:
        containers:
          - name: runner
            image: ghcr.io/actions/actions-runner:latest
            resources:
              limits:
                cpu: &quot;2&quot;
                memory: &quot;3Gi&quot;
{{- end }}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This template is intentionally simple but designed to grow with your needs. The &lt;code&gt;githubConfigUrl&lt;/code&gt; can be parameterized to support different users or organizations, not just a single hardcoded owner. You can also expand the values schema to expose per-repo &lt;code&gt;runnerGroup&lt;/code&gt; assignments, custom resource requests and limits, or even machine-shape annotations for workload-aware scheduling. The meta chart starts as a thin wrapper, but it can evolve into a full configuration layer as your requirements get more specific.&lt;/p&gt;
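&lt;p&gt;As a sketch of that evolution, the values schema could grow like this. The &lt;code&gt;owner&lt;/code&gt;, &lt;code&gt;runnerGroup&lt;/code&gt;, and &lt;code&gt;resources&lt;/code&gt; keys are hypothetical extensions for illustration, not part of the chart shown above:&lt;/p&gt;

```yaml
# Hypothetical extended values.yaml for the meta chart.
# 'owner', 'runnerGroup', and 'resources' are illustrative additions.
owner: your-org            # default GitHub owner for all repos
repos:
  - name: web-frontend
    runnerGroup: frontend  # per-repo runner group override
  - name: integration-tests
    owner: your-other-org  # per-repo owner override
    maxRunners: 2
    resources:             # per-repo resource limit overrides
      limits:
        cpu: "3"
        memory: "6Gi"
```

&lt;p&gt;The template would then resolve each field with &lt;code&gt;default&lt;/code&gt; fallbacks, exactly as it already does for &lt;code&gt;minRunners&lt;/code&gt; and &lt;code&gt;maxRunners&lt;/code&gt;.&lt;/p&gt;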
&lt;p&gt;Each generated release points at the official &lt;code&gt;gha-runner-scale-set&lt;/code&gt; chart with consistent configuration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Docker-in-Docker (dind)&lt;/strong&gt; mode for true container builds&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource limits&lt;/strong&gt;: 2 CPU / 3Gi RAM per runner (fits ~2 concurrent runners on the 4c/24GB node)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scale to zero&lt;/strong&gt;: &lt;code&gt;minRunners: 0&lt;/code&gt; means idle repos spawn no runner pods, so they consume no extra compute beyond the baseline node&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Why individual repo runners?&lt;/strong&gt; ARC supports organization-level runners, which would let you manage a single runner pool for every repo in an organization. The catch: organization-level runners on private repos require a paid GitHub plan. Since most of my repos are private and the whole point of this setup is to avoid recurring costs, paying GitHub for runner access would defeat the purpose. Individual repo runners let me stay on the free plan while still getting self-hosted CI across all repositories. The meta chart is the compromise: a little more YAML, zero subscription fees.&lt;/p&gt;
&lt;h3&gt;Resource Math&lt;/h3&gt;
&lt;p&gt;With 4 Arm cores and 24GB RAM, the sizing works out. The node comfortably handles 2 concurrent standard runners with headroom for system components. (The CPU limits below sum to more than 4 cores, but that is fine: CPU is compressible, so limits can overcommit. Memory is the real constraint, and at roughly 8.5Gi committed of 24GB there is plenty of slack.)&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;CPU&lt;/th&gt;
&lt;th&gt;RAM&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ARC controller + listeners&lt;/td&gt;
&lt;td&gt;~500m&lt;/td&gt;
&lt;td&gt;~256Mi&lt;/td&gt;
&lt;td&gt;Lightweight control plane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard runner (×2)&lt;/td&gt;
&lt;td&gt;2 each&lt;/td&gt;
&lt;td&gt;3Gi each&lt;/td&gt;
&lt;td&gt;Concurrent job limit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System overhead&lt;/td&gt;
&lt;td&gt;~1 core&lt;/td&gt;
&lt;td&gt;~2Gi&lt;/td&gt;
&lt;td&gt;kubelet, Ubuntu, monitoring&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;For bursty workloads, raising &lt;code&gt;maxRunners&lt;/code&gt; (to 5, say) lets additional jobs queue until capacity frees up; the extra runner pods simply sit Pending until a slot opens.&lt;/p&gt;
&lt;h2&gt;GitOps Everything&lt;/h2&gt;
&lt;p&gt;All manifests are managed by Flux CD. The CI namespace reconciles via a &lt;a href=&quot;https://fluxcd.io/flux/components/source/gitrepositories/&quot;&gt;GitRepository&lt;/a&gt; source and &lt;a href=&quot;https://fluxcd.io/flux/components/kustomize/kustomizations/&quot;&gt;Kustomization&lt;/a&gt; resources.&lt;/p&gt;
&lt;p&gt;The Flux configuration is straightforward:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# flux-system/github-actions.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: github-actions
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/your-org/infra-repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: github-actions
  namespace: flux-system
spec:
  interval: 10m
  path: ./path/to/manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: github-actions
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Changes to runner configuration follow the standard GitOps flow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Edit the meta chart values or repo list&lt;/li&gt;
&lt;li&gt;Commit and push&lt;/li&gt;
&lt;li&gt;Flux detects the change within 1 minute&lt;/li&gt;
&lt;li&gt;ARC controller creates/updates/deletes runner scale sets automatically&lt;/li&gt;
&lt;/ol&gt;
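&lt;p&gt;For reference, the path the Kustomization points at is just a plain kustomize directory. This layout and the file names are illustrative, not the exact repo structure:&lt;/p&gt;

```yaml
# ./path/to/manifests/kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml          # arc-runners namespace
  - helmrepository.yaml     # actions-runner-controller Helm repo source
  - meta-helmrelease.yaml   # HelmRelease for the gha-runner-scale-set-meta chart
```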
&lt;p&gt;No &lt;code&gt;kubectl apply&lt;/code&gt;, no drift, no &quot;works on my machine.&quot;&lt;/p&gt;
&lt;h2&gt;What Works Well&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Zero-cost compute&lt;/strong&gt;: The Oracle free tier is genuinely usable for this workload&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scale-to-zero&lt;/strong&gt;: Repos with infrequent activity consume no resources&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Meta chart&lt;/strong&gt;: Adding a new repo is one line of YAML&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flux + ARC&lt;/strong&gt;: GitOps-native runner management with automatic updates&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Renovate on self-hosted runners&lt;/strong&gt;: Dependency updates (including the ARC chart itself) run daily on this same fleet, using itself as the execution environment&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Arm performance&lt;/strong&gt;: Node.js and Rust builds are surprisingly fast on Ampere&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What&apos;s Tricky&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Arm compatibility&lt;/strong&gt;: Not all images have Arm variants. I maintain a few custom runners with pre-installed tools.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Single node&lt;/strong&gt;: No high availability. If the OCI instance goes down, CI stops until it recovers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Disk pressure&lt;/strong&gt;: 150GB fills up fast with Docker layers and build cache. Automated cleanup is essential.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GitHub API limits&lt;/strong&gt;: ARC polls GitHub for jobs. With many repos, you can hit rate limits; the meta chart helps by centralizing configuration but doesn&apos;t reduce API calls.&lt;/li&gt;
&lt;/ul&gt;
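&lt;p&gt;One lever for the disk-pressure problem is the kubelet&apos;s built-in image garbage collection, which deletes unused container images once disk usage crosses a threshold. A hedged sketch of tightening it; where this config file lives depends on how the node was provisioned, so treat the path as an assumption:&lt;/p&gt;

```yaml
# Hedged sketch: KubeletConfiguration fragment for the self-managed node
# (e.g. /etc/kubernetes/kubelet-config.yaml; location depends on your setup).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 70  # start deleting unused images at 70% disk usage
imageGCLowThresholdPercent: 50   # keep deleting until usage drops to 50%
```

&lt;p&gt;Build caches inside dind runner pods still need their own cleanup, since the kubelet only manages images it pulled.&lt;/p&gt;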
&lt;h3&gt;The Migration Reality Check&lt;/h3&gt;
&lt;p&gt;I already had a free tier Oracle instance running when I decided to add it to CloudFleet. It hosted Uptime Kuma, Cloudflare Tunnel, and Tailscale. I used it as a reverse proxy for various services. The plan was simple: reuse the existing machine, install the CloudFleet agent, and start scheduling CI workloads alongside my existing services.&lt;/p&gt;
&lt;p&gt;It did not work. CloudFleet apparently runs its own Tailscale mesh internally for node connectivity and management. My existing Tailscale installation conflicted with theirs. Rather than debug the networking overlap, I decided to separate concerns completely.&lt;/p&gt;
&lt;p&gt;I migrated the existing services off the machine, destroyed the original instance, and provisioned a fresh Oracle VM dedicated solely to CloudFleet workloads. CloudFleet connected cleanly to the new machine and began scheduling pods immediately.&lt;/p&gt;
&lt;p&gt;The lesson: if you already run networking overlays or VPNs on a machine you want to add to CloudFleet, expect conflicts. It is cleaner to start fresh with a dedicated node.&lt;/p&gt;
&lt;p&gt;This also reinforced that this is a &lt;strong&gt;self-managed node&lt;/strong&gt;. CloudFleet handles the Kubernetes control plane and scheduling, but I am still responsible for the VM itself. OS updates, security patches, and disk cleanup are on me. The node does not manage itself.&lt;/p&gt;
&lt;h2&gt;Security on Self-Hosted Runners&lt;/h2&gt;
&lt;p&gt;Self-hosting CI shifts the security model entirely onto you. GitHub-hosted runners are ephemeral, isolated, and discarded after each job. Your infrastructure is none of those things by default.&lt;/p&gt;
&lt;h3&gt;The Privilege Problem&lt;/h3&gt;
&lt;p&gt;ARC lets you replicate the official GitHub-hosted runner environment closely, including the ability to run privileged containers and access host resources. This is powerful for Docker-in-Docker builds, but it is also dangerous. A compromised runner with &lt;code&gt;privileged: true&lt;/code&gt; can escape its container, access the host node, and potentially pivot to other namespaces or connected repositories. Do not mount host paths or secrets into runner pods unless absolutely necessary.&lt;/p&gt;
&lt;h3&gt;Containment and Trust Boundaries&lt;/h3&gt;
&lt;p&gt;Even within the same CloudFleet-managed cluster, CI workloads run in dedicated namespaces (&lt;code&gt;arc-systems&lt;/code&gt;, &lt;code&gt;arc-runners&lt;/code&gt;) with network policies and RBAC restrictions. CI workloads are inherently untrusted: they execute arbitrary code from pull requests, and a malicious or compromised dependency can exfiltrate secrets, modify source code, or attack the build environment. On self-hosted runners, that attack surface includes your node and potentially your cluster.&lt;/p&gt;
&lt;p&gt;Namespace isolation helps contain the blast radius, but it is not a guarantee. I layer several mitigations on top:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Network policies&lt;/strong&gt; blocking egress from runner pods to internal services&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No persistent volumes&lt;/strong&gt; mounted into runner pods&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Regular node rotation&lt;/strong&gt; via VM recreation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Separate GitHub tokens&lt;/strong&gt; with minimal scope for ARC registration&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RBAC restrictions&lt;/strong&gt; and the principle of least privilege&lt;/li&gt;
&lt;/ul&gt;
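&lt;p&gt;As an example of the first mitigation, a &lt;code&gt;NetworkPolicy&lt;/code&gt; like the following allows runner pods to reach GitHub and the public internet while denying internal ranges. The CIDR blocks are illustrative; substitute your cluster&apos;s actual pod, service, and node networks:&lt;/p&gt;

```yaml
# Hedged sketch: restrict egress from runner pods; CIDRs are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: runner-egress
  namespace: arc-runners
spec:
  podSelector: {}            # applies to all pods in arc-runners
  policyTypes:
    - Egress
  egress:
    - to:                    # public internet, minus internal ranges
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8       # illustrative pod/service network
              - 192.168.0.0/16   # illustrative node network
    - to:                    # cluster DNS
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
```

&lt;p&gt;Enforcement requires a CNI that implements NetworkPolicy, so verify the rules actually block traffic on your cluster rather than assuming they do.&lt;/p&gt;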
&lt;h2&gt;Scaling Beyond the Free Tier&lt;/h2&gt;
&lt;p&gt;The Oracle free tier handles my current workload, but it is a single point of failure. When the time comes to scale, CloudFleet supports attaching nodes from multiple cloud providers to the same cluster. You can add Hetzner Cloud or EC2 instances as additional worker nodes without changing your GitOps configuration. CloudFleet handles the lifecycle of these machines, creating and deleting them as needed.&lt;/p&gt;
&lt;p&gt;CloudFleet organizes additional capacity into &lt;strong&gt;fleets&lt;/strong&gt; (similar to scaling groups), which define the maximum compute capacity you want available. Nodes within a fleet can have different machine shapes, from a budget CX23 (2 vCPU, 4GB RAM) to a larger CX33 (4 vCPU, 8GB RAM) or dedicated CPU instances. The fleet acts as a ceiling on total compute, not a uniform node pool. This means you can go from a single free Oracle node to a multi-cloud, multi-node CI fleet without changing your manifests. The cluster simply gains more capacity to schedule pods.&lt;/p&gt;
&lt;p&gt;Note that CloudFleet&apos;s free tier caps total compute at &lt;strong&gt;24 vCPU&lt;/strong&gt;. The Oracle Ampere A1 instance already consumes 4 of those, leaving &lt;strong&gt;20 vCPU&lt;/strong&gt; of headroom within the free tier before you need to upgrade.&lt;/p&gt;
&lt;h3&gt;The Cost Math: GitHub Actions vs. Hetzner&lt;/h3&gt;
&lt;p&gt;Right now, the entire CI fleet runs on a single free Oracle node. The question is: when does it make financial sense to add paid capacity?&lt;/p&gt;
&lt;p&gt;Compare a Hetzner CX33 (4 vCPU, 8GB, ~$7.99/month after the April 2026 price increase) to a GitHub-hosted Linux 2-core runner at $0.36/hour. The break-even is roughly &lt;strong&gt;22 hours of CI per month&lt;/strong&gt;. If your repos burn through the included 2,000 GitHub minutes (33 hours) and you need more runtime, a Hetzner node becomes cheaper.&lt;/p&gt;
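&lt;p&gt;The break-even arithmetic is simple enough to script, using the rates quoted above (both of which will drift over time):&lt;/p&gt;

```shell
# Break-even between a Hetzner CX33 (~$7.99/month flat) and a GitHub-hosted
# 2-core Linux runner at $0.36/hour, using the rates from the post.
hetzner_monthly=7.99
github_hourly=0.36
awk -v m="$hetzner_monthly" -v h="$github_hourly" \
  'BEGIN { printf "Break-even: %.1f CI hours per month\n", m / h }'
# prints: Break-even: 22.2 CI hours per month
```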
&lt;p&gt;And unlike GitHub&apos;s per-minute billing, the Hetzner node is yours 24/7. You can run multiple concurrent jobs, keep build caches warm, and schedule Renovate or other automation without worrying about metered costs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;One catch&lt;/strong&gt;: Hetzner bills in full-hour increments. If you spin up a machine for 10 minutes, you pay for the full hour. For cost effectiveness, you should recycle machines rather than spin up fresh ones for short jobs. This makes Hetzner ideal for a persistent runner pool (like ARC&apos;s scale sets) rather than truly ephemeral per-job instances.&lt;/p&gt;
&lt;h3&gt;Workload-Aware Scheduling&lt;/h3&gt;
&lt;p&gt;CloudFleet doesn&apos;t just add generic capacity. You can annotate workloads to request specific machine shapes. A lightweight microservice might get a CX23, while a heavy integration test suite gets a CX33 or dedicated CPU instance. If you don&apos;t specify an annotation, CloudFleet falls back to your resource requests and limits to pick an appropriate shape. If neither is provided, it chooses the cheapest option. This means you don&apos;t have to manually manage node pools or taints. Just declare what you need, and CloudFleet provisions the right hardware.&lt;/p&gt;
&lt;h3&gt;The Hybrid Workload Split&lt;/h3&gt;
&lt;p&gt;The real architectural win is splitting persistent baseline services from bursty CI workloads. The Oracle node runs 24/7 and hosts the always-on infrastructure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ARC controller and listeners&lt;/strong&gt; (must poll GitHub continuously for job events)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test databases and caches&lt;/strong&gt; (PostgreSQL, Redis, or MinIO for integration tests)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Build artifact caches&lt;/strong&gt; (sccache or registry proxies to speed up repeat builds)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flux CD and monitoring&lt;/strong&gt; (the GitOps agents and lightweight Prometheus stack)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are lightweight, persistent services that cost nothing extra on the Oracle node but would burn through metered GitHub minutes if they ran on hosted runners.&lt;/p&gt;
&lt;p&gt;When a heavy CI job fires (e.g., a full Rust release build, Docker image compilation, or parallel test matrix), CloudFleet can spin up Hetzner nodes to handle the burst. The job schedules on the fresh capacity, runs, and the node can be recycled. You get a free baseline with elastic burst capacity, rather than paying for compute that sits idle 90% of the time.&lt;/p&gt;
&lt;h3&gt;The Single Node Risk&lt;/h3&gt;
&lt;p&gt;The Oracle free tier is genuinely free, but it&apos;s a single node. If it goes down, your persistent services and baseline CI capacity disappear. Adding a paid Hetzner node pool to the same CloudFleet cluster provides redundancy and scaling without giving up the free Oracle node. The cluster can tolerate losing either provider and still schedule workloads. This is the difference between a toy homelab and a production-ready setup.&lt;/p&gt;
&lt;h2&gt;The Config&lt;/h2&gt;
&lt;p&gt;If you want to replicate this setup, the key Flux CD and ARC resources are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Flux HelmRelease docs&lt;/strong&gt;: https://fluxcd.io/flux/components/helm/helmreleases/&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ARC Helm chart&lt;/strong&gt;: https://github.com/actions/actions-runner-controller/tree/master/charts&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ARC documentation&lt;/strong&gt;: https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Meta chart pattern&lt;/strong&gt;: A wrapper Helm chart that loops over a repo list and generates individual &lt;code&gt;HelmRelease&lt;/code&gt; resources for each repository&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Self-hosting CI doesn&apos;t have to mean babysitting Jenkins. With &lt;a href=&quot;https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller&quot;&gt;GitHub ARC&lt;/a&gt;, &lt;a href=&quot;https://fluxcd.io&quot;&gt;Flux CD&lt;/a&gt;, and a small custom chart, I get elastic, GitOps-managed runners on free hardware.&lt;/p&gt;
&lt;p&gt;If you&apos;re running Oracle&apos;s free tier and want CI that scales to zero, this pattern works.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Hardware: &lt;a href=&quot;https://www.oracle.com/cloud/free/&quot;&gt;Oracle Cloud&lt;/a&gt; Ampere A1 (4c/24GB/150GB) | Control Plane: &lt;a href=&quot;https://cloudfleet.ai&quot;&gt;CloudFleet&lt;/a&gt; (free tier) | OS: Ubuntu LTS | GitOps: &lt;a href=&quot;https://fluxcd.io&quot;&gt;Flux CD&lt;/a&gt; | CI: GitHub Actions (&lt;a href=&quot;https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller&quot;&gt;ARC&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>Robust Scripts in Azure Pipelines</title><link>https://oliverconzen.de/blog/robust-scripts-in-azure-pipelines/</link><guid isPermaLink="true">https://oliverconzen.de/blog/robust-scripts-in-azure-pipelines/</guid><description>Apply bash strict mode to Azure Pipelines scripts. Stop silent failures and build more robust, reliable CI/CD automation.</description><pubDate>Fri, 19 Jul 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When developing software, CI/CD pipelines are often used to unit test the code, perform static analysis, deploy artifacts to staging environments, or roll out features to users. Scripts are frequently used to glue programs together.&lt;/p&gt;
&lt;h2&gt;How Azure Pipelines Scripts Work&lt;/h2&gt;
&lt;p&gt;Azure Pipelines, part of the Azure DevOps suite of tools, is a cloud-based service that automates the building, testing, and deployment of code projects. It supports continuous integration (CI) and continuous delivery (CD), allowing developers to automate their workflows efficiently. Azure Pipelines can run scripts using different interpreters depending on the platform. For this article, we focus on bash as used for the Linux (Ubuntu) and macOS runners.&lt;/p&gt;
&lt;p&gt;A typical Azure Pipeline defined in YAML might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pool:
  vmImage: &apos;ubuntu-latest&apos;

steps:
- script: |
    echo &quot;Hello, World!&quot;
    cp ./somefile somewhere
    cat somewhere/somefile | grep things
  displayName: &apos;Run custom script&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, the script step runs a simple illustrative bash script on an Ubuntu agent. Azure Pipelines supports various properties for scripts, such as setting the working directory, environment variables, and handling errors. This is not dissimilar to other CI/CD environments and providers, such as GitLab CI/CD and GitHub Actions.&lt;/p&gt;
&lt;p&gt;Behind the scenes, the script step writes the script contents to a bash file and executes it. The commands run in succession, and exit codes are not checked by default: a failing command does not stop the script.&lt;/p&gt;
&lt;p&gt;Even when these script blocks are only used as glue, debugging a failed build or pipeline can be quite challenging. In my experience, errors were not caught early but silently ignored, only to surface later in the build. This happens, for example, when zipping files and extracting or deploying them later: what if a file is missing, or an environment variable is not set?&lt;/p&gt;
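&lt;p&gt;This failure mode is easy to reproduce locally; the file name below is made up:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Without strict mode, bash runs each line in succession and ignores failures.
cp ./does-not-exist.txt /tmp/dest 2>/dev/null
echo "cp exited with $? but the script keeps going"
# The last command (echo) succeeds, so the step is reported green.
```

&lt;p&gt;Run as a pipeline step, this script exits 0 and the step shows green even though the copy failed.&lt;/p&gt;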
&lt;h2&gt;Unofficial Bash Strict Mode&lt;/h2&gt;
&lt;p&gt;Using the unofficial bash strict mode in your scripts within Azure Pipelines, or really any script, can significantly enhance the robustness and reliability of your automation processes.&lt;/p&gt;
&lt;p&gt;But what is the unofficial bash strict mode?&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;set -euxo pipefail
IFS=$&apos;\n\t&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Unofficial bash strict mode in two lines&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here&apos;s a breakdown of what each of these settings does:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;set -e&lt;/code&gt;&lt;/strong&gt;: This option causes the script to exit immediately if any command exits with a non-zero status. This prevents the script from continuing to execute commands after an error has occurred, which could lead to unexpected behaviour or further errors.
Keep in mind that arithmetic commands like &lt;code&gt;((variable++))&lt;/code&gt; return a non-zero status whenever the expression evaluates to zero, which would stop execution of your script.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;set -u&lt;/code&gt;&lt;/strong&gt;: This option treats unset variables as an error and exits immediately. This helps catch typos and other mistakes where a variable is referenced before being set.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;set -x&lt;/code&gt;&lt;/strong&gt;: Print each command and its arguments to standard error as they are executed, which helps in debugging. Keep in mind that combined with the Azure Pipelines option to fail on stderr output (&lt;code&gt;failOnStderr&lt;/code&gt;), this can lead to false positives in your pipeline&apos;s error detection. It can also flood your logs with more output than you expected, especially inside loops.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;set -o pipefail&lt;/code&gt;&lt;/strong&gt;: This option ensures that the return value of a pipeline is the status of the last command to exit with a non-zero status, or zero if no command exited with a non-zero status. This prevents errors in a pipeline from being masked.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;IFS=$&apos;\n\t&apos;&lt;/code&gt;&lt;/strong&gt;: This sets the Internal Field Separator (IFS) to newline and tab only, which helps avoid issues with word splitting on spaces. The default IFS includes space, tab, and newline, which can lead to unexpected splitting of strings. Keep in mind that you don&apos;t need this flag if you never split strings. If you do, have a look at your data and figure out the best field separators for you.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
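&lt;p&gt;The effect of &lt;code&gt;pipefail&lt;/code&gt; in particular is easy to demonstrate locally:&lt;/p&gt;

```shell
# Without pipefail, a pipeline's status is the last command's exit code,
# so the failure of 'false' is masked by 'cat'. pipefail surfaces it.
bash -c 'false | cat; echo "default:  $?"'                   # prints: default:  0
bash -c 'set -o pipefail; false | cat; echo "pipefail: $?"'  # prints: pipefail: 1
```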
&lt;p&gt;Remember that you can combine these switches however you like and leave out any that cause more trouble than they prevent. Don&apos;t blindly copy the two lines into your pipelines and scripts; understand their implications first.
For general usage, the two lines above are a good fit and prevent most errors.&lt;/p&gt;
&lt;h3&gt;Combining Unofficial Bash Strict Mode with Azure Pipelines&lt;/h3&gt;
&lt;p&gt;To integrate the unofficial bash strict mode into your Azure Pipelines scripts, you can include the necessary settings at the beginning of your bash scripts. Here&apos;s an example of how to do this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pool:
  vmImage: &apos;ubuntu-latest&apos;

steps:
- script: |
    set -euxo pipefail
    IFS=$&apos;\n\t&apos;

    # Your script commands here
    echo &quot;Running in strict mode&quot;
  displayName: &apos;Run script with unofficial bash strict mode&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Reference the Script in Your Azure Pipeline YAML&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;That&apos;s all Folks!&lt;/h2&gt;
&lt;p&gt;Integrating the unofficial bash strict mode into your Azure Pipelines scripts can lead to more robust, reliable, and maintainable automation. By ensuring that errors are caught early and providing detailed debugging information, you can streamline your CI/CD processes and reduce the likelihood of subtle bugs causing issues in your deployments.&lt;/p&gt;
&lt;p&gt;As always, it is worth understanding what each of the toggles and knobs does and choosing what you want to use and why. The unofficial bash strict mode isn&apos;t a silver bullet, but a nice little toolset you can pick from.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/thats-all-folks.gif&quot; alt=&quot;That&apos;s all Folks!&quot; /&gt;&lt;/p&gt;
</content:encoded></item><item><title>Leveraging Tailscale Docker Mod - Simplified Networking and Secure Application Hosting</title><link>https://oliverconzen.de/blog/tailscale-docker-mod/</link><guid isPermaLink="true">https://oliverconzen.de/blog/tailscale-docker-mod/</guid><description>Connect linuxserver.io containers to your Tailscale tailnet. Simplify secure networking and self-hosted application access with WireGuard.</description><pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Combining the ease of Docker with the security of Tailscale in a universal docker mod offers exciting possibilities. This blog post explores how the Tailscale Docker mod enables effortless access to remote applications while maintaining robust security measures.&lt;/p&gt;
&lt;p&gt;Docker has revolutionized the way we deploy applications, making containerization an essential part of modern software development, and Tailscale has emerged as a powerful tool for creating secure networks and establishing encrypted connections between devices.&lt;/p&gt;
&lt;p&gt;I&apos;ve been considering adding more Tailscale nodes to my network for a while now. I had been running a few self-hosted applications behind Cloudflare tunnels, but I wasn&apos;t satisfied with the outcome. Even though traffic was securely being tunnelled through Cloudflare, my apps were still accessible from the open internet. It would be ideal to only expose the parts of the network that should be public, like a website or service that you host for others. What are your thoughts on using Tailscale to connect to your apps instead?&lt;/p&gt;
&lt;h2&gt;What is the Tailscale Docker Mod?&lt;/h2&gt;
&lt;p&gt;The Tailscale Docker mod is a universal modification for images provided by &lt;strong&gt;linuxserver.io&lt;/strong&gt;, a community of image maintainers and a popular repository for Docker images.
By incorporating this mod into a container, you can harness the capabilities of Tailscale within your dockerized applications. The mod automatically installs the Tailscale daemon into the container and manages its lifecycle; the container then appears as a node on your tailnet. This allows, for example, multiple containers running on different servers or virtual machines to communicate seamlessly over a secure WireGuard tunnel without manual network management. Another use case is restricting which Tailscale users may connect to the node using ACLs. Since the application is then reachable only by a limited set of people over this specific connection, you gain an additional factor of security before a user can even log into the application.&lt;/p&gt;
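&lt;p&gt;As an illustration, a tailnet policy file restricting access to a tagged container might look like this. The tag name and user are hypothetical, and tags must be declared under &lt;code&gt;tagOwners&lt;/code&gt; before they can be applied to a node:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  "tagOwners": {
    // admins may apply this tag to nodes
    "tag:gitea": ["autogroup:admin"]
  },
  "acls": [
    // only alice may reach the tagged container, and only on port 443
    {"action": "accept", "src": ["alice@example.com"], "dst": ["tag:gitea:443"]}
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Tailscale policy files are HuJSON, so the comments above are valid as written.&lt;/p&gt;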
&lt;h2&gt;Simplified Network Communication&lt;/h2&gt;
&lt;p&gt;Let&apos;s illustrate the advantages of the Tailscale Docker mod with an example. Imagine you have two servers – one hosting a database and the other running an application. By implementing the Tailscale Docker mod in both containers, you can establish a secure and encrypted connection between these servers using WireGuard. This means that the database server can securely transfer data to the application server over the tailnet, with minimal configuration required.&lt;/p&gt;
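&lt;p&gt;A minimal sketch of the database side, assuming a linuxserver.io image (the image tag, authkey, and hostname are placeholders); the application server can then reach the database via its MagicDNS name, here simply &lt;code&gt;db&lt;/code&gt;, instead of a public IP:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;services:
  db:
    image: lscr.io/linuxserver/mariadb:latest
    environment:
      # the mod joins this container to the tailnet as the node "db"
      - DOCKER_MODS=ghcr.io/tailscale-dev/docker-mod:main
      - TAILSCALE_AUTHKEY=tskey-auth-replace-me
      - TAILSCALE_HOSTNAME=db
&lt;/code&gt;&lt;/pre&gt;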
&lt;h2&gt;Anywhere Access via your Tailscale tailnet&lt;/h2&gt;
&lt;p&gt;The Tailscale Docker mod introduces the concept of the &lt;em&gt;tailnet&lt;/em&gt;, which refers to the network spanned by Tailscale hosts. This network ensures that all containers running the mod, as well as users&apos; machines and other servers running the tailscale daemon, can communicate with each other, irrespective of their physical locations. Therefore, regardless of whether your servers are in the same data center or spread across the globe, they can seamlessly exchange information through the secure Tailscale tunnel.&lt;/p&gt;
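&lt;p&gt;You can verify this from any machine on the tailnet with the Tailscale CLI (the node name &lt;code&gt;myapp&lt;/code&gt; is a placeholder):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# list all nodes currently visible on the tailnet
tailscale status
# check WireGuard connectivity to the container node
tailscale ping myapp
&lt;/code&gt;&lt;/pre&gt;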
&lt;h2&gt;TLS Certificates and Tailscale Serve&lt;/h2&gt;
&lt;p&gt;One of the noteworthy benefits of combining Tailscale with Docker is Tailscale serve, a reverse proxy built into the daemon running on every machine. This feature lets you serve applications with a Let&apos;s Encrypt TLS certificate effortlessly. With tailscaled handling certificate management, you can focus on other things instead of generating certificates manually or setting up an ACME client yourself. Since many dockerized applications expect to sit behind a reverse proxy anyway and no longer handle TLS themselves, this worked seamlessly in my tests: Gitea, Nextcloud, et al. run great behind Tailscale.&lt;/p&gt;
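&lt;p&gt;Outside the mod, the same can be done by hand with the CLI. The exact syntax has changed across Tailscale releases; on recent clients it looks roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# proxy HTTPS on the tailnet to local port 80, in the background
tailscale serve --bg 80
# show the active serve configuration
tailscale serve status
&lt;/code&gt;&lt;/pre&gt;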
&lt;h2&gt;Secure Self-Hosting Made Simple&lt;/h2&gt;
&lt;p&gt;The Tailscale Docker mod offers a fantastic solution for self-hosting applications, granting access only to specific users, teams, or family members. By leveraging Tailscale&apos;s end-to-end encryption, you can ensure that your applications remain accessible only to the people you choose. Users accessing the application through a browser won&apos;t even be aware of the underlying Tailscale encryption. Keep in mind, though, that the encryption Tailscale provides, namely the WireGuard VPN underpinning the network, is just one layer. I still recommend serving the application over TLS in addition, to guard against a wide variety of attacks.&lt;/p&gt;
&lt;h2&gt;Taking it a Step Further with Tailscale Funnel&lt;/h2&gt;
&lt;p&gt;I know, I know, sometimes you want to share an app with a wider audience. For those seeking to make their applications accessible to the broader internet, Tailscale Funnel comes into play. Funnel enables you to make your applications reachable on the internet through a subdomain under a vanity URL. Though the solution is convenient and secure, it has one limitation compared to using &lt;code&gt;cloudflared&lt;/code&gt; and the Cloudflare dashboard: with Funnel, you can only pick a subdomain under a few options, e.g., &lt;code&gt;myapp.vanity-url.ts.net&lt;/code&gt; or &lt;code&gt;otherapp.awesome-sauce.ts.net&lt;/code&gt;, whereas Cloudflare lets you host applications under a domain name you fully control. Another upside is, of course, that you do not need to buy a domain; the subdomain comes for free with MagicDNS and Tailscale.
Tailscale also does not offer SSO integration for the applications, so you will still need to set that up another way if you need it.&lt;/p&gt;
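&lt;p&gt;Enabling Funnel mirrors the serve workflow; again, the flags below reflect recent client versions and may differ on older releases:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# expose local port 80 to the public internet via Tailscale&apos;s relays
tailscale funnel --bg 80
# confirm what is publicly reachable
tailscale funnel status
&lt;/code&gt;&lt;/pre&gt;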
&lt;h2&gt;Useful for development&lt;/h2&gt;
&lt;p&gt;With Tailscale you can mimic a powerful feature that you get from, for example, Cloudflare Pages or other integrated hosting platforms: ephemeral testing sites. With Tailscale, you can host a container publicly or privately for a short time to try out features or check the result of a pull request. The following two files show an example static single-page application and how it is started with Compose.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# syntax=docker/dockerfile:1
# Build stage: install dependencies and build the static site
FROM node:18 AS build
WORKDIR /app
COPY . .
RUN &amp;lt;&amp;lt;EOF
    npm ci
    npm run build
EOF

# Host stage: serve the built assets with linuxserver&apos;s nginx image
FROM lscr.io/linuxserver/nginx:latest AS host
COPY --from=build /app/dist /config/www
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;version: &apos;3.7&apos;
services:
  app:
    build: .
    environment:
      # auth key for joining the tailnet
      - TAILSCALE_AUTHKEY=tskey-auth-you-thought-i-would-give-you-a-key-l0l
      - TAILSCALE_USE_SSH=0
      # persist tailscale state across container restarts
      - TAILSCALE_STATE_DIR=/var/lib/tailscale
      # expose container port 80 via tailscale serve over HTTPS
      - TAILSCALE_SERVE_PORT=80
      - TAILSCALE_SERVE_MODE=https
      # node name on the tailnet, injected from the environment
      - TAILSCALE_HOSTNAME=${RANDOM_HOST_NAME}
      # linuxserver.io mod that installs and runs tailscaled in the container
      - DOCKER_MODS=ghcr.io/tailscale-dev/docker-mod:main
    volumes:
      - tailscale:/var/lib/tailscale
volumes:
  tailscale:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since our hosting image is based on linuxserver&apos;s nginx image, we can simply add the Tailscale mod and provide the environment variables to plumb it into our tailnet. With an environment variable and a random name generator, you can then replicate the ephemeral-hosting feature described above.&lt;/p&gt;
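&lt;p&gt;A small shell sketch of that idea, e.g. in a CI job (the &lt;code&gt;preview-&lt;/code&gt; prefix and the tailnet domain are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# derive a random, per-run node name for the ephemeral site
export RANDOM_HOST_NAME="preview-$(openssl rand -hex 3)"
# build and start the container; it joins the tailnet under that name
docker compose up -d --build
echo "site available at https://${RANDOM_HOST_NAME}.your-tailnet.ts.net"
&lt;/code&gt;&lt;/pre&gt;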
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The Tailscale Docker mod provides a seamless way to integrate the benefits of Docker and Tailscale, offering simplified networking and secure application hosting for your containers. By effortlessly creating secure WireGuard tunnels and using Tailscale Serve for TLS certificates, you can enhance the privacy and security of your applications while keeping access easy for authorized users. Tailscale Funnel opens up the possibility of exposing applications to the internet, albeit with certain domain limitations. Nonetheless, the Tailscale Docker mod remains an excellent choice for developers and system administrators looking to streamline network management and bolster application security.&lt;/p&gt;
</content:encoded></item></channel></rss>