Running a Full Homelab on a Mac Mini M4 — No Proxmox, No Rack, Just Docker
Most homelab guides assume you’re running Proxmox on a beefy x86 tower or a rack-mounted server. But what if your entire homelab fits on your desk, sips power, and runs silently? That’s exactly what you can build with an Apple Silicon Mac Mini.
This post walks through my complete setup: a Mac Mini M4 running Docker containers via Colima, with external storage over Thunderbolt and USB, NAS mounts for media, a local reverse proxy with automatic TLS, a dashboard, remote access, and offsite backups to Backblaze. Everything is reproducible and config-driven.
Why a Mac Mini?
A few reasons made the Mac Mini M4 the right fit:
- Power efficiency — Apple Silicon idles at a fraction of the wattage of a typical homelab server. Running 24/7 costs almost nothing on the electricity bill.
- Silent operation — No fans spinning under normal container workloads. It sits in a living room without anyone noticing.
- ARM-native Docker — Most popular container images now ship linux/arm64 variants. For the few that don’t, Rosetta handles x86 emulation transparently.
- Thunderbolt 4 & USB — High-speed external storage is plug-and-play. No need for a NAS chassis or SATA backplane.
- macOS stability — Say what you will about macOS for servers, but with launchd auto-start and Colima, it’s been rock-solid.
The Storage Architecture
One of the more interesting aspects of this setup is how storage is handled entirely through external volumes.
Thunderbolt & USB Drives
The Mac Mini has multiple Thunderbolt 4 ports and USB ports. I use a combination of:
- A primary external drive mounted at /Volumes/ExternalHome via Thunderbolt — this holds the entire homelab directory, including all Docker Compose configs, Colima VM data, and persistent container volumes. Thunderbolt gives near-internal-SSD speeds, so there’s no performance penalty for running everything off an external disk.
- A NAS-mounted volume at /Volumes/Immich — this is an SMB/NFS share from a TrueNAS box on the local network, used specifically for photo and video storage (Immich upload data). It’s mounted via macOS’s built-in “MountNASVolumes” automation so it reconnects on login.
Both of these paths are passed into the Colima VM as virtiofs mounts, which means containers see them as local filesystem paths with near-native I/O performance.
Backblaze B2 Offsite Backup
Having local storage is great, but a real homelab needs offsite backup. The external drives are backed up to Backblaze B2 cloud storage. Backblaze offers an incredibly cost-effective solution for bulk storage:
- The Thunderbolt drive with all homelab data and configs gets regular incremental backups
- Photo/video libraries are synced to B2 buckets
- Backblaze’s native Mac client or tools like rclone can handle the sync, running on a schedule
This gives you the classic 3-2-1 backup rule: three copies of data, on two different media types, with one offsite. Local drives for speed, NAS for redundancy, Backblaze for disaster recovery.
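As a sketch, the rclone leg of this setup might look like the following — the remote name (b2remote) and bucket are placeholders you’d create with rclone config, not part of my actual setup:

```shell
# Sync the homelab directory to a B2 bucket. The colima/ directory
# (VM disk image) is excluded: it is large, churns constantly, and is
# rebuildable from config.
rclone sync /Volumes/ExternalHome/Homelab b2remote:homelab-backup \
  --exclude "colima/**" \
  --fast-list --transfers 8

# Crontab entry to run the same sync nightly at 03:00:
# 0 3 * * * /opt/homebrew/bin/rclone sync /Volumes/ExternalHome/Homelab b2remote:homelab-backup --exclude "colima/**" --fast-list
```

rclone sync makes the destination match the source, so deletions propagate — if you want point-in-time recovery rather than a mirror, a tool like restic is the better fit.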
The Container Runtime: Colima
Since Docker Desktop on macOS is no longer free for larger teams and can be resource-heavy, I use Colima — a lightweight container runtime for macOS that wraps Lima VMs.
Colima Configuration
The VM is configured with reasonable resources for a homelab:
# colima.yaml
cpu: 6
memory: 12 # GiB
disk: 100 # GiB
arch: aarch64
runtime: docker
vmType: vz # Apple Virtualization framework
mountType: virtiofs # Near-native filesystem performance
rosetta: true # x86 emulation for amd64 images
binfmt: true # Foreign architecture support
Key decisions here:
- vmType: vz — Uses Apple’s native Virtualization framework instead of QEMU. This is significantly faster and more resource-efficient on Apple Silicon.
- mountType: virtiofs — The fastest mount option available with vz. Docker volumes backed by external drives perform nearly as well as native disk.
- rosetta: true — Enables transparent x86_64 emulation via Apple’s Rosetta. Any container image that only ships linux/amd64 will just work, with a modest performance overhead.
- 6 CPU / 12 GB RAM — Leaves headroom for macOS itself while giving containers plenty of resources.
Volume Mounts
The Colima VM mounts both external volumes into the Linux VM:
mounts:
- location: /Volumes/ExternalHome
writable: true
- location: /Volumes/Immich
writable: true
This is what makes the whole “external drive as homelab storage” approach work. Docker containers bind-mount paths like ./volumes/immich_postgres_data:/var/lib/postgresql, and because the entire homelab directory lives on the Thunderbolt drive, all persistent data is on fast external storage — not the Mac’s internal SSD.
Where Colima Data Lives
An important detail: the COLIMA_HOME environment variable points to the homelab directory on the external drive:
export COLIMA_HOME="/Volumes/ExternalHome/Homelab/colima"
export DOCKER_HOST="unix://$COLIMA_HOME/default/docker.sock"
This means the VM disk image, Docker socket, and all Colima state live on the external drive. If you ever need to move your homelab to a different Mac, you plug in the drive and you’re done.
Auto-Start on Boot with launchd
A homelab should survive reboots without manual intervention. On macOS, the way to do this is with a launchd agent.
A plist file at ~/Library/LaunchAgents/com.homelab.colima.plist triggers a startup script on login. The script handles the tricky part — waiting for the external drive to mount before starting Colima:
#!/bin/bash
LOG="/tmp/colima-autostart.log"
HOMELAB_DIR="/Volumes/ExternalHome/Homelab"
export COLIMA_HOME="$HOMELAB_DIR/colima"
export DOCKER_HOST="unix://$COLIMA_HOME/default/docker.sock"
export PATH="/opt/homebrew/bin:$PATH"
# Wait up to 120 seconds for external drive
TRIES=0
while [ ! -d "/Volumes/ExternalHome" ] && [ $TRIES -lt 24 ]; do
sleep 5
TRIES=$((TRIES + 1))
done
if [ ! -d "/Volumes/ExternalHome" ]; then
echo "$(date): ERROR - External drive not mounted after 120s" >> "$LOG"
exit 1
fi
colima start >> "$LOG" 2>&1
cd "$HOMELAB_DIR"
docker compose up -d >> "$LOG" 2>&1
The 120-second polling loop is critical — Thunderbolt drives on macOS can take a variable amount of time to appear at their mount point after login, especially if FileVault is enabled or the drive needs to spin up.
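For completeness, a minimal LaunchAgent plist that triggers the script on login might look like this — the label and script path are assumptions matching the layout described in this post:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.homelab.colima</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/Volumes/ExternalHome/Homelab/colima-autostart.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/colima-autostart.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/colima-autostart.log</string>
</dict>
</plist>
```

Load it once with launchctl load ~/Library/LaunchAgents/com.homelab.colima.plist; it then runs automatically on every subsequent login.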
Services: What’s Running
Everything is defined in a single docker-compose.yml file. All services share one Docker bridge network called homelab.
Caddy — Reverse Proxy with Automatic Internal TLS
caddy:
image: caddy:2-alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
- ./volumes/caddy_data:/data
- ./volumes/caddy_config:/config
Caddy serves as the reverse proxy for all services. The killer feature for a homelab is tls internal — Caddy runs its own Certificate Authority and automatically generates trusted TLS certificates for local domains. No Let’s Encrypt, no self-signed cert warnings, no port numbers to remember.
The Caddyfile is dead simple:
homepage.home.us {
tls internal
reverse_proxy homepage:3000
}
photos.home.us {
tls internal
reverse_proxy immich-server:2283
}
portainer.home.us {
tls internal
reverse_proxy portainer:9000
}
To make this work, you need two things:
- Point *.home.us to the Mac Mini’s IP in your local DNS (I use AdGuard Home)
- Install Caddy’s root CA certificate on your client devices (it’s generated at caddy/caddy-root-ca.crt)
After that, every service gets a clean https://service.home.us URL with a green padlock.
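On a Mac client, one way to trust the CA system-wide is via the security CLI — the certificate path below assumes the directory layout from this post, and iOS/Android devices instead need the cert imported through their own settings:

```shell
# Add Caddy's local root CA to the System keychain as a trusted root.
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain \
  /Volumes/ExternalHome/Homelab/caddy/caddy-root-ca.crt
```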
Immich — Photo & Video Management
Immich is the centerpiece of this homelab — a self-hosted Google Photos alternative that supports AI-powered search, facial recognition, and automatic organization.
The Immich stack consists of four containers:
# Main server — handles the web UI and API
immich-server:
image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
ports:
- "2283:2283"
volumes:
- ${UPLOAD_LOCATION}:/data # Points to NAS mount
# Machine learning sidecar — CLIP embeddings, facial recognition
immich-machine-learning:
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
volumes:
- ./volumes/immich_model_cache:/cache
# Redis-compatible cache
immich-redis:
image: docker.io/valkey/valkey:9-alpine
# PostgreSQL with pgvector for AI search
immich-database:
image: ghcr.io/immich-app/postgres:18-vectorchord0.5.3-pgvector0.8.1
volumes:
- ./volumes/immich_postgres_data:/var/lib/postgresql
A few things to note:
- Photo storage lives on the NAS (/Volumes/Immich), not the local drive. This keeps the large media library on bulk storage while the database and ML cache stay on fast Thunderbolt storage.
- PostgreSQL uses pgvector — this enables the AI-powered semantic search feature where you can search your photos by description (e.g., “beach sunset”).
- Valkey is used instead of Redis — it’s a fully compatible, community-maintained fork.
- The ML container downloads and caches CLIP models on first run. Expect ~2-3 GB of model data.
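The .env file these compose entries reference might look roughly like this — the variable names follow Immich’s standard compose setup, and every value here is a placeholder:

```shell
# .env — referenced by docker-compose.yml (all values are placeholders)
IMMICH_VERSION=release
# NAS share for photo/video uploads
UPLOAD_LOCATION=/Volumes/Immich
# Credentials consumed by the immich-database container
DB_PASSWORD=change-me
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
```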
Homepage — Dashboard
Homepage provides a clean dashboard that aggregates all services in one place:
homepage:
image: ghcr.io/gethomepage/homepage:latest
volumes:
- ./homepage/config:/app/config
- /var/run/docker.sock:/var/run/docker.sock:ro
It reads from the Docker socket to show container status and integrates with service APIs (like Immich and Portainer) to display live widgets with stats. The dashboard is organized into sections — Media services in one row, Infrastructure in another — with a dark theme and resource monitoring (CPU, RAM, disk).
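As an illustration, a services.yaml entry wiring up the Immich widget might look like this — the group name, icon, and API key are placeholders; check Homepage’s widget docs for the exact fields each integration expects:

```yaml
- Media:
    - Immich:
        icon: immich.png
        href: https://photos.home.us
        description: Photo & video library
        widget:
          type: immich
          url: http://immich-server:2283
          key: your-immich-api-key
```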
Portainer — Docker Management GUI
portainer:
image: portainer/portainer-ce:lts
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./volumes/portainer_data:/data
Portainer gives you a web UI for managing containers, viewing logs, and restarting services without touching the command line. The LTS version is stable and gets security updates. It’s not strictly necessary if you’re comfortable with docker compose commands, but it’s handy for quick checks from a phone or tablet.
Twingate — Remote Access VPN
twingate:
image: twingate/connector:1
sysctls:
- net.ipv4.ping_group_range=0 2147483647
Instead of exposing services to the internet or running a traditional VPN like WireGuard, I use Twingate. It’s a zero-trust network access solution that:
- Requires no open inbound ports
- Works behind NAT/CGNAT
- Provides per-service access controls
- Has native clients for macOS, iOS, Android, Windows, and Linux
The connector container establishes an outbound connection to Twingate’s relay network. From there, authenticated devices on the Twingate network can access homelab services as if they were on the local network. This means I can access photos.home.us from my phone over cellular without any port forwarding.
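The compose snippet above omits the connector’s credentials. Those come from the Twingate admin console when you provision a connector, and — per the “move secrets out of compose” point later in this post — belong in .env rather than inline:

```yaml
# Sketch: connector credentials passed via environment variables.
# The values live in .env; the variable names are Twingate's standard
# connector configuration.
twingate:
  image: twingate/connector:1
  environment:
    - TWINGATE_NETWORK=${TWINGATE_NETWORK}
    - TWINGATE_ACCESS_TOKEN=${TWINGATE_ACCESS_TOKEN}
    - TWINGATE_REFRESH_TOKEN=${TWINGATE_REFRESH_TOKEN}
```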
The Network
DNS with AdGuard Home
Two AdGuard Home instances run on separate devices on the network, providing:
- Local DNS resolution — *.home.us resolves to the Mac Mini’s local IP
- Ad blocking — network-wide ad and tracker blocking for all devices
- DNS-over-HTTPS/TLS — encrypted DNS queries
Having two DNS servers ensures that DNS stays available even if one goes down for maintenance.
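The wildcard itself is a DNS rewrite, configured in AdGuard Home under Filters → DNS rewrites. In the persisted AdGuardHome.yaml it ends up as something like the following — the exact section can vary between AdGuard Home versions, and the IP is a placeholder for the Mac Mini’s LAN address:

```yaml
filtering:
  rewrites:
    - domain: "*.home.us"
      answer: 192.168.1.10
```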
NAS Storage
The broader network includes multiple TrueNAS boxes serving different roles:
- Backup NAS — Primary network storage for backups and bulk data
- Fun NAS — Runs additional services like a browser-based Firefox instance and QBittorrent
- Trial NAS — Used for testing TrueNAS configurations before deploying to production
Directory Structure
Here’s how the homelab directory is organized on the external drive:
/Volumes/ExternalHome/Homelab/
├── docker-compose.yml # All service definitions
├── .env # Immich credentials and config
├── start.sh # Manual startup script
├── colima-autostart.sh # launchd auto-start script
├── caddy/
│ ├── Caddyfile # Reverse proxy routes
│ └── caddy-root-ca.crt # Local CA cert (install on clients)
├── homepage/
│ └── config/ # Dashboard configuration
│ ├── services.yaml # Service definitions and widgets
│ ├── settings.yaml # Theme and layout
│ └── widgets.yaml # System resource widgets
├── volumes/ # Persistent container data
│ ├── caddy_data/
│ ├── caddy_config/
│ ├── immich_postgres_data/
│ ├── immich_model_cache/
│ └── portainer_data/
├── colima/ # Colima VM data and docker socket
│ └── default/
│ ├── colima.yaml # VM configuration
│ └── docker.sock # Docker socket
└── backups/ # Database backups
Adding a New Service
The process is always the same:
- Add the service to docker-compose.yml on the homelab network:
myservice:
image: someimage:latest
container_name: myservice
restart: unless-stopped
volumes:
- ./volumes/myservice_data:/data
networks:
- homelab
- Add a reverse proxy entry in caddy/Caddyfile:
myservice.home.us {
tls internal
reverse_proxy myservice:<port>
}
- Add a DNS record for myservice.home.us pointing to the Mac Mini (or use a wildcard *.home.us record).
- Optionally add it to the dashboard in homepage/config/services.yaml.
- Deploy:
docker compose up -d
That’s it. The new service gets automatic TLS, a clean URL, and shows up on the dashboard.
Troubleshooting Tips
Colima Won’t Start
The most common issue after an unclean shutdown (power loss, force reboot) is a stale disk lock:
# Error: "failed to run attach disk "colima", in use by instance "colima""
# Fix: Remove the stale lock
rm -f colima/_lima/_disks/colima/in_use_by
# Then retry
COLIMA_HOME=/Volumes/ExternalHome/Homelab/colima colima start
Also check that both /Volumes/ExternalHome and /Volumes/Immich are mounted — Colima’s VM config includes both as virtiofs mounts, and it will fail to start if either path doesn’t exist.
Container Can’t Access NAS Volume
If Immich reports upload errors, verify the NAS mount is available:
ls /Volumes/Immich
If it’s not mounted, re-run the MountNASVolumes automation or manually mount it.
Check Logs
# Colima auto-start log
cat /tmp/colima-autostart.log
# Lima VM stderr (the real error when Colima shows generic "exit status 1")
cat colima/_lima/colima/ha.stderr.log
# Container logs
docker logs immich_server
docker logs caddy
Cost Breakdown
One of the best things about a Mac Mini homelab is the running cost:
| Item | Cost |
|---|---|
| Mac Mini M4 | One-time purchase |
| External Thunderbolt SSD | One-time purchase |
| Electricity (~5-15W idle) | ~$1-3/month |
| Backblaze B2 storage | $6/TB/month |
| Twingate (free tier) | $0/month |
| Domain name (optional) | $0 (using local .home.us) |
Compare that to a typical homelab server drawing 100-300W, and the Mac Mini pays for itself in electricity savings within a year or two.
What I’d Do Differently
- Move secrets out of docker-compose.yml — Twingate tokens and any other credentials should live in .env files or a proper secrets manager, not inline in compose files.
- Automate database backups — A cron job or scheduled container that dumps the Immich PostgreSQL database regularly to the backups/ directory, then syncs to Backblaze.
- Consider Tailscale as a Twingate alternative — Tailscale is another great option for remote access with a generous free tier and simpler setup for personal use.
Final Thoughts
You don’t need a rack, a hypervisor, or enterprise hardware to run a capable homelab. A Mac Mini with an external drive, Docker via Colima, and a few well-chosen services gives you:
- A self-hosted photo library with AI search (Immich)
- Automatic internal TLS for all services (Caddy)
- A clean dashboard (Homepage)
- Remote access from anywhere (Twingate)
- Offsite backups (Backblaze B2)
- Auto-start on boot (launchd)
- Near-zero noise and minimal power draw
The entire setup is defined in a single docker-compose.yml and a handful of config files. It’s portable — unplug the Thunderbolt drive, plug it into another Mac, set two environment variables, and you’re running again.
If you’ve been on the fence about starting a homelab because the typical x86/Proxmox route feels like overkill, give the Mac Mini approach a try. It’s simpler than you think.