We’ve recently started bringing Ansible into our deployment pipeline — and it’s changed how we think about shipping. No more SSHing into servers and running commands by hand. No more wondering which version is actually running. Push a tag, and it goes.
This post walks through the exact setup: a Go app, a VPS, and a pipeline where GitHub Actions builds your binary, Ansible deploys it, and Pebble manages the running service.
So let’s get to it!
The Stack
GitHub Actions handles the build and orchestrates the whole pipeline. It compiles your Go binary in CI, packages it, and triggers Ansible. We won’t go deep on GitHub Actions here — their official docs are excellent. What matters is: it’s the thing that kicks everything else off.
Ansible handles the actual deployment to your server — copying the binary, stopping the old service, starting the new one. We’ll spend a bit more time on this one since it might be new to you.
Pebble manages your service on the server. Think of it as a lightweight service manager — you define your app as a Pebble service, and then you can pebble start, pebble stop, and check pebble services to see what’s running. We have a full guide on setting up Pebble at Pebble Guide — this post assumes you’ve already done that.
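For reference, a Pebble service is defined by a small YAML layer. A minimal one for the app in this post might look like the sketch below (the layer name, paths, and service name are illustrative; see the Pebble guide for the real setup):

```yaml
# A minimal Pebble layer (illustrative names and paths)
summary: myapp layer
services:
  myapp:
    override: replace
    summary: Go API server
    command: /opt/myapp/myapp_linux_arm64
    startup: enabled
```

With a layer like this in place, `pebble start myapp` and `pebble stop myapp` behave the way the playbook later in this post expects.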
How the Pieces Fit Together
Before diving into the files, it helps to understand the control flow.
- GitHub Actions runs in GitHub’s cloud — this is our CI environment
- It builds the Go binary
- It then runs Ansible from the CI runner
- Ansible connects over SSH to our VPS
- On the VPS, Pebble manages the running service
So the chain looks like this:
Figure 1.0: Workflow from tag creation to the running app on your VPS
The CI runner acts as the Ansible control node. The VPS is the managed host. Pebble runs entirely on the VPS and is responsible for keeping the process alive.
What We’re Deploying
A minimal Go API using Fiber:
package main

import (
	"log"

	"github.com/gofiber/fiber/v3"
)

func main() {
	app := fiber.New()

	app.Get("/", func(c fiber.Ctx) error {
		return c.SendString("Hello, World!")
	})

	log.Fatal(app.Listen(":3000"))
}
One endpoint. One binary. The app is simple on purpose — the focus is the pipeline.
go mod init github.com/your-org/your-app   # or github.com/<your-username>/your-app
go get github.com/gofiber/fiber/v3
A Quick Word on Ansible
Before we get into the playbook, it’s worth understanding what Ansible actually is and why we reached for it.
Ansible is an agentless automation tool. There’s nothing to install on your server — it connects over plain SSH and runs tasks in sequence. Your deployment steps are written in YAML (called a playbook), which means they’re readable, version-controlled, and auditable.
There are a few key components:
- Control node — the machine running Ansible (in our case, the GitHub Actions runner)
- Inventory — defines which servers to connect to and how
- Playbook — a YAML file describing the desired state
- Modules — the building blocks Ansible uses (copy, file, command, etc.)
Figure 1.1: Deployment pipeline — from a tagged release on the developer’s machine, through the GitHub Actions runner acting as the Ansible control node, to a running service managed by Pebble on the VPS
One important concept worth knowing is idempotency. It means you can run the same playbook multiple times and end up with the same result. A properly written playbook describes the desired state, not just a sequence of shell commands. If nothing has changed, Ansible reports no changes. That predictability is what makes infrastructure manageable over time.
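To make idempotency concrete, compare a state-describing task with a raw shell command (a sketch using the standard file and command modules):

```yaml
# Idempotent: describes a state. The first run reports "changed",
# every run after that reports "ok".
- name: Ensure deploy directory exists
  file:
    path: /opt/myapp
    state: directory
    mode: '0755'

# Not idempotent: re-executes every time and always reports "changed",
# even when the directory already exists.
- name: Create deploy directory
  command: mkdir -p /opt/myapp
```

Both tasks end in the same place, but only the first one tells you whether anything actually changed.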
Project Structure
your-repo/
├── .github/
│   └── workflows/
│       └── deploy.yml
├── main.go
├── go.mod
├── go.sum
└── scripts/
    └── ansible/
        ├── deploy.yml
        └── ansible_inventory/
            └── hosts.ini   # generated at deploy time, never commit this!
Note that hosts.ini is generated dynamically in the GitHub Actions workflow from secrets — it’s never committed to the repo. Please make sure your own directory structure matches the paths referenced in the workflow.
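For local testing you can hand-write an equivalent inventory with placeholder values (the IP below is from the documentation range; never commit the real file):

```ini
# scripts/ansible/ansible_inventory/hosts.ini (placeholders only)
[vps]
203.0.113.10 ansible_user=deploy ansible_port=22 ansible_ssh_private_key_file=~/.ssh/id_ed25519
```

This is exactly the shape the workflow generates from secrets at deploy time.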
The Ansible Playbook
This is what actually runs on your server during a deploy:
# scripts/ansible/deploy.yml
- name: Deploy My Go App
  hosts: "{{ custom_hosts | default('vps') }}"
  vars:
    deploy_dir: "/opt/myapp"
    app_user: "deploy"
    pebble_service_name: "myapp"
    binary_name: "myapp_linux_arm64"

  tasks:
    - name: Stop pebble service
      become: yes
      command: pebble stop {{ pebble_service_name }}
      ignore_errors: yes

    - name: Ensure deploy directory exists
      file:
        path: "{{ deploy_dir }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: '0755'

    - name: Upload application binary
      copy:
        src: "{{ binary_name }}"
        dest: "{{ deploy_dir }}/{{ binary_name }}"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: '0755'

    - name: Start pebble service
      become: yes
      command: pebble start {{ pebble_service_name }}

    - name: Wait for service to start
      pause:
        seconds: 5

    - name: Verify service is running
      command: pebble services {{ pebble_service_name }}
      register: service_status
      changed_when: false

    - name: Display final status
      debug:
        msg: |
          Deployment complete!
          Service: {{ pebble_service_name }}
          Version: {{ app_version | default('manual-deploy') }}
          Status: {{ service_status.stdout_lines | last }}
Here’s what’s happening:
- Stop the running Pebble service (ignore_errors: yes handles the case where it isn’t running yet on first deploy)
- Ensure the deploy directory exists with the right permissions
- Copy the new binary up
- Start the service again via Pebble
- Wait 5 seconds as a safety buffer, then verify it came up
In production, a better approach would be to poll a health-check endpoint rather than rely on a fixed pause; the pause keeps this example minimal.
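That swap is straightforward with Ansible's built-in uri module, retrying until the app answers (a sketch, assuming the app's root path doubles as a health check on port 3000):

```yaml
# Replaces the fixed "pause" task: poll until the app responds with 200,
# up to 10 attempts, 3 seconds apart.
- name: Wait for service to become healthy
  uri:
    url: "http://localhost:3000/"
    status_code: 200
  register: health
  until: health.status == 200
  retries: 10
  delay: 3
```

If the service never comes up, the task fails and the deploy stops, instead of silently "succeeding" after a sleep.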
The GitHub Actions Workflow
Three jobs: test, build, deploy. Each depends on the previous.
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    tags:
      - 'v*'
  workflow_dispatch:

env:
  GO_VERSION: '1.22'
  APP_NAME: 'myapp'

jobs:
  test:
    name: Run Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ env.GO_VERSION }}
      - name: Run go vet
        run: go vet ./...
      - name: Run unit tests
        run: go test ./...

  build:
    name: Build Binary
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ env.GO_VERSION }}
      - name: Build Linux binary
        run: |
          GOOS=linux GOARCH=arm64 go build -o ${{ env.APP_NAME }}_linux_arm64 .
      - name: Package for deployment
        run: |
          mkdir -p deployment_package
          cp ${{ env.APP_NAME }}_linux_arm64 deployment_package/
          echo "Version: ${{ github.ref_name }}" > deployment_package/VERSION.txt
          echo "Commit: ${{ github.sha }}" >> deployment_package/VERSION.txt
          echo "Deployed: $(date -u)" >> deployment_package/VERSION.txt
      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: deployment-package
          path: deployment_package/
          retention-days: 30

  deploy:
    name: Deploy to VPS
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Download deployment package
        uses: actions/download-artifact@v4
        with:
          name: deployment-package
          path: deployment_package/
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -p ${{ secrets.SSH_PORT }} ${{ secrets.SSH_HOST }} >> ~/.ssh/known_hosts
      - name: Create Ansible inventory
        run: |
          mkdir -p scripts/ansible/ansible_inventory
          cat > scripts/ansible/ansible_inventory/hosts.ini << EOF
          [vps]
          ${{ secrets.SSH_HOST }} ansible_user=${{ secrets.SSH_USER }} ansible_port=${{ secrets.SSH_PORT }} ansible_ssh_private_key_file=~/.ssh/id_ed25519
          EOF
      - name: Install Ansible
        run: |
          sudo apt update
          sudo apt install -y ansible
      - name: Copy binary to Ansible directory
        run: |
          cp deployment_package/${{ env.APP_NAME }}_linux_arm64 scripts/ansible/
      - name: Run Ansible playbook
        run: |
          cd scripts/ansible
          ansible-playbook -i ansible_inventory/hosts.ini deploy.yml \
            -e "app_version=${{ github.ref_name }}" \
            -e "ansible_become_pass=${{ secrets.SERVER_PASSWORD }}"
Having made it this far, here are a few things worth noting:
The trigger is a version tag. push: tags: - 'v*' means this only runs when you push a tag like v1.0.0, not on every commit. You also stay in control of when a deploy happens: workflow_dispatch lets you trigger one manually from the GitHub UI.
We build for linux/arm64. The GOOS=linux GOARCH=arm64 flags tell Go to cross-compile for an ARM64 Linux server — which is what our VPS runs. If your server is x86-based (Intel or AMD, 64-bit), just change GOARCH to amd64. amd64 is the umbrella term for the 64-bit x86 architecture, regardless of the chip manufacturer.
The inventory is generated, not committed. Your server’s IP, user, and port come from GitHub secrets and get written to hosts.ini at deploy time. Nothing sensitive ever touches your repo.
Artifacts pass the binary between jobs. Jobs don’t share a filesystem — the build job uploads the binary as an artifact, the deploy job downloads it.
Production Considerations
This setup works well for a single-VPS deployment, but there are a few things worth refining before you call it production-grade:
- Passwordless sudo — instead of passing ansible_become_pass, configure limited passwordless sudo for the deploy user
- Restricted privileges — limit which commands the deploy user can run with sudo
- SSH hardening — disable password authentication, key-based auth only
- Secret management — for larger systems, consider Ansible Vault or an external secrets manager
- Never commit inventory files — hosts.ini contains your server address and username. Always generate it at deploy time from secrets, never check it into the repo
The goal here is to reduce blast radius while keeping automation intact.
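For instance, the passwordless-sudo item can be a small sudoers drop-in that whitelists only the Pebble commands the playbook needs. This is a sketch: the pebble binary path and service name are assumptions, so check yours with `which pebble`, and always validate the file with `visudo -cf` before installing it:

```
# /etc/sudoers.d/deploy (illustrative)
deploy ALL=(root) NOPASSWD: /usr/bin/pebble start myapp, /usr/bin/pebble stop myapp
```

With this in place, the playbook's become steps work without SERVER_PASSWORD, and the deploy user still can't run arbitrary commands as root.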
Secrets You Need in GitHub
Go to your repo → Settings → Secrets and variables → Actions:
| Secret | What It Is |
|---|---|
| SSH_PRIVATE_KEY | Private key used to SSH into your server |
| SSH_HOST | Your server’s IP or hostname |
| SSH_PORT | SSH port (usually 22, but use whatever port your server listens on) |
| SSH_USER | The user Ansible connects as |
| SERVER_PASSWORD | The sudo password, used for privilege escalation |
Deploying
Once everything is wired up:
git tag v0.0.1
git push origin v0.0.1
Now you can watch the Actions tab. Tests run, the binary gets built and packaged, Ansible copies it to your server and Pebble restarts the service. The final task prints the service status so you know it came up clean.
What This Gives You
Deployment stops being a ritual and becomes a commit.
No SSH sessions. That means no guessing what’s running. No tribal knowledge.
Just a tagged release and a reproducible pipeline.
From here you can extend this with rollbacks, multi-environment deployments, blue/green strategies, or automated health checks. But the baseline is already in place: predictable, auditable, controlled releases.