The CI/CD pipeline problem is not a lack of tools. It’s that your pipeline only runs in one place — GitHub Actions, GitLab CI, or Jenkins — and the configuration is tightly coupled to that platform. When you need to run the same steps locally, you’re scripting around it. When you switch providers, you rewrite everything. Dagger.io proposes a different model: define your pipeline as code using a real programming language, run it identically anywhere a container runtime exists.
After two years of production use across teams ranging from solo developers to enterprise engineering organizations, enough real-world patterns have accumulated around Dagger to evaluate it honestly. This article covers the architecture, practical usage, the genuine advantages, and the cases where the added complexity is not worth it.
The Core Problem Dagger Solves
YAML-based CI/CD has a fundamental limitation: it is not a programming language. You cannot easily abstract repeated patterns, write unit tests for pipeline logic, use type checking, or import libraries. Complex pipelines become hundreds of lines of YAML with nested conditionals and shell scripts embedded in strings.
The workarounds — reusable workflows, shared actions, pipeline templates — help but don’t eliminate the problem. The pipeline still only runs in the CI environment. Local debugging means either pushing commits to trigger CI or carefully reconstructing the environment with Docker and shell scripts.
Dagger’s answer is to treat the pipeline as an application. You write it in Go, Python, TypeScript, or PHP. You run it locally with dagger call. The same call works in GitHub Actions, GitLab CI, CircleCI, or any environment that can run a container. The pipeline is portable because Dagger normalizes execution through its own container engine.
How Dagger Works Architecturally
Dagger runs a local container engine (the Dagger Engine) that handles all pipeline execution. Your pipeline code communicates with the engine via a GraphQL API over a Unix socket. When you write a Dagger function in Go, you’re building a client that constructs a DAG (directed acyclic graph) of operations and submits it to the engine.
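That GraphQL layer is not hidden: the Dagger CLI exposes it directly through `dagger query`, which is useful for poking at the engine without writing any SDK code. A minimal hand-written query might look like this (a sketch; it assumes a local engine is available and uses the core `container` API):

```shell
dagger query <<'EOF'
{
  container {
    from(address: "alpine:3.19") {
      withExec(args: ["echo", "hello from the engine"]) {
        stdout
      }
    }
  }
}
EOF
```

The SDKs generate this kind of query from chained method calls: `From` and `WithExec` in Go map onto the `from` and `withExec` fields above.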
The engine handles:
- Caching at every layer — not just Docker layer caching, but full content-addressed caching of any pipeline step’s outputs
- Parallel execution of independent DAG branches automatically
- Secrets management — secrets are injected into the engine and never written to disk or logs
- Cross-platform execution — the same pipeline runs on Linux, macOS, and in CI
This is meaningfully different from “run a Docker container in CI.” The Dagger Engine is aware of the entire DAG and can make global optimization decisions — parallelizing steps, sharing cache layers across unrelated pipeline runs, and propagating secrets without exposing them.
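A rough mental model for that content-addressed cache: each step's cache key is a digest of its complete inputs (base image, source digest, the command itself), so identical inputs hit the cache regardless of which branch or pipeline produced them. This is an illustrative sketch in plain Go, not Dagger's actual implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// cacheKey derives a deterministic key from a step's complete inputs:
// the operation itself plus everything it depends on. If nothing in
// this set changes, a previously cached output can be reused.
func cacheKey(op string, inputs map[string]string) string {
	parts := []string{op}
	for k, v := range inputs {
		parts = append(parts, k+"="+v)
	}
	sort.Strings(parts) // order-independent: same inputs, same key
	sum := sha256.Sum256([]byte(strings.Join(parts, "\n")))
	return hex.EncodeToString(sum[:])
}

func main() {
	base := cacheKey("go build ./...", map[string]string{
		"image":  "golang:1.22-alpine",
		"source": "sha256:abc123", // digest of the source directory
	})
	same := cacheKey("go build ./...", map[string]string{
		"source": "sha256:abc123",
		"image":  "golang:1.22-alpine",
	})
	changed := cacheKey("go build ./...", map[string]string{
		"image":  "golang:1.22-alpine",
		"source": "sha256:def456", // source changed: new key, cache miss
	})
	fmt.Println(base == same, base == changed) // prints: true false
}
```

Because the key depends only on content, two unrelated pipeline runs that build the same dependencies share cache entries automatically.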
Writing a Real Pipeline in Go
A Dagger module is a Go (or Python/TypeScript) package that exports functions. Here is a complete, realistic pipeline for a Go web service:
```go
// main.go — Dagger pipeline for a Go service.
// The dagger/... import path and the global dag client are generated
// by `dagger init` / `dagger develop` for a module named "pipeline".
package main

import (
	"context"
	"fmt"

	"dagger/pipeline/internal/dagger"
)

type Pipeline struct{}

// base returns a Go toolchain container with the source mounted and
// the module and build caches attached. GOPATH in the golang images
// is /go, so the module cache lives at /go/pkg/mod.
func (p *Pipeline) base(src *dagger.Directory) *dagger.Container {
	return dag.Container().
		From("golang:1.22-alpine").
		WithDirectory("/src", src).
		WithWorkdir("/src").
		WithMountedCache("/go/pkg/mod", dag.CacheVolume("go-mod")).
		WithMountedCache("/root/.cache/go-build", dag.CacheVolume("go-build"))
}

// Test runs all unit and integration tests
func (p *Pipeline) Test(ctx context.Context, src *dagger.Directory) (string, error) {
	return p.base(src).
		WithExec([]string{"go", "test", "./...", "-race", "-count=1"}).
		Stdout(ctx)
}

// Build compiles the binary and packages it into a minimal image
func (p *Pipeline) Build(src *dagger.Directory) *dagger.Container {
	// Build stage
	binary := p.base(src).
		WithEnvVariable("CGO_ENABLED", "0").
		WithEnvVariable("GOOS", "linux").
		WithExec([]string{"go", "build", "-ldflags=-s -w", "-o", "/out/server", "./cmd/server"}).
		File("/out/server")

	// Runtime image — distroless for minimal attack surface
	return dag.Container().
		From("gcr.io/distroless/static-debian12:nonroot").
		WithFile("/server", binary).
		WithEntrypoint([]string{"/server"})
}

// Publish builds, tests, and pushes the image to a registry
func (p *Pipeline) Publish(
	ctx context.Context,
	src *dagger.Directory,
	registry string,
	tag string,
	registryUser string,
	registryToken *dagger.Secret,
) (string, error) {
	// Test first
	if _, err := p.Test(ctx, src); err != nil {
		return "", fmt.Errorf("tests failed: %w", err)
	}
	ref := fmt.Sprintf("%s:%s", registry, tag)
	return p.Build(src).
		WithRegistryAuth(registry, registryUser, registryToken).
		Publish(ctx, ref)
}
```
// Publish builds, tests, and pushes the image to a registry
func (p *Pipeline) Publish(
ctx context.Context,
src *dagger.Directory,
registry string,
tag string,
registryUser string,
registryToken *dagger.Secret,
) (string, error) {
client, err := dagger.Connect(ctx)
if err != nil {
return "", err
}
defer client.Close()
// Test first
_, err = p.Test(ctx, src)
if err != nil {
return "", fmt.Errorf("tests failed: %w", err)
}
image, err := p.Build(ctx, src)
if err != nil {
return "", err
}
ref := fmt.Sprintf("%s:%s", registry, tag)
return image.
WithRegistryAuth(registry, registryUser, registryToken).
Publish(ctx, ref)
}
Run this locally:
```shell
# Run tests locally — identical to CI
dagger call test --src=.

# Build and push to registry with a secret token
dagger call publish \
  --src=. \
  --registry=ghcr.io/myorg/myapp \
  --tag=latest \
  --registry-user=myuser \
  --registry-token=env:REGISTRY_TOKEN
```
Integrating with GitHub Actions
The CI integration is intentionally minimal. Dagger handles the actual work; the CI system just triggers it:
```yaml
name: CI
on: [push, pull_request]

jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # BIN_DIR puts the binary on a directory already in the runner's PATH;
      # the default install location (./bin) is not on the PATH
      - name: Install Dagger CLI
        run: curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=$HOME/.local/bin sh

      - name: Run tests
        run: dagger call test --src=.

      - name: Build and publish
        if: github.ref == 'refs/heads/main'
        env:
          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
        run: |
          dagger call publish \
            --src=. \
            --registry=ghcr.io/${{ github.repository }} \
            --tag=${{ github.sha }} \
            --registry-user=${{ github.actor }} \
            --registry-token=env:REGISTRY_TOKEN
```
This same dagger call command works identically in GitLab CI, CircleCI, or a local terminal. The GitHub Actions file is essentially just a trigger with credentials passed in.
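To make the portability claim concrete, here is what the equivalent GitLab CI job might look like. This is a hypothetical sketch: it assumes a runner where Docker is reachable via the `docker:dind` service and the same Dagger module sits in the repository root.

```yaml
# .gitlab-ci.yml — same pipeline, different trigger
test:
  image: docker:27-cli
  services:
    - docker:27-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - apk add --no-cache curl
    - curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=/usr/local/bin sh
  script:
    - dagger call test --src=.
```

The `script:` line is the only part that carries pipeline logic, and it is identical to the GitHub Actions step; everything else is plumbing to give the job a container runtime.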
The Caching Advantage
Dagger’s caching is one of its most underappreciated features. Cache volumes are content-addressed and persistent across pipeline runs. The Go module cache in the example above (go-mod volume) persists between every run on that machine. In CI, this means the second run of a pipeline with unchanged dependencies completes the dependency download phase in under a second.
More importantly, cache is shared across branches. If your main branch and a feature branch both depend on the same dependencies, they share the cache. With traditional layer caching in Docker or GitHub Actions cache actions, cache is typically per-branch.
```go
// Named cache volumes — shared across all pipeline runs on this engine
client.CacheVolume("go-mod")          // Go modules
client.CacheVolume("node-modules")    // npm/yarn packages
client.CacheVolume("gradle-cache")    // Gradle build cache
client.CacheVolume("cargo-registry")  // Rust crate registry
```
Dagger Cloud: Shared Cache Across Machines
The free tier of Dagger Cloud provides distributed cache sharing. Cache entries written on one CI runner are available to all other runners. This eliminates the cold-cache problem for horizontally-scaled CI workers — a significant improvement over GitHub Actions’ cache actions, which are per-repo and size-limited.
```shell
# Set the DAGGER_CLOUD_TOKEN environment variable and cache is automatically shared
export DAGGER_CLOUD_TOKEN=dag_token_xxxxx

# Now all pipeline runs on any machine share cache
dagger call test --src=.
```
When Dagger Is Worth the Investment
Dagger adds a layer of abstraction and a learning curve. It’s worth the investment when:
- Pipeline complexity is high — more than 50 lines of YAML, complex conditional logic, multiple reusable components
- Local debugging is painful — your team regularly pushes “fix CI” commits because the pipeline only runs in CI
- Pipelines span multiple platforms — the same steps need to run on different CI systems (e.g., internal Jenkins and GitHub Actions)
- You write Go, Python, or TypeScript — and want to use real programming constructs for pipeline logic
When Standard YAML CI Is Still Better
For simple projects, Dagger adds friction without proportional benefit:
- A small project with 20 lines of GitHub Actions YAML and three steps (test, build, deploy) does not benefit from being rewritten as a Dagger module
- Teams unfamiliar with the Dagger model will have a steeper onboarding curve
- If you never need to run pipeline steps locally, the portability advantage is irrelevant
Practical Adoption Strategy
The most effective way to adopt Dagger is incrementally:
- Start with the most painful part of your pipeline — the step that’s hardest to run locally or most brittle in CI
- Extract that step into a Dagger function and call it from your existing YAML CI file
- Verify it works identically locally and in CI
- Expand to adjacent steps as the value becomes clear
You don’t need to rewrite your entire pipeline at once. Dagger is designed to coexist with existing CI configuration rather than requiring a full migration.
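Concretely, the second step can be as small as swapping one `run:` line in the existing workflow while every other step stays untouched. A hypothetical sketch, where `integration-test` stands in for whichever function you extracted:

```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Existing steps stay exactly as they were
      - run: make lint
      - run: make unit-test
      # Only the extracted step goes through Dagger; the same command
      # now also works on a laptop
      - name: Integration tests via Dagger
        run: |
          curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=$HOME/.local/bin sh
          dagger call integration-test --src=.
```

Once this step proves itself, neighboring steps can migrate one at a time without a flag-day cutover.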
Conclusion
Dagger solves a real problem: the gap between local development and CI execution, and the limitations of YAML as a programming language for complex pipelines. The architecture is technically sound — a container-native execution engine with content-addressed caching and a real language API is a significant improvement over YAML-embedded shell scripts.
The trade-off is complexity. Adopting Dagger requires learning the SDK, understanding the engine model, and accepting that your pipeline is now application code that needs the same maintenance as any other code. For teams whose pipelines have grown painful, that trade-off is clearly worthwhile. For simple projects with straightforward CI needs, standard YAML workflows remain the pragmatic choice.
The trend toward programmable CI infrastructure is clear. Dagger is the most mature implementation of this approach, and the investment in learning it pays dividends as pipeline complexity grows.
