Automate Dagster User Deployments with Flux: A GitOps Approach
User-specific deployments in a data orchestration platform like Dagster can become brittle and hard to manage at scale, especially when user pipelines evolve independently. In this post, we’ll show how to automate Dagster user deployments using Flux, Helm, and an S3-backed ConfigMap to externalize the user deployment configuration. We’ll also walk through a purpose-built GitHub Actions pipeline that updates the config and lets Flux reconcile the change automatically.
Whether you’re orchestrating data pipelines for internal teams or delivering managed environments to users, this setup ensures deployments are dynamic, declarative, and version-controlled—the GitOps trifecta.
Approach Summary
Automate Dagster user deployments by externalizing workspace.yaml in S3 and controlling updates via GitHub Actions. Flux continuously syncs both the Dagster HelmRelease and the centralized config from S3, ensuring GitOps consistency across environments.
Dagster Deployment with an External ConfigMap
We deploy Dagster to our Kubernetes cluster using Flux’s HelmRelease CRD. The magic lies in setting workspace.enabled: true and sourcing Dagster’s workspace.yaml from an external ConfigMap (dagster-workspace-yaml):
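A minimal sketch of such a HelmRelease, assuming Flux’s HelmRelease v2 API and the Dagster chart’s dagsterWebserver.workspace.externalConfigmap value; the release name, namespace, and intervals are placeholders:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: dagster
  namespace: dagster
spec:
  interval: 10m
  chart:
    spec:
      chart: dagster
      sourceRef:
        kind: HelmRepository
        name: dagster           # assumes a HelmRepository pointing at
        namespace: flux-system  # https://dagster-io.github.io/helm
  values:
    dagsterWebserver:
      workspace:
        enabled: true
        # Point the webserver at the externally managed ConfigMap
        # instead of letting the chart render its own workspace.yaml.
        externalConfigmap: dagster-workspace-yaml
```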
Scripts to Update the ConfigMap and Upload to S3
To ensure Dagster deployments are dynamic and self-service, we maintain a shared workspace.yaml in an S3 bucket. Two purpose-built scripts help safely modify this configuration and push updates without race conditions:
A Python script that:

- Locates the embedded workspace.yaml string inside a Kubernetes ConfigMap manifest
- Deserializes its contents using ruamel.yaml
- Adds or removes a grpc_server block for a user code server, based on --action, --host, --port, and --location-name
- Overwrites the manifest in-place so it's ready to apply
It’s smart enough to avoid duplicates, skip if no change is needed, and gracefully handle malformed or empty input.
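Here is a condensed sketch of what such a patcher can look like; the script name, the duplicate check, and the assumption that the manifest stores the document under data["workspace.yaml"] are ours, not from the original scripts:

```python
#!/usr/bin/env python3
"""Sketch of the ConfigMap patcher (patch_workspace.py is a placeholder name).

Assumes the manifest keeps the embedded document under data["workspace.yaml"]
and that the workspace uses Dagster's load_from/grpc_server schema.
"""
import argparse
import sys
from io import StringIO

from ruamel.yaml import YAML

yaml = YAML()
yaml.preserve_quotes = True


def patch(manifest_path, action, host, port, location):
    with open(manifest_path) as f:
        manifest = yaml.load(f)
    if not manifest or "workspace.yaml" not in manifest.get("data", {}):
        sys.exit(f"{manifest_path}: no embedded workspace.yaml found")

    # The embedded value is itself a YAML document: deserialize before editing.
    workspace = yaml.load(manifest["data"]["workspace.yaml"]) or {}
    servers = workspace.setdefault("load_from", [])
    matches = [s for s in servers
               if s.get("grpc_server", {}).get("location_name") == location]

    if action == "add":
        if matches:  # avoid duplicates
            print(f"{location} already registered; nothing to do")
            return
        servers.append({"grpc_server": {
            "host": host, "port": port, "location_name": location}})
    else:  # action == "remove"
        if not matches:  # skip if no change is needed
            print(f"{location} not registered; nothing to do")
            return
        workspace["load_from"] = [s for s in servers if s not in matches]

    # Serialize the patched workspace back into the manifest, in place.
    buf = StringIO()
    yaml.dump(workspace, buf)
    manifest["data"]["workspace.yaml"] = buf.getvalue()
    with open(manifest_path, "w") as f:
        yaml.dump(manifest, f)


if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("manifest")
    p.add_argument("--action", choices=["add", "remove"], required=True)
    p.add_argument("--host", required=True)
    p.add_argument("--port", type=int, required=True)
    p.add_argument("--location-name", required=True)
    args = p.parse_args()
    patch(args.manifest, args.action, args.host, args.port, args.location_name)
```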
A Bash wrapper that makes the process atomic and safe:
- Fetches the latest workspace.yaml and its ETag from S3
- Downloads and runs the Python patcher locally
- Uploads the patched file back to S3 only if the ETag matches (via --if-match), preventing conflicting writes
- Retries the whole operation with backoff if an ETag collision occurs
This ensures that even in multi-developer or multi-pipeline environments, your S3-stored configuration remains consistent and race-free.
If S3 versioning is enabled on the bucket, every update becomes trackable and reversible—great for audit trails or rollbacks.
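A sketch of that wrapper, assuming an AWS CLI recent enough to expose S3 conditional writes (put-object --if-match); the bucket, object key, and script names are placeholders:

```bash
#!/usr/bin/env bash
# Sketch of the wrapper; update_workspace.sh, bucket, and key are placeholders.
set -euo pipefail

BUCKET="my-dagster-config"              # assumption: your config bucket
KEY="dagster-workspace-configmap.yaml"  # assumption: the manifest object key

for attempt in 1 2 3 4 5; do
  # Fetch the current manifest and remember its ETag.
  etag=$(aws s3api get-object --bucket "$BUCKET" --key "$KEY" \
           /tmp/manifest.yaml --query ETag --output text)

  # Patch it locally with the Python script described above.
  python3 patch_workspace.py /tmp/manifest.yaml \
    --action "$1" --host "$2" --port "$3" --location-name "$4"

  # Upload only if nobody else wrote the object in the meantime.
  if aws s3api put-object --bucket "$BUCKET" --key "$KEY" \
       --body /tmp/manifest.yaml --if-match "$etag"; then
    echo "workspace ConfigMap updated"
    exit 0
  fi
  echo "ETag conflict, retrying in $((attempt * 2))s..." >&2
  sleep $((attempt * 2))
done

echo "giving up after 5 attempts" >&2
exit 1
```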
GitHub Actions Pipeline for User Deployments
The user deployment repo, which holds the actual pipeline code, triggers a pipeline on push. It builds the Dagster user code image, deploys it with Helm (the dagster-user-deployments chart), and then updates workspace.yaml in S3.
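A condensed sketch of such a workflow; the registry, deployment name, and script path are placeholders, and registry/cluster credentials handling is omitted:

```yaml
# Sketch of the pipeline; names and registry are assumptions, not from the post.
name: deploy-user-code
on:
  push:
    branches: [main]

env:
  IMAGE: ghcr.io/example-org/user-code   # assumption: your registry/image

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push the Dagster user image
        run: |
          docker build -t "$IMAGE:${GITHUB_SHA::7}" .
          docker push "$IMAGE:${GITHUB_SHA::7}"

      - name: Deploy the user code server with Helm
        run: |
          helm repo add dagster https://dagster-io.github.io/helm
          helm upgrade --install team-a dagster/dagster-user-deployments \
            --namespace dagster \
            --set "deployments[0].name=team-a" \
            --set "deployments[0].image.repository=$IMAGE" \
            --set "deployments[0].image.tag=${GITHUB_SHA::7}" \
            --set "deployments[0].port=3030"

      - name: Register the code location in the S3-backed workspace.yaml
        run: ./update_workspace.sh add team-a 3030 team-a
```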
Flux to Install and Reconcile the ConfigMap Using a Bucket Source
The workspace.yaml lives in an AWS S3 bucket. Flux tracks this object using the Bucket source and applies it into the cluster via a Kustomization. This allows updates to be pushed outside Git and still reconciled declaratively:
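A sketch of the two resources, assuming the Bucket v1beta2 API and AWS credentials wired up separately (for example via IAM roles); names, region, and intervals are placeholders:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: Bucket
metadata:
  name: dagster-workspace
  namespace: flux-system
spec:
  interval: 1m
  provider: aws
  bucketName: my-dagster-config   # assumption: your config bucket
  endpoint: s3.amazonaws.com
  region: us-east-1               # assumption
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dagster-workspace
  namespace: flux-system
spec:
  interval: 1m
  sourceRef:
    kind: Bucket
    name: dagster-workspace
  path: ./
  prune: true
  targetNamespace: dagster   # apply the ConfigMap next to Dagster
```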
Refresh Dagster UI to Load New Deployments
After the updated workspace.yaml ConfigMap is applied to the cluster by Flux as part of its reconcile process, the Dagster UI will not reflect the changes immediately. To ensure the new user code servers are recognized:
1. Navigate to the Deployments tab in the Dagster UI.
2. Click Reload All to trigger a full refresh of the workspace.yaml.
3. The updated list of code locations will appear, including any newly added or removed deployments.
Dagster dynamically reads the workspace.yaml on reload, so there's no need to restart the instance; a single click activates the latest config.