I don’t fully understand the problem this is trying to solve. Or at least, if this solves your problem then it feels like you have bigger problems?
If you have staging/production deployments in CI/CD and have your Kubernetes clusters managed in code, then adding feature deployments is not any different from what you have done already. Paying for a third party app seems (to me) both a waste of money and a problem waiting to happen.
How we do it: for a given Helm chart, we have three sets of value files: prod, staging, and preview. An Argo application exists for each prod, staging, and preview instance.
When a new branch is created, a pipeline runs that renders a new preview chart (with some variables based on the branch/tag name), creates a new Argo application, and commits this to the Kubernetes repo. Argo picks it up, deploys it to the appropriate cluster, and that’s it. Ingress hostnames get picked up and DNS records get created.
When the branch gets deleted, a job runs to remove the Argo application, and we’re done.
It’s the same for staging and production. I really wouldn’t want a different deployment pipeline for preview environments; that just increases complexity and the chances of things going wrong.
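Roughly, the per-branch Application the pipeline commits looks something like this (a simplified sketch; the names, repo URL, chart path, hostname, and target cluster below are placeholders):

```yaml
# Simplified per-branch Argo CD Application generated by the pipeline.
# All names, URLs, and paths below are illustrative placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-preview-feature-x        # derived from the branch name
  namespace: argocd
spec:
  project: previews
  source:
    repoURL: https://git.example.com/org/myapp.git
    targetRevision: feature-x          # the branch under review
    path: chart
    helm:
      valueFiles:
        - values-preview.yaml          # the "preview" value set
      parameters:
        - name: ingress.host           # assuming the chart exposes such a value
          value: feature-x.preview.example.com
  destination:
    server: https://preview-cluster.example.com:6443
    namespace: myapp-feature-x
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```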
Thanks for the feedback. I agree your setup sounds solid.
What you're describing - different values, Argo apps per branch, pipelines that create and clean up - absolutely works. In my experience though, getting to that point involves a lot of work: Argo setup, application generation per branch, DNS and ingress wiring, and ongoing maintenance. It's reliable once built, but non-trivial to design and keep running.
The scope of what I'm trying to solve is narrower. In my case, Helm chart changes (often from external contributors) were hard to review because validating them meant manually standing up environments or spinning up short-lived clusters. Many reviewers couldn't realistically run the pipeline themselves, so reviews often stalled. That was the core motivation.
This isn't aimed at teams that already have a mature GitOps setup like yours — I agree Chart Preview probably won't add much value there. It's for teams that don't have that infrastructure yet, or don't want to invest in building and maintaining it, and want to get to the same outcome with less effort.
I appreciate you taking the time to describe your approach — it's a good reference point for where many teams eventually end up.
Great way to apply the Kubernetes knowledge you've gathered! But I find the pricing tough, and I don't like giving third-party tools that level of access to my clusters. I know it's early, but I see several problems: right now it seems to be GitHub only, and a lot of people are on self-hosted GitLab. Does it only support Helm, or also Kustomize and raw extra manifests? What about GitOps?
I've built similar solutions for clients, mostly CI-based only, often with Flux/Argo CD support. The thing I found difficult was showing a diff of the rendered manifests while also applying the app. Since I'm not a fan of the rendered-manifests pattern, this often involved extra branches. Is this handled by the app?
Thanks for the thoughtful feedback — these are all fair concerns.
The app doesn’t access your production clusters. Previews run in managed, isolated clusters, and each preview gets its own namespace with deny-all NetworkPolicies, quotas, and automatic teardown. That said, if the concern is about installing charts into any external Kubernetes cluster at all, then I agree this won’t be a fit, and that’s a reasonable constraint.
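For concreteness, the per-namespace guardrails look roughly like the sketch below; the namespace name and quota numbers are illustrative, not the actual limits:

```yaml
# Sketch of the per-preview namespace isolation: a default deny-all
# NetworkPolicy plus a ResourceQuota. Namespace name and limits are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: preview-pr-123
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress             # no allow rules listed, so all traffic is denied
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: preview-quota
  namespace: preview-pr-123
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    pods: "20"
```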
It’s GitHub-first today simply because that’s where I personally hit the problem. GitLab is supported via the REST API using a personal access token that you can scope as tightly as you want, so you can trigger previews from GitLab CI today.
Native GitLab App integration (auto-triggering on MRs, status updates, etc.) is something I’ve thought about, but I wanted to validate the core workflow first.
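As a rough illustration, a GitLab CI job that triggers a preview looks something like the following; the endpoint, payload fields, and token variable are simplified placeholders rather than the exact API:

```yaml
# Illustrative GitLab CI job; the endpoint, payload, and CHART_PREVIEW_TOKEN
# variable are placeholders. Runs only on merge request pipelines.
trigger-chart-preview:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - |
      curl --fail --request POST "https://chart-preview.example.com/api/previews" \
        --header "Authorization: Bearer ${CHART_PREVIEW_TOKEN}" \
        --header "Content-Type: application/json" \
        --data "{\"project\": \"${CI_PROJECT_PATH}\", \"ref\": \"${CI_COMMIT_SHA}\", \"merge_request\": \"${CI_MERGE_REQUEST_IID}\"}"
```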
It is intentionally Helm-only for now. The specific pain I was trying to solve was reviewing Helm changes — values layering, dependencies, and template changes — by seeing them running in a real environment, rather than trying to generalise across all deployment models.
I’m not trying to replace or compete with Flux or Argo CD. The idea is to validate Helm changes before they land in a GitOps repo or get promoted through environments — essentially answering “does this look OK and actually work when deployed, and is it therefore safe to merge?”
It doesn’t expose rendered manifest diffs today, but I agree that would be valuable — especially a readable “what changed after Helm rendering” view tied back to the PR. I’m still thinking through the cleanest way to do that without adding a lot of complexity to the workflow.
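In the meantime, a plain CI job can approximate that view by rendering the chart on the base and head refs and diffing the output. A minimal GitHub Actions sketch, where the chart path and release name are placeholders:

```yaml
# Minimal sketch: render the chart on the PR head and on the base branch, then diff.
# The chart path (./chart) and release name (preview) are placeholders.
name: rendered-manifest-diff
on: pull_request

jobs:
  helm-diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0               # keep full history so the base sha is available
      - name: Render PR head
        run: helm template preview ./chart > /tmp/head.yaml
      - name: Render base branch
        run: |
          git checkout ${{ github.event.pull_request.base.sha }}
          helm template preview ./chart > /tmp/base.yaml
      - name: Show rendered diff
        run: diff -u /tmp/base.yaml /tmp/head.yaml || true
```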
Appreciate you taking the time to give your feedback. Thanks.
Congrats! I could see the value of this, for sure. I handle this problem by spinning up a preview environment in a namespace. Each branch gets its own, and a script takes care of setting up namespaces for a couple of shared resources for staging (RabbitMQ and Temporal).
It was a lot of work setting that up, though. Preview environments based on a Helm deploy make sense. I wish this had been available before I did all that.
Thanks for the feedback — you’re spot on about the setup this is trying to speed up. The namespace-per-branch approach works well (and that’s what this does), but the setup around ingress, DNS, secrets, and cleanup tends to be the real time sink. Glad it resonates.
Thanks again.
Nice idea. How does this compare to running ephemeral preview environments via ArgoCD or Helmfile today?
Thank you for the feedback.
My motivation was to give PR/MR reviewers a very low-friction way to see a Helm chart change running.
The workflow is intentionally simple: install a GitHub App (or call a REST API in other workflows), open a PR/MR, and you get a live preview. That’s it.
There’s no ArgoCD setup, no Helmfile, no cluster provisioning, no DNS wiring to build or maintain. The goal was to make it trivial for reviewers to see “this PR running” — especially for public Helm charts where contributors and reviewers can’t realistically be expected to set up infrastructure just to demo a change.
If you already run ephemeral previews via ArgoCD or Helmfile, this probably isn’t adding much value. Those approaches work well once they’re in place. Chart Preview is aimed at the cases where teams want PR previews without having to design, build, and maintain that machinery themselves.
That makes sense — thanks for clarifying. Framing it as “zero infra ownership, just a reviewer convenience” really helps explain where this fits compared to ArgoCD-style previews.