Deployment Orchestration for Multi-Environment EKS: ArgoCD, GitOps, and When Octopus Deploy Wins

This article is about deploying containerized applications across multiple environments — dev EKS, prod EKS, air-gapped GovCloud, cross-cloud targets — in a way that is secure, auditable, and doesn't multiply developer complexity with every new environment you add.

If you've read about GitOps and ArgoCD, you've probably encountered two things that sound like they solve everything:

  • ArgoCD — a Kubernetes controller that continuously syncs a cluster to a Git repo
  • GitOps promotion — the idea that "promotion" is just a Git commit to the next environment's config

Both are real and useful. But as soon as your estate grows — separate AWS accounts that can't peer, air-gapped compliance environments, cross-cloud targets, developer teams that need to iterate fast without opening infra PRs — the GitOps-only model starts accumulating hidden complexity that it offloads onto your team.

This guide walks through the complete mental model: what ArgoCD actually does, where it genuinely wins, where it breaks down, and why an orchestrator like Octopus Deploy with outbound-only agents is often the right answer for multi-environment production estates.


1. The Players: Who's Who in This System

Before getting into patterns, let's define every actor so there's no confusion.

Source Control

  • GitHub.com / GitHub Enterprise Server (GHES) — where application code, Dockerfiles, Kubernetes manifests, and Terraform configs live. GHES is the self-hosted version that can run inside a private VPC.
  • App repo — the repository a developer works in: source code, Dockerfile, skaffold.yaml, and the app's base Kubernetes manifests.
  • Infra-configs repo — the repository that owns environment-specific overlays and Terraform for cloud resources. This is what ArgoCD watches. In a strict compliance model, this is a separate repo per environment with a separate deploy key.

Build and CI

  • GitHub Actions — CI/CD workflows that run on push, PR, and schedule. Builds images, runs Trivy scans, signs with Cosign, pushes to ECR, and updates image tags in infra-configs. Critically: GitHub Actions never touches a Kubernetes cluster directly in a properly designed system.
  • Dependabot — GitHub's automated dependency updater. Opens PRs when base images, Terraform providers, Helm chart versions, or GitHub Actions SHAs are outdated. Those PRs go through the same CI pipeline as human commits.1
  • ARC (Actions Runner Controller) — a Kubernetes operator that runs GitHub Actions runners as pods inside EKS, giving runners network proximity to internal services for integration testing without exposing the cluster API externally.2

Registry

  • Amazon ECR — the image registry. Reachable from EKS via VPC endpoint (no internet egress required). Supports immutable tags, image scanning via AWS Inspector, and CloudTrail audit logging for every push and pull.3

Config Rendering

  • Kustomize — a tool for layering Kubernetes manifests. A base/ defines the app skeleton; overlays/<env>/ patches only what differs per environment. ArgoCD renders Kustomize natively.45
  • Helm — the package manager for Kubernetes manifests. Overlapping use case with Kustomize; common in vendor charts. ArgoCD supports both.6
  • External Secrets Operator (ESO) — a Kubernetes operator that syncs secrets from AWS Secrets Manager (or Vault) into Kubernetes Secrets at runtime. The app never sees raw credentials; ESO injects them as env vars or mounted files.78
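The base-plus-overlay layering described above can be sketched as a minimal overlay kustomization. This is an illustrative sketch: the paths, image name, and replica patch are assumptions, not taken from any specific repo.

```yaml
# overlays/dev/kustomization.yaml — hypothetical example of the base/overlay split
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # the app skeleton shared by every environment
images:
  - name: myapp
    newTag: "1.4.2"     # the tag CI bumps on each build
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 1        # dev runs a single replica
    target:
      kind: Deployment
      name: myapp
```

The overlay stays tiny because it patches only what differs; everything else comes from base/.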

Deployment

  • ArgoCD — a Kubernetes controller that runs inside a cluster, watches a Git repo, renders manifests (Kustomize or Helm), and applies them to the cluster. It does not push; it pulls. It does not know about other clusters, branches, or environments except its own. It has no native concept of "promote this to the next env."910
  • Octopus Deploy — a release orchestration platform. Models a named release (e.g., 1.4.2) traveling through an ordered set of environments (dev → staging → prod). Deploys to targets via a Tentacle agent — a lightweight process installed inside the target that opens an outbound HTTPS connection to Octopus Server. The cluster initiates the connection; no inbound ports required.1112
  • Skaffold — a developer tool that watches source files, rebuilds images, and redeploys to a local or remote Kubernetes cluster on every save. The developer's local on-ramp to the same manifests that prod runs.1314

Local Development

  • Docker Desktop / minikube / kind — local Kubernetes clusters that run on a developer's laptop. Used with Skaffold for a complete local dev environment.
  • Docker Compose — for teams that don't need a full local K8s cluster; runs the app and its dependencies (Postgres, Redis, etc.) side by side.
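A minimal sketch of that Compose setup, with hypothetical service names, ports, and versions:

```yaml
# docker-compose.yml — illustrative local stack; values are placeholders
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DB_HOST: postgres        # local-only values; never real credentials
      REDIS_HOST: redis
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: localdev
  redis:
    image: redis:7-alpine
```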

2. What ArgoCD Actually Does (and Doesn't Do)

ArgoCD is a config sync solution. It answers one question: "Does the live state of this cluster match what Git says it should be?" If not, it reconciles.1516

flowchart LR  
    subgraph Git["Git Repo"]
        manifests["Kustomize / Helm\nManifests"]
    end

    subgraph ArgoCD["ArgoCD Controller"]
        render["Render"]
        diff["Diff"]
        apply["Apply"]
    end

    subgraph Cluster["EKS Cluster"]
        live["Live State"]
    end

    manifests -->|pull| render
    render --> diff
    diff -->|drift detected| apply
    apply --> live
    live -->|compare| diff

Given a source (repoURL, targetRevision, path) and a destination (server, namespace), ArgoCD:

  1. Renders the manifests (Kustomize, Helm, or plain YAML) from Git
  2. Compares the rendered output to what's actually running in the cluster
  3. If there's drift, applies the diff via server-side apply
  4. Repeats on a timer (default 3 minutes) or immediately on a Git webhook10

This is powerful. Drift detection and self-healing mean that even if someone manually runs kubectl apply against prod, ArgoCD reverts the change on the next sync cycle — a strong compliance control.

What ArgoCD Does Not Do

  • It is not aware of other branches or environments. A single ArgoCD Application knows one Git ref and one cluster. It has no concept of "after this syncs successfully, do something in the next environment."9
  • It does not model a release. There is no "Release 1.4.2" object in ArgoCD. There is only "what does Git currently say, and does the cluster match it."
  • It does not orchestrate non-Kubernetes resources. RDS, IAM roles, VPCs, Lambda functions — none of these are ArgoCD's domain.
  • It does not push. The cluster must be able to reach Git. If it can't, ArgoCD stops working.10
  • It has no native promotion UI or approval workflow. Manual approval is approximated by setting sync mode to manual on production Applications, requiring a human to click "Sync" in the ArgoCD UI or trigger it via CLI.
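A manual-sync production Application is just an ordinary Application with the automated sync policy omitted — a sketch, reusing the repo URL and overlay paths from this article's examples:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://ghes.internal/udx/infra-configs
    targetRevision: HEAD
    path: overlays/prod
  destination:
    server: https://prod-eks-api.internal
    namespace: myapp
  # No syncPolicy.automated block: ArgoCD still computes the diff and shows
  # OutOfSync status, but a human must trigger the sync — the closest thing
  # ArgoCD has to an approval gate.
```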

3. The Case for Pure GitOps (When It Works)

For teams with a single cloud provider, internet-accessible clusters, a unified AWS account model, and primarily Kubernetes workloads, the pure GitOps approach is elegant and low-overhead.

The Promotion Flow

CI builds image → pushes to ECR → updates image tag in infra-configs/overlays/dev/  
ArgoCD syncs dev → PostSync health check passes  
CI opens PR on infra-configs/overlays/staging/ → auto-merge after CI  
ArgoCD syncs staging → PostSync health check passes  
CI opens PR on infra-configs/overlays/prod/ → requires human approval  
Human approves → ArgoCD manual sync triggered  
flowchart TD  
    ci["CI: Build + Push Image"] --> dev_pr["Update image tag\nin overlays/dev/"]
    dev_pr --> argo_dev["ArgoCD syncs Dev"]
    argo_dev --> health_dev{"Health check?"}
    health_dev -->|pass| stg_pr["PR to overlays/staging/"]
    health_dev -->|fail| rollback_dev["SyncFail hook reverts"]
    stg_pr --> argo_stg["ArgoCD syncs Staging"]
    argo_stg --> health_stg{"Health check?"}
    health_stg -->|pass| prod_pr["PR to overlays/prod/"]
    health_stg -->|fail| rollback_stg["SyncFail hook reverts"]
    prod_pr --> approval{"Human Approval"}
    approval -->|approved| argo_prod["ArgoCD manual sync Prod"]

Every step is a Git commit. Every commit is auditable. GitHub Environment protection rules gate production deployment behind named reviewers. The blast radius of any failure is bounded to the environment that was synced.17
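The "CI updates the image tag" steps above usually come down to a small job that commits to the infra-configs repo. A hedged sketch — the repo name, paths, ECR_REGISTRY variable, and INFRA_CONFIGS_TOKEN secret are all assumptions, and it assumes kustomize is available on the runner:

```yaml
  update-dev-tag:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: udx/infra-configs
          token: ${{ secrets.INFRA_CONFIGS_TOKEN }}
      - name: Bump image tag in overlays/dev
        run: |
          cd overlays/dev
          kustomize edit set image myapp=${{ vars.ECR_REGISTRY }}/myapp:${{ github.sha }}
          git config user.name "ci-bot"
          git config user.email "ci-bot@example.com"
          git commit -am "deploy: myapp ${{ github.sha }} to dev"
          git push
```

The staging and prod equivalents open a PR instead of pushing directly, which is what lets branch protection and named reviewers act as the promotion gate.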

ApplicationSet for Multi-Cluster Fan-Out

When the same application needs to deploy to many clusters (e.g., regional deployments), ArgoCD's ApplicationSet controller generates one Application per cluster from a template and a generator. The template uses parameter placeholders ({{cluster}}, {{environment}}, {{region}}), and the generator provides a list of parameter sets.1819

apiVersion: argoproj.io/v1alpha1  
kind: ApplicationSet  
metadata:  
  name: myapp-appset
  namespace: argocd
spec:  
  generators:
    - list:
        elements:
          - cluster: dev-us-east
            url: https://dev-eks-api.internal
            environment: dev
          - cluster: prod-us-east
            url: https://prod-eks-api.internal
            environment: prod
  template:
    metadata:
      name: '{{cluster}}-myapp'
    spec:
      source:
        repoURL: https://ghes.internal/udx/infra-configs
        targetRevision: HEAD
        path: overlays/{{environment}}
      destination:
        server: '{{url}}'
        namespace: myapp
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

ApplicationSet doesn't do promotion either — it just ensures every generated Application is reconciling to the right overlay for its environment.2021

Rollback via SyncFail Hooks

ArgoCD supports resource hooks that fire at specific points in the sync lifecycle. A SyncFail hook runs if the sync itself fails — useful for automatically reverting the Git commit that caused the failure:22

apiVersion: batch/v1  
kind: Job  
metadata:  
  name: rollback-on-syncfail
  annotations:
    argocd.argoproj.io/hook: SyncFail
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:  
  template:
    spec:
      containers:
        - name: rollback
          image: alpine/git
          command:
            - sh
            - -c
            - |
              git revert HEAD --no-edit
              git push origin HEAD
      restartPolicy: Never

Combined with PostSync health check Jobs, this creates a fully automated rollback loop: sync, test, revert if broken — no human needed.
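A PostSync health-check Job pairs naturally with the SyncFail hook above. A sketch, assuming the app exposes a /healthz endpoint on an in-cluster Service named myapp in the myapp namespace:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: postsync-health-check
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
        - name: health-check
          image: curlimages/curl
          command:
            - sh
            - -c
            - |
              # Fail the sync (and so trigger SyncFail hooks) if the app
              # doesn't answer its health endpoint within the timeout.
              curl --fail --max-time 10 http://myapp.myapp.svc/healthz
      restartPolicy: Never
```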


4. Where GitOps Breaks Down: The Multi-Account, Multi-Cloud Reality

The pure GitOps model makes an assumption that is easy to miss: every cluster can reach Git over HTTPS. In practice, this is often not true.

The Separate AWS Account Problem

For security and compliance, dev and prod typically live in separate AWS accounts with no VPC peering and no shared network path. Each cluster needs its own outbound path to Git. This is solvable — each ArgoCD instance independently reaches GitHub.com or GHES via its own outbound HTTPS — but it creates a different problem: if both clusters read the same Git repo, you're sharing a deploy key across account boundaries. A compromised deploy key in dev can now read prod manifests.3

The correct solution is separate Git repos per environment, each with a scoped deploy key. But now you've added repo-per-environment management overhead, cross-repo CI permissions, and an extra layer for developers to navigate when things go wrong.

The Air-Gapped Cluster Problem

GovCloud environments, client-managed clusters, and classified systems often have no outbound internet access at all — or outbound access tightly restricted to known endpoints. ArgoCD inside such a cluster simply cannot function as a Git pull mechanism.10

Workarounds exist — GHES inside a peered VPC, internal Git mirrors, or ArgoCD's newer Agent Mode (where a lightweight agent inside the cluster polls an external ArgoCD control plane, similar to Octopus's outbound-only model). Each adds operational complexity: another service to run, another failure point, another piece of infrastructure to maintain per environment. ArgoCD Agent Mode in particular narrows the gap with Octopus for pure-Kubernetes workloads, though it remains limited to Kubernetes targets and lacks Octopus's first-class release and promotion model.

The Cross-Cloud Problem

VPC peering doesn't cross cloud providers. An Azure AKS cluster cannot peer with an AWS VPC. Deploying to Azure AKS alongside AWS EKS in a single ArgoCD-based pipeline requires either a SaaS ArgoCD control plane, a complex overlay of VPNs and tunnel infrastructure, or a separate ArgoCD instance in Azure with its own repo access — each of which adds cost and operational overhead.

The Non-Kubernetes Problem

ArgoCD is Kubernetes-native. RDS provisioning, IAM role creation, Route53 records, Lambda functions, Windows services, SQL migrations — none of these are first-class ArgoCD targets. Crossplane can bring some AWS resources into the Kubernetes API surface, but it adds substantial complexity and is not universally applicable.20


5. The Hidden Complexity Multiplier

Each GitOps workaround for a new environment type adds a multiplier to the operational surface:

  • New air-gapped cluster → new internal Git mirror to operate
  • New AWS account → new deploy key, new repo, new CI token, new ArgoCD instance
  • New cloud provider → new VPN or tunnel, new ArgoCD instance, new cluster registration
  • New non-K8s resource → new Terraform pipeline, separate from ArgoCD, with its own promotion model

By the time an organization has 5+ environments across 2+ clouds with mixed K8s and non-K8s workloads, the "simple GitOps model" has become a distributed system of its own — one that a new team member cannot reason about without a detailed architecture diagram.


6. The Octopus Deploy Model: One Release, N Targets

Octopus Deploy approaches the same problem from the opposite direction. Instead of making every target pull from a shared Git repo, it models a release as a first-class object and pushes that release to targets via lightweight outbound-only agents.1112

Core Concepts

  • Release — a versioned, immutable snapshot of a deployment: the image tag, variable snapshot, and process definition at a point in time. Release 1.4.2 is the same artifact everywhere it goes.
  • Environment — a named deployment target or group of targets (dev, staging, prod-us, prod-eu). Environments have their own variable values, approval requirements, and retention policies.
  • Lifecycle — the ordered progression of environments a release must pass through. Octopus enforces that a release cannot reach prod without first completing dev and staging.
  • Kubernetes Agent / Tentacle — lightweight agents installed inside the target. For Kubernetes targets (EKS, AKS, GKE), Octopus uses the Kubernetes Agent — a Helm-installed pod that polls Octopus Server outbound over HTTPS (port 10943). For VM and Windows targets, Octopus uses the Tentacle agent. Both share the same outbound-only connectivity model: the target initiates the connection, no inbound ports required, no VPN, no Git access needed at the target.2312
  • Variables — named values scoped per project, environment, or target. Developers define variable templates (#{DB_HOST}, #{API_KEY}); platform engineers fill in values per environment. The release carries the variable snapshot; the target never needs to reach a secrets store independently.

The Tentacle Model Solves the Network Problem

Because the Tentacle initiates the connection outbound from the target to Octopus Server, it works in any network topology:

Octopus Server (your control plane — hosted or self-managed)  
  │
  ├── Tentacle ← dev EKS (AWS Account A, outbound HTTPS)
  ├── Tentacle ← prod EKS (AWS Account B, outbound HTTPS, no peering needed)
  ├── Tentacle ← air-gapped GovCloud EKS (outbound HTTPS to Octopus only)
  ├── Tentacle ← Azure AKS (cross-cloud, outbound HTTPS)
  └── Tentacle ← on-prem Windows server (non-K8s workload)
flowchart TD  
    subgraph octopus["Octopus Server"]
        release["Release 1.4.2"]
    end

    subgraph acctA["AWS Account A"]
        dev["Dev EKS\nTentacle"]
    end

    subgraph acctB["AWS Account B"]
        prod["Prod EKS\nTentacle"]
    end

    subgraph gov["GovCloud"]
        airgap["Air-Gapped EKS\nTentacle"]
    end

    subgraph azure["Azure"]
        aks["AKS Cluster\nTentacle"]
    end

    subgraph onprem["On-Premises"]
        win["Windows Server\nTentacle"]
    end

    dev -->|outbound HTTPS| octopus
    prod -->|outbound HTTPS| octopus
    airgap -->|outbound HTTPS| octopus
    aks -->|outbound HTTPS| octopus
    win -->|outbound HTTPS| octopus

Each target only needs outbound HTTPS to Octopus Server. The targets never talk to each other. The accounts don't need to peer. The clusters don't need Git access.2312

How Octopus Deploys to EKS

Octopus has native Kubernetes step types that use the Tentacle's in-cluster service account to apply Helm charts, raw manifests, or Kustomize outputs. The deployment process:

  1. CI builds and pushes image to ECR, creates Octopus release via API
  2. Octopus lifecycle auto-deploys to dev environment
  3. Deployment runs Kubernetes deployment step via Tentacle
  4. Health check step polls rollout status
  5. On success, Octopus advances release to staging (auto or gated)
  6. On staging success, release is eligible for prod — blocked by required manual approver
  7. Approver clicks "Deploy" in Octopus dashboard
  8. Octopus deploys to prod via prod Tentacle

Every step, every approval, every variable snapshot is logged in Octopus's release history with timestamps and user attribution.11


7. Variables, Secrets, and Developer Experience

The Variable Model

Octopus's variable scoping is the cleanest solution to the "same config, different values per environment" problem without requiring separate config files or repos per environment.11

Variable: DB_HOST  
  Value: dev-postgres.internal        → Scope: Environment = Dev
  Value: staging-postgres.internal    → Scope: Environment = Staging
  Value: prod-aurora.cluster.aws      → Scope: Environment = Production

Variable: FEATURE_FLAG_NEW_UI  
  Value: true                         → Scope: Environment = Dev, Staging
  Value: false                        → Scope: Environment = Production

The release carries the variable snapshot for its target environment. The application reads #{DB_HOST} and gets the right value automatically. No overlay files. No per-env secrets manager paths to manage in config.
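To make the substitution concrete, here is a hypothetical local emulation of the #{...} templating in shell — this is not Octopus's implementation, just an illustration of applying a variable snapshot to a template:

```shell
# Hypothetical helper emulating Octopus-style #{Var} substitution
# (Octopus performs this server-side from the release's variable snapshot).
render() {
  # $1 = template string; remaining args are VAR=value pairs
  local out="$1"; shift
  for kv in "$@"; do
    # Replace every #{NAME} occurrence with its value
    out=$(printf '%s' "$out" | sed "s|#{${kv%%=*}}|${kv#*=}|g")
  done
  printf '%s' "$out"
}

render 'db=#{DB_HOST} ui=#{FEATURE_FLAG_NEW_UI}' \
  DB_HOST=dev-postgres.internal FEATURE_FLAG_NEW_UI=true
# → db=dev-postgres.internal ui=true
```

Swap the VAR=value pairs and the same template yields the staging or prod values — which is exactly the "same config, different values" property the variable snapshot provides.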

CMMC note: Octopus variables live in Octopus's internal database, not in Git. For Configuration Management compliance (CM.L2-3.4.1), rely on Octopus's built-in audit log and per-release variable snapshots as your evidence trail. Octopus's Config-as-Code feature version-controls the deployment process in Git, but variable values remain in the Octopus DB. If your C3PAO requires Git-tracked configuration values specifically, you may need to supplement with ESO-backed secrets and Kustomize overlays for non-secret config.

For secrets specifically, Octopus integrates with AWS Secrets Manager and HashiCorp Vault as variable value sources — the variable is defined in Octopus, but its value is resolved from the external store at deploy time.7

Developer Local Environment

Developers don't interact with Octopus for local development. The local dev workflow remains independent:

myapp/  
  docker-compose.yml    ← local Postgres, Redis, etc.
  skaffold.yaml         ← points at local Kubernetes overlay
  k8s/
    base/               ← same manifests Octopus deploys
    overlays/local/     ← gitignored patches for laptop dev

skaffold dev gives hot-reload against a local cluster. The developer defines env vars in overlays/local/configmap-patch.yaml and a gitignored secret-patch.yaml pointing at local values or a personal dev Secrets Manager path.1314
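A minimal skaffold.yaml for this layout might look like the following — the apiVersion, artifact name, and overlay path are assumptions:

```yaml
apiVersion: skaffold/v4beta6
kind: Config
build:
  artifacts:
    - image: myapp
  local:
    push: false              # keep images on the laptop's cluster
manifests:
  kustomize:
    paths:
      - k8s/overlays/local   # gitignored local patches over the shared base
deploy:
  kubectl: {}
```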

The only time a developer touches Octopus is to watch their release progress through environments or to request a rollback. They never open an infra PR to add a new environment — Octopus manages that.

Requesting New Infrastructure (RDS, SQS, etc.)

When a developer needs a new cloud resource, they open a PR on the Terraform repo (separate from the app repo and the infra-configs repo). GitHub Actions runs terraform plan and posts the diff as a PR comment. A platform engineer reviews and merges. terraform apply provisions the resource. The endpoint goes into Secrets Manager. Octopus resolves it via its variable/secret integration at next deploy. The developer's app reads it as a normal env var — no code change required.78
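The plan-and-comment workflow can be sketched like this; the role ARN, directory layout, and comment formatting are assumptions:

```yaml
name: Terraform Plan
on:
  pull_request:
    paths: ["terraform/**"]

jobs:
  plan:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      pull-requests: write    # needed to post the plan as a PR comment
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ vars.AWS_ACCOUNT_ID }}:role/github-actions-terraform
          aws-region: us-east-1
      - name: terraform plan
        working-directory: terraform
        run: |
          terraform init -input=false
          terraform plan -no-color -input=false | tee plan.txt
      - name: Post plan as PR comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const plan = fs.readFileSync('terraform/plan.txt', 'utf8');
            await github.rest.issues.createComment({
              ...context.repo,
              issue_number: context.issue.number,
              body: '<pre>' + plan.slice(0, 60000) + '</pre>',
            });
```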


8. Dependabot in This Model

Dependabot works identically regardless of whether Octopus or ArgoCD handles deployment. It watches repos and opens PRs — the deployment mechanism is downstream of the merge.

What Dependabot Watches

# .github/dependabot.yml
version: 2  
updates:  
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "daily"
      time: "02:00"
      timezone: "America/New_York"
  - package-ecosystem: "terraform"
    directory: "/terraform"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"

Nightly at 2am: Dependabot opens PRs for any outdated base images, Terraform provider versions, and pinned GitHub Actions SHAs. CI runs immediately. For patch and minor security updates, auto-merge fires if all checks pass. For major version bumps, the PR waits for human review.1

Auto-Merge as the First Domino

2:00am  Dependabot opens PR: node:22.14-alpine → node:22.15-alpine (patch update)  
2:05am  CI: docker build, smoke test (docker run IMAGE node --version), Trivy scan  
2:15am  All checks pass → auto-merge  
2:16am  Post-merge workflow: cosign sign, push to ECR, create Octopus release  
2:20am  Octopus: auto-deploy to dev, Tentacle applies, health check runs  
2:30am  Dev health confirmed → Octopus advances to staging (auto or gated)  
9:00am  Dev team arrives: staging already running patched image, awaiting prod approval  

The Dependabot PR is just the trigger. Everything downstream is automated: build, scan, sign, push, deploy, promote. The human only makes a decision at the prod gate — and that decision is informed by two environments already running the change successfully overnight.2425
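The patch/minor auto-merge gate is typically a small workflow keyed off Dependabot's PR metadata. A sketch using the dependabot/fetch-metadata action (workflow name and merge strategy are choices, not requirements):

```yaml
name: Dependabot Auto-Merge
on: pull_request

permissions:
  contents: write
  pull-requests: write

jobs:
  auto-merge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - name: Fetch Dependabot metadata
        id: meta
        uses: dependabot/fetch-metadata@v2
      - name: Enable auto-merge for patch/minor updates
        if: steps.meta.outputs.update-type == 'version-update:semver-patch' ||
            steps.meta.outputs.update-type == 'version-update:semver-minor'
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

`--auto` means GitHub merges only after all required checks pass, so the CI gate still applies; major bumps fall through the `if` and wait for a human.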

Security vs. Version Updates

Dependabot distinguishes between security-flagged updates (a known CVE in the current version) and routine version updates. Security updates skip the normal cooldown window and auto-merge immediately if CI passes — every extra hour of exposure is added risk. Version updates wait a configurable cooldown (5 days is a common default) so that supply chain attacks on newly published versions have time to be detected by the community before landing in your estate.1


9. The CI/CD Pipeline in Detail

Regardless of whether Octopus or ArgoCD handles the deploy leg, the CI pipeline is the same. GitHub Actions runs on every PR and every merge to main.

On PR Open (Gate 1: Does It Even Build?)

name: Docker Ops  
on:  
  pull_request:
    branches: [main]

jobs:  
  build-and-scan:
    runs-on: ubuntu-latest
    permissions:
      id-token: write    # OIDC for ECR auth — no static keys
      contents: read
      security-events: write

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ vars.AWS_ACCOUNT_ID }}:role/github-actions-ecr
          aws-region: us-east-1

      - name: Login to ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build image
        env:
          ECR_REGISTRY: ${{ steps.ecr.outputs.registry }}
        run: |
          docker build -t $ECR_REGISTRY/myapp:pr-${{ github.event.number }} .

      - name: Smoke test — verify binary initializes
        env:
          ECR_REGISTRY: ${{ steps.ecr.outputs.registry }}
        run: |
          docker run --rm $ECR_REGISTRY/myapp:pr-${{ github.event.number }} node --version
          docker run --rm $ECR_REGISTRY/myapp:pr-${{ github.event.number }} node -e "require('./src/index')"

      - name: Trivy scan
        uses: aquasecurity/trivy-action@0.30.0
        with:
          # `with:` values are not shell-expanded, so reference the step output directly
          image-ref: ${{ steps.ecr.outputs.registry }}/myapp:pr-${{ github.event.number }}
          severity: CRITICAL,HIGH
          exit-code: '1'    # fail the build on critical/high CVEs
          format: sarif
          output: trivy-results.sarif

      - name: Upload SARIF to Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: trivy-results.sarif

The PR cannot merge if the image doesn't build, the binary doesn't initialize, or the Trivy scan finds CRITICAL/HIGH CVEs. This is Gate 1.

Note: The PR gate builds and scans an image but does not push it to ECR. The post-merge workflow rebuilds from the same commit. This means the scanned image and the deployed image are technically different builds. For maximum supply chain integrity, you could push the PR image to ECR with a temporary tag (e.g., pr-42), then retag and promote on merge instead of rebuilding. The approach shown here prioritizes simplicity — the commit SHA is identical, and the post-merge image gets its own Cosign signature and SBOM.

On Merge to Main (Gate 2: Sign, Push, Release)

name: Release  
on:  
  push:
    branches: [main]

jobs:  
  release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ vars.AWS_ACCOUNT_ID }}:role/github-actions-ecr
          aws-region: us-east-1

      - name: Login to ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push to ECR
        id: build
        env:
          ECR_REGISTRY: ${{ steps.ecr.outputs.registry }}
        run: |
          IMAGE_TAG=${{ github.sha }}
          docker build -t $ECR_REGISTRY/myapp:$IMAGE_TAG .
          docker push $ECR_REGISTRY/myapp:$IMAGE_TAG
          echo "digest=$(docker inspect --format='{{index .RepoDigests 0}}' $ECR_REGISTRY/myapp:$IMAGE_TAG)" >> $GITHUB_OUTPUT

      - name: Install Cosign
        uses: sigstore/cosign-installer@v3

      - name: Sign image (keyless OIDC)
        run: |
          cosign sign --yes ${{ steps.build.outputs.digest }}

      - name: Generate SBOM
        uses: anchore/sbom-action@v0
        with:
          image: ${{ steps.ecr.outputs.registry }}/myapp:${{ github.sha }}

      - name: Create Octopus release
        uses: OctopusDeploy/create-release-action@v3
        with:
          api_key: ${{ secrets.OCTOPUS_API_KEY }}
          server: ${{ vars.OCTOPUS_SERVER }}
          project: myapp
          release_number: ${{ github.sha }}
          packages: |
            myapp:${{ github.sha }}

The image is signed with Cosign using GitHub Actions OIDC — no private key stored anywhere, the signature is cryptographically tied to the specific Actions workflow that ran it. The SBOM is attached as an attestation. Octopus is given the release immediately after push.2627

Air-gapped caveat: Keyless Cosign verification requires the verifier (e.g., Kyverno in the cluster) to reach the public Sigstore infrastructure (rekor.sigstore.dev, fulcio.sigstore.dev) over HTTPS. In air-gapped GovCloud clusters, this breaks unless you run a private Sigstore stack (private TUF mirror, private Rekor, private Fulcio). Plan for this if your compliance environment requires both keyless signing and network isolation.


10. CMMC Level 2 Compliance Alignment

Every component in this pipeline maps to a CMMC Level 2 practice requirement.

Audit Logging (AU.L2-3.3.1)

Every image push and pull generates a CloudTrail event in ECR. Every GitHub Actions run is logged with the triggering user, PR, and commit SHA. Every Octopus deployment is logged with the deploying user, release version, and target environment. Every Cosign signature is recorded in the public Rekor transparency log. Together these satisfy the requirement to create and retain system audit logs sufficient to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity.2829

Malicious Code Protection (SI.L2-3.14.2)

Trivy scans every image on every PR and every merge. Kyverno (if running inside the cluster) enforces that only images signed by the specific Actions workflow OIDC identity can be admitted — an unsigned image or one signed by a different identity is rejected at the Kubernetes admission controller before a pod can start. This satisfies the requirement to employ malicious code protection mechanisms at appropriate locations within organizational systems.30313233
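An admission policy enforcing that identity check might be sketched as a Kyverno ClusterPolicy — the image pattern and the signing workflow identity below are placeholder values to adapt to your registry and repo:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: check-cosign-signature
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences:
            - "*.dkr.ecr.us-east-1.amazonaws.com/myapp:*"
          attestors:
            - entries:
                - keyless:
                    # Only images signed by this exact workflow identity are admitted
                    subject: "https://github.com/udx/myapp/.github/workflows/release.yml@refs/heads/main"
                    issuer: "https://token.actions.githubusercontent.com"
```

An unsigned image, or one signed by any other OIDC identity, is rejected before the pod is scheduled.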

Configuration Management (CM.L2-3.4.1, CM.L2-3.4.2)

Every deployment is a version-controlled manifest change. No ad-hoc kubectl apply. No direct SSH to nodes. All configuration is in Git with full history. Octopus's variable snapshots capture the exact configuration state at deployment time, providing evidence that baseline configurations are established, maintained, and changes are controlled.23

Access Control (AC.L2-3.1.1, AC.L2-3.1.2)

Developers never have credentials for staging or production environments. They cannot directly deploy to those environments. The Octopus lifecycle enforces that prod deployments require a named approver. GitHub fine-grained PATs scope CI tokens to exactly the repos and permissions required. IAM roles for ECR and Secrets Manager are scoped per environment via IRSA — the dev EKS node role cannot access prod secrets.78

Supply Chain Risk Management (SR.L2-3.17.1)

The Cosign signature, SBOM attestation, and SLSA provenance together form a verifiable chain of custody: this image was built from this commit, by this workflow, from this repo, and has not been tampered with since. ECR immutable tags prevent an image from being overwritten after it's deployed. This evidence package is what a C3PAO assessor needs for supply chain risk management controls.3435


11. Choosing ArgoCD vs. Octopus: The Decision Framework

Neither tool is universally correct. The right answer depends on your actual environment topology.

| Constraint | ArgoCD | Octopus |
|---|---|---|
| All clusters can reach Git (outbound HTTPS) | Works perfectly | Adds overhead |
| Separate AWS accounts, no peering | Works (each cluster reaches Git independently) | Works (Tentacle outbound) |
| Air-gapped cluster, no outbound internet | Does not work | Works (Tentacle outbound only) |
| Cross-cloud (AWS + Azure) | Complex (VPN or separate instances) | Works natively |
| Non-Kubernetes workloads (VMs, Windows) | Does not apply | First-class support |
| Named release object with promotion history | Not native (approximate with Git tags) | First-class |
| Drift detection and self-healing | First-class | Not native |
| Developer-operated personal namespaces | First-class (ApplicationSet) | More overhead |
| CMMC audit trail | Git history + CloudTrail | Octopus audit log + CloudTrail |
| Single tool for all targets | No (breaks at air-gap/cross-cloud) | Yes |

flowchart TD  
    A{"All clusters\nreach Git?"} -->|yes| B{"Need drift\ndetection?"}
    B -->|yes| C["ArgoCD"]
    B -->|no| D{"Need named releases\n+ promotion?"}
    D -->|yes| E["Octopus Deploy"]
    D -->|no| C
    A -->|no| F{"Air-gapped or\ncross-cloud?"}
    F -->|yes| E
    F -->|no| G{"Non-K8s\nworkloads?"}
    G -->|yes| E
    G -->|no| H["Hybrid:\nArgoCD + Octopus"]

The Hybrid Model

For teams with a mixed estate — some clusters that can reach Git, some that can't — the most pragmatic architecture is:

  • ArgoCD for clusters with reliable Git connectivity and where drift detection matters (internal dev/staging clusters, well-networked prod clusters)
  • Octopus for everything else (air-gapped, cross-cloud, non-K8s), with Octopus optionally committing to Git repos that ArgoCD watches for the GitOps-capable clusters

Octopus itself ships native ArgoCD integration as of 2026.1: Octopus can commit to a Git repo and wait for ArgoCD Application health before advancing the lifecycle. This makes the hybrid model explicit and manageable rather than two independent systems.2312

The Single-Tool Answer

If operational simplicity matters more than maximizing GitOps principles — and for most production engineering teams, it should — Octopus Deploy is the single-tool answer for a multi-environment, multi-account, multi-cloud estate. The cognitive overhead of managing per-environment repos, deploy keys, ArgoCD instances, Git mirror services, and peering exceptions for each new environment type compounds quickly. One release object, one tool, N targets via outbound-only agents is a model that scales without multiplying operational complexity.


Putting It All Together: The Full Pipeline

```text
Developer pushes to app repo
  OR
Dependabot opens PR (nightly, 2am)
       │
       ▼
GitHub Actions (PR gate)
  ├─ docker build
  ├─ smoke test: docker run IMAGE node --version
  ├─ Trivy scan → SARIF to GitHub Security tab
  └─ Required checks → branch protection blocks merge if any fail
       │
  PR auto-merged (Dependabot patch/minor) OR human merges
       │
       ▼
GitHub Actions (post-merge)
  ├─ docker build + push to ECR (OIDC, no static keys)
  ├─ cosign sign (keyless, OIDC-bound to workflow identity)
  ├─ SBOM + SLSA provenance attached as attestation
  └─ Create Octopus release → release 1.4.2 born
       │
       ▼
Octopus Deploy
  ├─ Auto-deploy to Dev EKS (Tentacle, AWS Account A)
  │    ├─ Kubernetes deployment step (Helm/Kustomize via Tentacle)
  │    ├─ Variable injection: DB_HOST=dev-postgres, FEATURE_FLAGS=all-on
  │    └─ Health check step: rollout status + smoke test endpoint
  │
  ├─ Dev health confirmed → auto-advance to Staging
  │    ├─ Variable injection: DB_HOST=staging-aurora, FEATURE_FLAGS=partial
  │    └─ Integration test suite runs inside cluster via ARC runner
  │
  ├─ Staging health confirmed → eligible for Production
  │    └─ BLOCKED: requires named approver (platform lead)
  │
  └─ Approver clicks Deploy in Octopus UI
       ├─ Variable injection: DB_HOST=prod-aurora, FEATURE_FLAGS=conservative
       ├─ Deploy to Prod EKS (Tentacle, AWS Account B — no peering needed)
       └─ Deploy to GovCloud EKS (Tentacle, air-gapped — no Git access needed)
            │
            ▼
         Release 1.4.2 marked Complete
         Full audit log: who approved, when, what variables, what cluster
```

```mermaid
flowchart TD
    dev["Developer Push\nor Dependabot PR"] --> ci_gate["GitHub Actions\nBuild + Scan + Sign"]
    ci_gate --> ecr["Push to ECR\nCosign + SBOM"]
    ecr --> oct_release["Octopus Release Created"]
    oct_release --> deploy_dev["Auto-deploy Dev\nvia Tentacle"]
    deploy_dev --> health_dev{"Dev Healthy?"}
    health_dev -->|yes| deploy_stg["Advance to Staging"]
    health_dev -->|no| alert["Alert Team"]
    deploy_stg --> health_stg{"Staging Healthy?"}
    health_stg -->|yes| gate["Prod Gate:\nManual Approval"]
    health_stg -->|no| alert
    gate -->|approved| deploy_prod["Deploy Prod + GovCloud\nvia Tentacles"]
    deploy_prod --> done["Release Complete\nFull Audit Trail"]
```
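
The post-merge CI stage of this pipeline can be sketched as a GitHub Actions workflow. This is a hedged, minimal sketch: the repository layout, IAM role ARN, and image name are hypothetical placeholders, and the SBOM/provenance and Octopus release steps are omitted for brevity. The actions used (`configure-aws-credentials`, `amazon-ecr-login`, `cosign-installer`) are real and support this flow:

```yaml
# Hypothetical post-merge workflow: OIDC federation to AWS (no static keys),
# build and push to ECR, then keyless cosign signature bound to this
# workflow's OIDC identity.
name: build-sign-push
on:
  push:
    branches: [main]
permissions:
  id-token: write   # required for OIDC federation and keyless signing
  contents: read
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111111111111:role/ci-ecr-push  # hypothetical role
          aws-region: us-east-1
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      - name: Build and push image
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/web-api:${{ github.sha }}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      - uses: sigstore/cosign-installer@v3
      - name: Keyless sign (OIDC-bound to workflow identity)
        run: cosign sign --yes "${{ steps.ecr.outputs.registry }}/web-api:${{ github.sha }}"
```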

The developer wrote code. CI validated it. Octopus carried it through every environment with the right config for each. No developer ever had prod credentials. No cluster needed to reach Git or another cluster. The C3PAO auditor has a complete evidence trail from commit to production deployment.
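
For the "variable injection" steps above, the per-environment values can live as Kustomize overlays that the deployment step applies. This is a sketch under assumed names (`web-api`, the overlay paths, the env values echo the staging example in the pipeline), not a prescribed layout:

```yaml
# overlays/staging/kustomization.yaml (hypothetical overlay sketch).
# The base defines the Deployment; this overlay injects staging-only
# configuration, mirroring the pipeline's variable-injection step.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/env
        value:
          - name: DB_HOST
            value: staging-aurora   # per-environment value
          - name: FEATURE_FLAGS
            value: partial
    target:
      kind: Deployment
      name: web-api                 # hypothetical app name
```

A `dev` or `prod` overlay differs only in these patched values, so the manifests stay identical across environments while the orchestrator picks the overlay path per target.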


References

  1. About Grouped Security... - Dependabot can fix vulnerable dependencies for you by raising pull requests with security updates.

  2. Introducing GitHub Actions runner scale set client · community - The client is a standalone Go-based module that lets you build custom autoscaling solutions for GitH...

  3. Sharing Amazon ECR repositories with multiple accounts using ... - In this blog, we walk through an example of performing a blue/green deployment from a multi-account ...

  4. How to implement Kustomize overlays for environment-specific ... - Master Kustomize overlays to manage environment-specific configurations across development, staging,...

  5. Overlays in Kustomize - Kustomize is a tool for managing and customizing Kubernetes resource configurations. It allows you t...

  6. How to Use Helm Values Files for Multi-Environment Deployments - Master Helm values files to manage dev, staging, and production configurations with values file laye...

  7. External Secret Operators (ESO) with HashiCorp Vault - Earthly Blog - External secret operators (ESO) is a Kubernetes operator that allows you to use secrets from central...

  8. How To Access Vault Secrets Inside of Kubernetes Using ... - Secrets in Kubernetes can be used in pods to avoid keeping connection strings and other sensitive da...

  9. Best Practices - Argo CD - Declarative GitOps CD for Kubernetes - Using a separate Git repository to hold your Kubernetes manifests, keeping the config separate from ...

  10. Automated Sync Policy

  11. GitOps Environment Automation And Promotion: A Practical Guide - Merging: After approval, the PR is merged, triggering the GitOps pipeline to apply the changes to th...

  12. Combining GitOps And Continuous Delivery With Argo CD And ... - To connect Argo CD applications to Octopus projects, you need to install the Octopus Kubernetes Agen...

  13. How to Use Skaffold for Kubernetes Development Workflow - Master Skaffold for streamlined Kubernetes development with automatic builds, deployments, and hot-r...

  14. How to Simplify Your Local Kubernetes Development With Skaffold - You can iterate on your application source code locally then deploy to local Kubernetes clusters. Sk...

  15. Understanding Argo CD: Kubernetes GitOps Made Simple - Codefresh - Argo CD can automatically apply any change to the desired state in the Git repository to the target ...

  16. How ArgoCD Compares Live State vs Desired State - Desired state is the output of ArgoCD's manifest generation pipeline. It starts with your Git reposi...

  17. Managing environments for deployment - GitHub Docs - You can create environments and secure those environments with deployment protection rules. A job th...

  18. Generating Applications with ApplicationSet - Argo CD - The ApplicationSet controller adds Application automation and seeks to improve multi-cluster support...

  19. Argo CD ApplicationSet Controller - GitHub - The ApplicationSet controller manages multiple Argo CD Applications as a single ApplicationSet unit,...

  20. ArgoCD ApplicationSet: Multi-Cluster Deployment Made Easy - Argo CD is a tool for deploying applications in a declarative manner, using Git as the source of tru...

  21. How to Handle ArgoCD Application Sets - OneUptime - Learn how to use ArgoCD ApplicationSets to manage multiple applications from a single definition wit...

  22. How to Combine Sync Waves and Hooks for Complex Deployments - Learn how to combine ArgoCD sync waves and hooks to orchestrate complex multi-phase deployments with...

  23. Verified Argo CD Deployments | Octopus blog - With Argo CD integration, Octopus lets teams combine the strengths of GitOps and Continuous Delivery...

  24. Dependabot with auto-merge - Nais Docs - By completing this guide, Dependabot will automatically fix your insecure or outdated dependencies, ...

  25. Enhancing Dependabot Auto-Merging: A Smarter, More Secure ... - By leveraging GitHub Rulesets and a Webhook-Triggered GitHub App, auto-merging Dependabot PRs is now...

  26. Zero-friction “keyless signing” with Github Actions - Chainguard - Secure your GitHub Actions workflows with keyless signing. Enhance security, eliminate key managemen...

  27. Keyless Signing of Container Images using GitHub Actions - Cosign: a tool that signs software artifacts, this brings trust and provenance to the software and h...

  28. How to Match Vulnerability Findings in AWS Inspector and ECR ... - Compliance and Audit: Maintain a compliant and auditable environment by having a clear view of your ...

  29. AU.L2-3.3.1[c]: Verify Your Systems Log the Security Events You've ... - • Pre-configuring its secure enclave to log all CMMC-required audit events • Offering tools to revie...

  30. How to Implement Image Policy Enforcement with Kyverno Verify ... - Master Kyverno verify images rules to enforce image signature verification and source policies.

  31. How to Configure Trivy Severity Filtering - OneUptime - Trivy uses five severity levels based on CVSS scores and vulnerability databases. Severity, CVSS Sco...

  32. Policy for PolicyExceptions - Kyverno - A PolicyException grants the applicable resource(s) or subject(s) the ability to bypass an existing ...

  33. Practice SI.L2-3.14.2 Details - CMMC Toolkit Wiki - SECURITY REQUIREMENT. Provide protection from malicious code at appropriate locations within organiz...

  34. Why Software Bill of Materials (SBOM) Require Attestations - A software attestation is a trust mechanism that allows a verifier (ie, a customer) to independently...

  35. Lab: Generating and Verifying SLSA Provenance for Container Images - SLSA (Supply-chain Levels for Software Artifacts) provenance is a verifiable record that describes h...