
Architecture

Pipeline

The dark and twisted ritual underlying the conversion. The same Helm charts used for Kubernetes are rendered into standard K8s manifests, then converted to compose:

Helm charts (helmfile / helm / kustomize)
    |  helmfile template / helm template / kustomize build
    v
K8s manifests (Deployments, Services, ConfigMaps, Secrets, Ingress...)
    |  helmfile2compose.py (distribution) / dekube.py (bare core)
    v
compose.yml + reverse proxy config + configmaps/ + secrets/

A dedicated helmfile environment (e.g. compose) typically disables K8s-only infrastructure (cert-manager, ingress controller, reflector) and adjusts defaults for compose.

For the internal package structure, module layout, and build system, see dekube-engine.

Converter dispatch

Manifests are classified by kind and dispatched to converter classes. Each converter handles one or more K8s kinds and returns a ConverterResult (ingress entries only) or ProviderResult (compose services + ingress entries).

kinds is immutable. The Converter base class declares kinds: tuple = (). Extensions override it as a class attribute — this replaces the default entirely, which is the intended pattern. The base default is an empty tuple (not a list) to prevent accidental mutation: if an extension forgot to override kinds and called self.kinds.append(...), a mutable default would silently corrupt the shared class attribute across all converter instances. A tuple makes this fail loudly.
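A minimal sketch of the pattern described above; the extension class name and its kinds are illustrative, not the actual registered converters:

```python
class Converter:
    """Base converter: extensions override `kinds` as a class attribute."""
    kinds: tuple = ()  # immutable default, shared across instances

    def convert(self, manifests):
        raise NotImplementedError


class CertManagerConverter(Converter):
    # Overriding replaces the default entirely: the intended pattern.
    kinds = ("Certificate", "Issuer", "ClusterIssuer")


# A forgotten override fails loudly instead of corrupting shared state:
broken = Converter()
try:
    broken.kinds.append("Certificate")  # tuples have no .append
except AttributeError as exc:
    print("fails loudly:", exc)
```

With a list default, the `append` would have succeeded and silently mutated `Converter.kinds` for every converter instance.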

K8s manifests
    | parse + classify by kind
    | dispatch to converters (built-in or external, same interface)
    v
compose.yml + reverse proxy config
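The classify-and-dispatch loop above can be sketched as follows; the function shape and priority handling are assumptions for illustration, not the engine's actual code (the real dispatch also passes a ConvertContext and distinguishes ConverterResult from ProviderResult):

```python
from collections import defaultdict

def dispatch(manifests, converters):
    """Group parsed manifests by K8s kind, then hand each converter the
    manifests for the kinds it registered (simplified sketch)."""
    by_kind = defaultdict(list)
    for manifest in manifests:
        by_kind[manifest.get("kind", "Unknown")].append(manifest)

    results = []
    # Lower priority runs earlier.
    for conv in sorted(converters, key=lambda c: getattr(c, "priority", 1000)):
        claimed = [m for kind in conv.kinds for m in by_kind.get(kind, [])]
        if claimed:
            results.append(conv.convert(claimed))
    return results
```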

The Eight Monks + bundled transforms (distribution)

The bare dekube-engine has no built-in converters — all registries are empty. The helmfile2compose distribution bundles 9 extensions via _auto_register().

Each lives in its own repo, referenced in distribution.json. The distribution assembles them at build time via dekube-manager.

External extensions (providers and converters)

Loaded via --extensions-dir. Each .py file (or one-level subdirectory with .py files) is scanned for classes with kinds and convert(). Providers (keycloak, servicemonitor) produce compose services; converters (cert-manager, trust-manager) produce synthetic resources. Both share the same code interface and are sorted by priority (lower = earlier; default 1000 for Converter, 50 for IndexerConverter, 500 for Provider) and registered into the dispatch loop.

.dekube/extensions/
├── keycloak.py                        # flat file
├── dekube-converter-cert-manager/      # cloned repo
│   ├── cert_manager.py                # converter class
│   └── requirements.txt
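A loader along these lines could implement the scan described above; this is a hypothetical sketch of the classification rule (kinds + convert() → converter, transform() without kinds → transform), not the actual dekube-engine loader:

```python
import importlib.util
import inspect
import pathlib

def load_extensions(ext_dir):
    """Scan .py files (flat and one level deep) for extension classes,
    classify them, and sort each list by priority (sketch)."""
    base = pathlib.Path(ext_dir)
    converters, transforms = [], []
    for path in sorted(list(base.glob("*.py")) + list(base.glob("*/*.py"))):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        for _, cls in inspect.getmembers(mod, inspect.isclass):
            if hasattr(cls, "kinds") and hasattr(cls, "convert"):
                converters.append(cls())       # converter or provider
            elif hasattr(cls, "transform"):
                transforms.append(cls())       # transform: no kinds
    converters.sort(key=lambda c: getattr(c, "priority", 1000))
    transforms.sort(key=lambda t: getattr(t, "priority", 1000))
    return converters, transforms
```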

See Writing converters for the full guide.

External transforms

Loaded from the same --extensions-dir as converters. The loader distinguishes them automatically: classes with transform() and no kinds are transforms. Sorted by priority (lower = earlier, default 1000). They run after all converters, alias building, and hostname truncation, but before user overrides — so overrides always have the final say.
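What a transform might look like under the stated interface; the class name, the key it strips, and the priority value are all hypothetical:

```python
class StripNodeSelectors:
    """Hypothetical transform: runs after converters, before user
    overrides, and mutates the compose services in place."""
    priority = 900  # lower = earlier; the default is 1000

    def transform(self, compose_services, ingress_entries):
        # Drop a K8s-only leftover key if a converter let it through.
        for svc in compose_services.values():
            svc.pop("x-node-selector", None)
```

Because overrides are applied after all transforms, anything a transform writes can still be corrected from the user's config.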

See Writing transforms for the full guide.

Ingress rewriters

Ingress annotation handling is dispatched through IngressRewriter classes. Each rewriter targets a specific ingress controller (identified by ingressClassName or annotation prefix) and translates its annotations into ingress entry dicts consumed by the configured IngressProvider.

The built-in HAProxyRewriter handles haproxy and empty/absent ingress classes, plus any manifest with haproxy.org/* annotations. It also acts as the default fallback when no ingressClassName is set — if no external rewriter matches first, HAProxy claims the manifest.

External rewriters are loaded from --extensions-dir alongside converters and transforms. A rewriter with the same name as a built-in one replaces it. Dispatch order: external rewriters first, then built-in.
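First-match dispatch could be sketched like this, with a matches() method assumed for illustration (the real rewriter interface may differ):

```python
def pick_rewriter(ingress, external, builtin):
    """First matching rewriter claims the manifest; external rewriters
    are tried before built-in ones (hypothetical interface)."""
    cls = ingress.get("spec", {}).get("ingressClassName")
    annotations = ingress.get("metadata", {}).get("annotations", {})
    for rw in list(external) + list(builtin):
        if rw.matches(cls, annotations):
            return rw
    return None


class HAProxyRewriter:
    name = "haproxy"

    def matches(self, cls, annotations):
        # Claims haproxy, empty/absent class, or haproxy.org/* annotations.
        return cls in (None, "", "haproxy") or any(
            key.startswith("haproxy.org/") for key in annotations)
```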

See Writing rewriters for the full guide.

What it converts

K8s kind → Compose equivalent:

  • DaemonSet / Deployment / StatefulSet — services: entries (image, env, command, volumes, ports). Init containers become separate services with restart: on-failure; the main service uses depends_on with condition: service_completed_successfully. Sidecar containers become separate services with network_mode: container:<main> (shared network namespace). Shared emptyDir volumes are promoted to named Compose volumes by the emptydir transform. Resource limits → deploy.resources.limits. Readiness/liveness probes → healthcheck. DaemonSet is treated identically to Deployment (single-machine tool).
  • Job — services: entries with restart: on-failure (migrations, superuser creation). Init containers converted the same way.
  • ConfigMap / Secret — resolved inline into environment: + generated as files for volume mounts.
  • Service (ClusterIP) — network aliases (FQDN variants resolve via compose DNS).
  • Service (ExternalName) — resolved through the alias chain (e.g. docs-media -> minio).
  • Service (NodePort / LoadBalancer) — ports: mapping.
  • Ingress — reverse proxy service + config file, dispatched to ingress rewriters by ingressClassName. The IngressProvider (Caddy by default) consumes rewriter output and produces the proxy service. Path-rewrite annotations, backend SSL, and extra_directives (rate-limit, auth, headers) are passed through as provider-agnostic entry dicts.
  • PVC / volumeClaimTemplates — host-path bind mounts (auto-registered in dekube.yaml on first run only).
  • securityContext (runAsUser) — auto-generated fix-permissions service (chown -R <uid>) for non-root bind mounts (via the fix-permissions transform).
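As an illustration of the init-container wiring described above, the conversion might look like this (a sketch with a hypothetical function name; the real converter also handles env, volumes, and the rest of the pod spec):

```python
def wire_init_containers(name, pod_spec):
    """Init containers -> separate services with restart: on-failure;
    the main service gates on service_completed_successfully (sketch)."""
    services, depends = {}, {}
    for init in pod_spec.get("initContainers", []):
        init_name = f"{name}-init-{init['name']}"
        services[init_name] = {"image": init["image"],
                               "restart": "on-failure"}
        depends[init_name] = {"condition": "service_completed_successfully"}
    main = {"image": pod_spec["containers"][0]["image"]}
    if depends:
        main["depends_on"] = depends
    services[name] = main
    return services
```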

Not converted or silently ignored

CronJobs, HPA, and PDB emit warnings. RBAC, NetworkPolicies, CRDs (unless claimed by an extension), and other cluster-only kinds are silently skipped. Unknown kinds trigger a warning. See Limitations for the full breakdown and rationale.

Processing pipeline

Thirteen steps. Each one locally reasonable. Together, they flatten a distributed system into a YAML file and a prayer.

  1. Parse manifests — recursive .yaml scan, multi-doc YAML split, classify by kind. Malformed YAML files are skipped with a warning.
  2. Index lookup data — ConfigMaps, Secrets, Services indexed for resolution during conversion.
  3. Build alias map — K8s Service name -> workload name mapping. ExternalName services resolved through chain.
  4. Build port map — K8s Service port -> container port resolution (named ports resolved via container spec). When the Service is missing from manifests, named ports fall back to a well-known port table (http → 80, https → 443, grpc → 50051).
  5. Track PVCs — from both regular volumes and volumeClaimTemplates. On first run, auto-register in config for host_path mapping. On subsequent runs, track only (config is read-only after creation).
  6. First-run init — auto-exclude K8s-only workloads, generate default config, write dekube.yaml. On subsequent runs: detect stale volume entries (config volumes not referenced by any PVC).
  7. Dispatch to converters — each converter receives its kind's manifests + a ConvertContext. Extensions run in priority order (lower first), then built-in converters. Within IngressProvider, each Ingress manifest is dispatched to the first matching IngressRewriter (by ingressClassName or annotation prefix).
  8. Post-process env — port remapping and replacements applied to all service environments (idempotent — catches extension-produced services).
  9. Build network aliases — for each K8s Service, add FQDN aliases (svc.ns.svc.cluster.local, svc.ns.svc, svc.ns) + short alias to the compose service's networks.default.aliases. FQDNs resolve natively via compose DNS — no hostname rewriting needed.
  10. Hostname truncation — services with names >63 chars get explicit hostname:.
  11. Run transforms — post-processing hooks (if loaded). Transforms mutate compose_services and ingress_entries in place.
  12. Apply overrides — deep merge from config overrides: and services: sections. Runs last so user overrides always win over transform output.
  13. Write output — compose.yml, reverse proxy config (via IngressProvider), config/secret files. The temple is rendered. The architect goes to sleep. The architect does not sleep well.
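Step 9's alias generation is small enough to sketch exactly; the variant list follows the step's description, while the function name is hypothetical:

```python
def fqdn_aliases(svc, ns):
    """Network aliases added for one K8s Service: FQDN variants plus the
    short name, all resolving natively via compose DNS (sketch)."""
    return [
        f"{svc}.{ns}.svc.cluster.local",
        f"{svc}.{ns}.svc",
        f"{svc}.{ns}",
        svc,
    ]
```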

Automatic rewrites

These happen transparently during conversion:

  • Network aliases — each compose service gets networks.default.aliases with all K8s FQDN variants (svc.ns.svc.cluster.local, svc.ns.svc, svc.ns). FQDNs in env vars, ConfigMaps, and reverse proxy upstreams resolve natively via compose DNS — no hostname rewriting needed. This preserves cert SANs for HTTPS.
  • Service aliases — K8s Services whose name differs from the workload are resolved. ExternalName services followed through the chain. The short K8s Service name is added as a network alias on the compose service.
  • Port remapping — K8s Service port -> container port in URLs. http://svc (implicit port 80) and http://svc:80 both rewritten to http://svc:8080 if the container listens on 8080. FQDN variants (svc.ns.svc.cluster.local:80) are also matched.
  • Kubelet $(VAR) expansion — $(VAR_NAME) in container command/args resolved from the container's env vars.
  • Shell $VAR escaping — $VAR in command/entrypoint escaped to $$VAR for compose.
  • String replacements — user-defined replacements: from config applied to env vars, ConfigMap files, and reverse proxy upstreams.
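The port-remapping rule can be sketched as a string rewrite; this is a simplified sketch assuming http:// URLs and a hypothetical function signature:

```python
import re

def remap_ports(value, svc, svc_port, container_port, ns="default"):
    """Rewrite K8s Service-port URLs to the container port, covering
    the explicit, implicit-port-80, and FQDN variants (sketch)."""
    hosts = [f"{svc}.{ns}.svc.cluster.local", f"{svc}.{ns}.svc",
             f"{svc}.{ns}", svc]
    for host in hosts:
        value = value.replace(f"http://{host}:{svc_port}",
                              f"http://{host}:{container_port}")
        if svc_port == 80:
            # http://svc with no explicit port implies 80; make it explicit.
            value = re.sub(rf"http://{re.escape(host)}(?=[/\s]|$)",
                           f"http://{host}:{container_port}", value)
    return value
```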

Beyond single-host: Docker Swarm

The helmfile2compose distribution targets a single Docker host — bind mounts, no replicas, Caddy as the default reverse proxy, fix-permissions assuming a local filesystem. These are distribution choices, not engine limitations.

dekube-engine produces a service dict and dumps it as YAML — it doesn't validate or restrict what keys extensions put in. A provider can write deploy.replicas, deploy.placement, or any other section, and it will pass through to the output unchanged. The contracts (Converter, Provider, IngressRewriter, ConvertContext) have nothing mono-host-specific.
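A quick illustration of the pass-through claim; the service dict is hypothetical, and JSON stands in for the engine's YAML dump to keep the sketch stdlib-only:

```python
import json

# Hypothetical provider output: Swarm-only keys alongside standard ones.
service = {
    "image": "registry.example/app:1.2",
    "deploy": {
        "replicas": 3,
        "placement": {"constraints": ["node.role == worker"]},
    },
}

# The engine serializes whatever dict it is handed: no schema validation,
# no key filtering, so deploy.* reaches the output file untouched.
rendered = json.dumps({"services": {"app": service}}, indent=2)
print(rendered)
```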

The format situation is unclear. Swarm and Compose both use YAML files with the same deploy: key for replicas, placement, and rolling updates — but they don't use the same spec. The Swarm docs say docker stack deploy uses the legacy Compose v3 format and that the current Compose Specification "isn't compatible". The Compose Deploy Specification documents deploy: as part of the current spec, supported by docker compose. Two pages, same domain, easy to misread. Docker's own docs AI confirmed: Swarm is stuck on legacy v3.

Docker backported Swarm's deploy: key into the Compose Specification without upgrading Swarm to support the Compose Specification — so the key exists in both formats, but the surrounding spec diverged. Swarm still wants version: "3.x" in the header; the Compose Spec dropped the version: field entirely. A Swarm-oriented distribution would therefore need to produce Compose v3 — and a different set of monks:

Concern → current monk (single-host) vs Swarm equivalent:

  • Volumes — bind mounts (host_path) vs distributed volume drivers (NFS, GlusterFS) or named volumes with driver_opts.
  • Replicas — ignored (single instance) vs deploy.replicas, placement constraints, rolling updates.
  • Reverse proxy — Caddy (standalone) vs Traefik in Swarm mode (service mesh, automatic service discovery).
  • Permissions — fix-permissions (local chown) vs likely unnecessary (volume drivers handle ownership) or a different strategy.
  • Ingress — single Caddyfile vs Traefik labels on services, Let's Encrypt via Traefik.

The core engine, indexers, and most transforms would carry over unchanged. The providers and the ingress stack are where the opinions live.

Whether this is a good idea is a separate question. Swarm runs on a spec its own maintainers deprecated without ever upgrading — a format whose future is unclear at best, powering an orchestrator still billed as production-grade. If you need multi-node scheduling, service mesh, and rolling updates, you already have Kubernetes. The case for helmfile2compose is clear: run the same apps on a single machine without the cluster overhead. The case for writing dekube extensions for Swarm is murkier — you'd be converting from one orchestrator to a less capable one. But the engine will not judge (it just reads YAML manifests and produces a .yml). I — having built this abomination, pot, kettle, yadda — will.

Docker/Compose gotchas

Large port ranges, hostNetwork mapping, S3 virtual-hosted DNS — these are Docker/Compose runtime limitations, not conversion bugs. See Pitfalls — Docker/Compose gotchas for the list, and Limitations for what gets lost in translation.