
Your docker-compose.yml, Running on Kubernetes: How Filess Hosting Works

· 8 min read
Filess Team
Database Experts

Every indie developer has been there: you've got a docker-compose.yml that works perfectly on your laptop. Redis, Postgres, your app — all wired up, healthy, humming.

Then you try to deploy it.

You either wrangle with a VPS, a Dockerfile, a reverse proxy, SSL certificates, and systemd units — or you pay a managed platform that doesn't understand docker-compose.yml natively and forces you to rewrite everything in their DSL.

Filess Hosting takes your docker-compose.yml and runs it on Kubernetes, with zero Kubernetes knowledge required. Here's the full technical picture of what happens when you push a commit.

The Build Pipeline: From Git to Running Container

When you trigger a deployment (manually, via API, or automatically on git push), the pipeline goes through three phases.

Phase 1: Preflight Analysis

Before building anything, we analyze your repository:

  • Parse the docker-compose.yml and validate all service definitions.
  • Resolve ${VARIABLE} interpolations against your environment variables, catching missing required variables early with clear error messages.
  • Validate exposed ports, Dockerfile paths, build context paths, and Docker build args.
  • Detect the build system: if you have a Dockerfile, we use it. If you have a docker-compose.yml with build: stanzas, we build all services. If neither, we fall back to Nixpacks for automatic detection.

This step fails fast. There's no 10-minute wait for a build to fail because you had a typo in your Compose file.
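The fail-fast idea can be illustrated with a minimal sketch. This is not the actual Filess preflight code — just an assumption-laden toy that scans a Compose file's text for `${VAR}` references and reports which required variables are missing before any build starts:

```python
import re

def preflight_missing_vars(compose_text: str, env: dict) -> list[str]:
    """Return variable names referenced as ${VAR} (or ${VAR:?err}) that are
    absent from env. References with a fallback like ${VAR:-default} never
    count as missing, because interpolation can always resolve them."""
    missing = []
    for m in re.finditer(r"\$\{(\w+)(:?[-?+][^}]*)?\}", compose_text):
        name, op = m.group(1), m.group(2) or ""
        has_fallback = op.startswith("-") or op.startswith(":-")
        if name not in env and not has_fallback and name not in missing:
            missing.append(name)
    return missing

compose = "environment:\n  DATABASE_URL: ${DATABASE_URL}\n  PORT: ${PORT:-3000}"
print(preflight_missing_vars(compose, {}))  # ['DATABASE_URL'] — PORT has a default
```

A check like this runs in milliseconds, which is what makes "fail before the build" practical.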

Phase 2: Building — Kaniko in the Cluster

We build container images using Kaniko, Google's daemonless Kubernetes-native image builder. This runs entirely inside the cluster — no Docker daemon, no privileged containers.

The build process:

  1. A Kaniko job is created in your project's Kubernetes namespace.
  2. Kaniko clones your repository (via HTTPS or SSH, depending on your repo credential configuration).
  3. It executes your Dockerfile steps layer by layer.
  4. The final image is pushed to our private container registry with a content-addressed tag based on the commit SHA.

Build logs stream in real time to the dashboard. If the build fails, the error is classified by origin (build_failed, preflight_failed, registry_error, etc.) and surfaced with a user-facing explanation rather than a raw Kubernetes error.
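Error classification of this kind usually boils down to mapping a pipeline phase plus an error signal onto a user-facing code. A hypothetical sketch (the function name and matching rules are illustrative, not the real classifier):

```python
def classify_failure(phase: str, message: str) -> str:
    """Map a raw failure to one of the user-facing error codes from the
    article (preflight_failed, registry_error, build_failed)."""
    if phase == "preflight":
        return "preflight_failed"
    # A push/registry keyword in the Kaniko output suggests a registry problem
    # rather than a Dockerfile problem.
    if "push" in message or "registry" in message:
        return "registry_error"
    return "build_failed"

print(classify_failure("build", "error pushing image to registry"))  # registry_error
```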

Phase 3: Runtime — Kubernetes Deployments

Once the image is in the registry, we apply Kubernetes resources:

  • A Deployment for each service, with resource limits derived from your plan.
  • ConfigMaps for non-sensitive environment variables.
  • Secrets for sensitive environment variables (anything you mark as secret in the dashboard).
  • PersistentVolumeClaims for any volumes you've defined.
  • An HTTPRoute (Gateway API) that maps your custom domain or subdomain to the main service.
  • A TLS certificate via our cert-manager integration.

The deployment is rolled out with a rolling update strategy. Zero downtime by default.
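As a rough picture of what gets applied per service, here is a sketch that assembles a minimal Kubernetes Deployment body as a Python dict. The field subset and helper name are assumptions for illustration; the real manifests carry more (labels, probes, service accounts):

```python
def deployment_manifest(name: str, image: str, port: int, cpu: str, mem: str) -> dict:
    """Build a minimal apps/v1 Deployment with a rolling-update strategy
    and plan-derived resource limits (illustrative field subset)."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "strategy": {"type": "RollingUpdate"},  # zero-downtime rollout
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                        "resources": {"limits": {"cpu": cpu, "memory": mem}},
                    }]
                },
            },
        },
    }

m = deployment_manifest("my-app-web",
                        "registry.filess.io/my-project/web:sha256-abc123",
                        3000, "500m", "512Mi")
print(m["spec"]["strategy"]["type"])  # RollingUpdate
```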


The Docker Compose Translator

The most technically interesting part is how we translate docker-compose.yml into Kubernetes resources. This isn't a direct 1:1 mapping — Compose and Kubernetes have different models — so we built our own translator.

A docker-compose.yml like this:

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: redis://cache:6379
    depends_on:
      - cache
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  redis_data:

Gets translated into a ComposeRuntimeSpec:

{
  "version": 1,
  "composePath": "docker-compose.yml",
  "mainService": "web",
  "mainServiceSlug": "web",
  "mainPort": 3000,
  "services": [
    {
      "name": "web",
      "slug": "web",
      "runtimeName": "my-app-web",
      "image": "registry.filess.io/my-project/web:sha256-abc123",
      "ports": [3000],
      "env": [
        { "key": "DATABASE_URL", "runtimeEnvKey": "DATABASE_URL" },
        { "key": "REDIS_URL", "value": "redis://my-app-cache:6379" }
      ],
      "healthcheck": {
        "command": ["CMD", "curl", "-f", "http://localhost:3000/health"],
        "intervalSeconds": 30,
        "timeoutSeconds": 10,
        "retries": 3
      }
    },
    {
      "name": "cache",
      "slug": "cache",
      "runtimeName": "my-app-cache",
      "image": "redis:7-alpine",
      "ports": [],
      "env": [],
      "volumes": [{ "mountPath": "/data", "claimName": "my-app-redis-data" }]
    }
  ],
  "buildTasks": [
    {
      "name": "web",
      "slug": "web",
      "contextPath": ".",
      "dockerfilePath": "Dockerfile",
      "dockerBuildArgs": [],
      "imageTag": "registry.filess.io/my-project/web:sha256-abc123"
    }
  ]
}

This spec is stored in the database and used to idempotently reconcile the Kubernetes state. If you push a new commit, we diff the spec and apply only what changed.
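The diff-then-apply step can be sketched in a few lines. This is a simplified assumption about how reconciliation might compare two stored specs — the real reconciler also diffs volumes, routes, and env — but it captures the idea of applying only changed services:

```python
def changed_services(old_spec: dict, new_spec: dict) -> list[str]:
    """Return names of services whose definition differs between two
    ComposeRuntimeSpec snapshots, so only those get re-applied."""
    old = {s["name"]: s for s in old_spec.get("services", [])}
    new = {s["name"]: s for s in new_spec.get("services", [])}
    names = set(old) | set(new)
    # A service counts as changed if it was added, removed, or any field differs.
    return sorted(n for n in names if old.get(n) != new.get(n))

old = {"services": [{"name": "web", "image": "web:sha256-abc123"},
                    {"name": "cache", "image": "redis:7-alpine"}]}
new = {"services": [{"name": "web", "image": "web:sha256-def456"},
                    {"name": "cache", "image": "redis:7-alpine"}]}
print(changed_services(old, new))  # ['web'] — the cache service is untouched
```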

Variable Interpolation

We implement the full Docker Compose variable interpolation spec:

Syntax               Behavior
${VAR}               Use value of VAR; empty string if unset
${VAR:-default}      Use VAR if set and non-empty, else default
${VAR-default}       Use VAR if set (even if empty), else default
${VAR:?error}        Error if VAR is unset or empty
${VAR?error}         Error if VAR is unset
${VAR:+replacement}  Use replacement if VAR is set and non-empty
$$                   Literal $

This matters because many real docker-compose.yml files use these constructs to make environments portable.
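The whole table fits in one substitution function. A minimal Python sketch of the syntax above (nested expressions and braceless `$VAR` are omitted; this is not the Filess implementation):

```python
import re

def interpolate(value: str, env: dict) -> str:
    """Apply Docker Compose-style variable interpolation per the table above."""
    def sub(m):
        if m.group(0) == "$$":
            return "$"  # $$ escapes a literal dollar sign
        name, op, arg = m.group("name"), m.group("op") or "", m.group("arg") or ""
        val = env.get(name)
        if op == ":-":
            return val if val else arg              # default if unset or empty
        if op == "-":
            return val if val is not None else arg  # default only if unset
        if op == ":?":
            if not val:
                raise ValueError(arg or f"{name} is unset or empty")
            return val
        if op == "?":
            if val is None:
                raise ValueError(arg or f"{name} is unset")
            return val
        if op == ":+":
            return arg if val else ""               # replacement if set, non-empty
        return val or ""                            # plain ${VAR}
    pattern = r"\$\$|\$\{(?P<name>\w+)(?:(?P<op>:-|-|:\?|\?|:\+)(?P<arg>[^}]*))?\}"
    return re.sub(pattern, sub, value)

print(interpolate("redis://${HOST:-cache}:${PORT:-6379}", {}))  # redis://cache:6379
```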


Preview Environments

This is the feature that changes how you do code review.

Every branch (or pull request) can get its own ephemeral deployment at a unique URL:

https://feature-login-redesign--my-app.filess.app

The URL is derived from a hash of the branch name and project ID — deterministic and permanent for the lifetime of the branch. When the PR is merged, the preview environment is automatically cleaned up.
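One way such a deterministic subdomain could be derived — this scheme (slugified branch name plus a short hash of branch and project ID) is an illustrative assumption, not the exact Filess algorithm:

```python
import hashlib
import re

def preview_subdomain(branch: str, project_id: str) -> str:
    """Derive a stable preview subdomain: same branch + project always
    yields the same result, different projects never collide."""
    slug = re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")
    digest = hashlib.sha256(f"{branch}:{project_id}".encode()).hexdigest()[:6]
    return f"{slug}--{digest}"

print(preview_subdomain("feature/login-redesign", "proj_42"))
```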

The Prisma model behind this:

HostingPreviewEnvironment {
  sourceType: branch | commit | tag
  sourceRef: "feature/login-redesign"
  subdomain: "feature-login-redesign--abc123"
  closedAt: null (open) or DateTime (closed)
}

Preview environments share the same build infrastructure as production. They use the same container image (built once, deployed to each environment) but get their own Kubernetes namespace, their own environment variables, and their own URL.


Persistent Volumes

Stateful apps need persistent storage. You can define volumes in the dashboard or inherit them from your docker-compose.yml:

volumes:
  uploaded_files:
    driver: local

Each volume becomes a PersistentVolumeClaim in Kubernetes, mounted into the container at the specified path. Mount paths must be under /app or /data — we enforce this to prevent accidental mounts of system directories.

Volume sizes are specified in GiB and billed accordingly. Volumes survive deployments: if you push a new commit, your uploaded files stay where they were.
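The mount-path rule is simple to enforce. A minimal sketch of such a check (illustrative, including rejection of `..` path traversal; not the actual Filess validator):

```python
from pathlib import PurePosixPath

ALLOWED_ROOTS = ("/app", "/data")  # per the rule above

def validate_mount_path(path: str) -> bool:
    """Accept only absolute paths at or under an allowed root,
    rejecting relative paths and '..' traversal tricks."""
    p = PurePosixPath(path)
    if not p.is_absolute() or ".." in p.parts:
        return False
    return any(p == PurePosixPath(r) or PurePosixPath(r) in p.parents
               for r in ALLOWED_ROOTS)

print(validate_mount_path("/data/uploads"))  # True
print(validate_mount_path("/etc/passwd"))    # False
```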


Health Checks and Restart Policies

If your docker-compose.yml defines a healthcheck, we translate it into a Kubernetes livenessProbe and readinessProbe. This means:

  • Kubernetes won't route traffic to your pod until it's ready.
  • If a pod becomes unhealthy, Kubernetes replaces it automatically.
  • The deployment rollout waits for new pods to become healthy before terminating old ones.
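The translation from a Compose healthcheck to a Kubernetes probe is mostly field renaming. A sketch under the assumption that the parsed healthcheck looks like the ComposeRuntimeSpec shown earlier (function name illustrative):

```python
def probe_from_healthcheck(hc: dict) -> dict:
    """Translate a Compose healthcheck into a Kubernetes exec probe body,
    usable as both livenessProbe and readinessProbe."""
    cmd = hc["command"]
    if cmd and cmd[0] == "CMD":
        cmd = cmd[1:]  # Compose's ["CMD", ...] prefix is not part of the exec argv
    return {
        "exec": {"command": cmd},
        "periodSeconds": hc.get("intervalSeconds", 30),
        "timeoutSeconds": hc.get("timeoutSeconds", 10),
        "failureThreshold": hc.get("retries", 3),
    }

probe = probe_from_healthcheck({
    "command": ["CMD", "curl", "-f", "http://localhost:3000/health"],
    "intervalSeconds": 30, "timeoutSeconds": 10, "retries": 3,
})
print(probe["exec"]["command"])  # ['curl', '-f', 'http://localhost:3000/health']
```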

Restart policies are configurable:

  • Always (default): Restart on any exit.
  • OnFailure: Only restart on non-zero exit codes.
  • Never: Don't restart. Useful for one-shot job containers.

Custom Domains and TLS

Point any domain at Filess by adding a CNAME record:

app.yourdomain.com → cname.filess.app

Add the domain in the dashboard. We verify DNS ownership and provision a TLS certificate automatically. The certificate auto-renews before expiration. You don't manage any of this.

For wildcard certificates (*.yourdomain.com), contact support.


Maintenance Mode

You can enable maintenance mode on any project without a deployment. When enabled, all traffic returns a configurable HTTP status code (503 by default) with a static maintenance page. This is a gateway-level feature — your containers don't need to change.

Bypass the maintenance page during a live incident by sending the X-Filess-Bypass-Maintenance header (value configured in the dashboard). This lets your team verify the fix before taking the site out of maintenance mode.
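Conceptually, the gateway decision is a one-header check. A toy sketch (the header name comes from the article; the function shape and token handling are assumptions):

```python
def maintenance_response(headers: dict, bypass_token: str, status: int = 503):
    """Gateway-level decision: serve the maintenance page unless the request
    carries the configured bypass header. Returns None to pass through."""
    if headers.get("X-Filess-Bypass-Maintenance") == bypass_token:
        return None  # team member verifying the fix — route to the app
    return {"status": status, "body": "<h1>Down for maintenance</h1>"}

print(maintenance_response({"X-Filess-Bypass-Maintenance": "s3cret"}, "s3cret"))  # None
```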


Deployment Rollback

Every deployment is immutable. The container image for commit abc123 will always be registry.filess.io/my-project/web:sha256-abc123. Rolling back is just re-deploying a previous commit — the image is already in the registry, so it's fast.

# Roll back via API
curl -X POST https://api.filess.io/v1/.../deployments/{previousDeploymentId}/rollback \
  -H "Authorization: Bearer $FILESS_TOKEN"

There's no "undo" button that modifies history. You roll forward to a previous state. Clean audit trail.


Build Logs and Runtime Logs

All logs are available in the dashboard and via API:

  • Build logs: The full Kaniko output for each build. Captured and stored so you can debug failures from last week without hunting through a CI system.
  • Runtime logs: Live and historical logs from your running containers. Filter by service, time range, and log level.

Log redaction is automatic. We scan build and runtime logs for known secret patterns (registry credentials, API keys) and redact them before storing or displaying.
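Pattern-based redaction typically looks something like this sketch. The two patterns below are hypothetical examples — a real redactor ships many more, and these are not the actual Filess rules:

```python
import re

# Hypothetical secret patterns: bearer tokens and key=value style API keys.
SECRET_PATTERNS = [
    re.compile(r"(?i)(authorization:\s*bearer\s+)\S+"),
    re.compile(r"(?i)(api[_-]?key\s*[=:]\s*)\S+"),
]

def redact(line: str) -> str:
    """Replace the secret portion of any matching line, keeping the prefix
    so logs stay readable after redaction."""
    for pat in SECRET_PATTERNS:
        line = pat.sub(r"\1[REDACTED]", line)
    return line

print(redact("Authorization: Bearer abc.def.ghi"))  # Authorization: Bearer [REDACTED]
```

Redacting before storage (not just before display) matters: a secret that never hits disk can't leak from a backup.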


What's Next

Filess Hosting is built for developers who want to ship, not manage infrastructure. If you've been looking for a platform that respects your existing docker-compose.yml and doesn't ask you to rewrite your entire setup, give it a try.

Pair it with a Filess managed database and you've got a full production stack — app + database — in one dashboard.

Deploy your first app on Filess →