Kubernetes Jobs are useful for one-off tasks. However, there are some problems when you have to define sidecar containers in your job spec. Primarily, the job’s pod will not complete while any sidecar container is still running, and sidecars such as logging agents or proxies for other services usually do not exit on their own. Furthermore, every container must terminate with an exit code of 0 or else the job may restart.

One suggested solution is to have a script watch for the creation of a file on a shared volume. When the script detects the file, it exits, which in turn terminates the sidecar container. For instance, here is a sample job spec in which the sidecar container waits for a file to be created on a shared volume:

apiVersion: v1
kind: ConfigMap
metadata:
  name: watchfile-config-map
  labels:
    name: watchfile-config-map
data:
  watchfile.sh: |-
    # assumes an Alpine-based image where apk is available
    apk update && apk add inotify-tools
    echo "waiting for file..."
    file=/var/lib/sharedwatchfile/file.unlock
    while [ ! -f "$file" ]
    do
      inotifywait -qqt 10 -e create -e moved_to "$(dirname $file)"
    done
    echo "found file"
---
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      containers:
      - name: db-migration
        image: <your job image>
        command: ["/bin/sh",
                  "-c",
                  "<run db migration script> && touch /var/lib/sharedwatchfile/file.unlock"]
        volumeMounts:
        - name: varlibsharedwatchfile
          mountPath: /var/lib/sharedwatchfile
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/bin/sh",
                  "-c",
                  "/cloud_sql_proxy -instances=<your db instance>=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json & /bin/sh /var/lib/watchfile/watchfile.sh"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: varlibwatchfile
          mountPath: /var/lib/watchfile
          readOnly: true
        - name: varlibsharedwatchfile
          mountPath: /var/lib/sharedwatchfile
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: sql-kubernetes-proxy-credentials
      - name: varlibwatchfile
        configMap:
          name: watchfile-config-map
          items:
            - key: watchfile.sh
              path: watchfile.sh
      - name: varlibsharedwatchfile
        emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4

inotifywait is used to be a bit more efficient than polling with sleep. Note that in the example above the sidecar image itself is not modified: the container’s command is slightly changed and the watch script is mounted into the sidecar via a ConfigMap volume.
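
To try it out, apply the manifests and wait for the Job to finish. Assuming the manifests above are saved to job.yaml, something like the following should work:

kubectl apply -f job.yaml
# blocks until the db-migration job reports completion (or the timeout expires)
kubectl wait --for=condition=complete job/db-migration --timeout=300s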

While watching for a file to be created is not exactly ideal, it is a quick workable hack until a general solution is available.

When using Terraform, I find that storing state remotely has great benefits. If you work with others or across multiple machines, remote state lets everyone re-use the Terraform-defined infrastructure without manually copying the state file around. More importantly, it allows a “core” set of resources to be defined and owned by one project while its root-level outputs remain re-usable in other related Terraform projects.

To store state remotely, add a backend configuration such as:

terraform {
  backend "s3" {
    bucket = "<your bucket name>"
    key = "default"
    region = "us-east-1"
  }
}

Then you need to run terraform init after adding the backend to your Terraform config.
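
For example, from the directory containing your Terraform config:

# initialize the new backend; if local state already exists, Terraform
# should prompt to copy it into the S3 bucket
terraform init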

To import remote state (say you have a core infrastructure Terraform project), add a terraform_remote_state data source:

data "terraform_remote_state" "core_infrastructure" {
  backend = "s3"
  workspace = "${terraform.workspace}"
  config {
    bucket = "<bucket with state to import>"
    key = "default"
    region = "us-east-1"
  }
}

The core infrastructure I generally define includes DNS zones (so related projects can import the DNS managed zone identifier and create subdomains), wildcard SSL certificates for test domains, and general repository definitions for where the code is stored.
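
For example, a related project could create a subdomain record in the core project’s managed zone. The output name dns_managed_zone below is hypothetical; it assumes the core project exposes the zone name as a root-level output:

# hypothetical: assumes the core project defines an output named "dns_managed_zone"
resource "google_dns_record_set" "app" {
  managed_zone = "${data.terraform_remote_state.core_infrastructure.dns_managed_zone}"
  name         = "app.example.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["203.0.113.10"]
}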

If you have multiple users, you will also need to look into state locking support for your backend (for the S3 backend, this is typically a DynamoDB table).

My $HOME/.npmrc has the following config:

prefix=~/.npm-global

With this, globally installed npm modules go into a directory owned by my user instead of a system directory, which avoids the write permission errors that occur when npm tries to install into directories my user does not own.
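
Since executables from globally installed packages end up in the prefix’s bin directory, that directory also needs to be on your PATH. In your shell profile, something like:

# make globally installed npm executables available
export PATH="$HOME/.npm-global/bin:$PATH"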