$ gcloud container clusters create mycluster
$ gcloud container clusters get-credentials mycluster
Hello World → [H,e,l,l,o, ,W,o,r,l,d]
Hello World → dlroW olleH
Hello World → hello world
Hello World → HELLO WORLD
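Each of the four demo services applies one simple string transformation. As a rough illustration (the real services are separate microservices written in Ruby, Python, Node, and Go; the function names below are ours, not from the article), the transforms boil down to:

```python
# Illustrative only: one-liners matching the four demo transformations.
# Function names are hypothetical; each real service wraps its transform
# in a small HTTP server.

def arrayify(s):
    return list(s)      # "Hello World" -> ['H', 'e', 'l', 'l', 'o', ...]

def reverse(s):
    return s[::-1]      # "Hello World" -> "dlroW olleH"

def lowercase(s):
    return s.lower()    # "Hello World" -> "hello world"

def uppercase(s):
    return s.upper()    # "Hello World" -> "HELLO WORLD"
```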
FROM ruby:2.3.0-onbuild
CMD ["ruby", "./arrayify.rb"]
FROM python:2.7.11-onbuild
CMD [ "python", "./app.py" ]
FROM node:5.7.0-onbuild
FROM golang:1.6-onbuild
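The app files themselves (arrayify.rb, app.py, and so on) aren't shown here. As a sketch of what one of them might look like, here is a minimal HTTP service for the reverse transform (Python 3 syntax for readability, although the Dockerfile above pins Python 2.7; the handler class and the ?text= query parameter are our invention, not the article's code):

```python
# Hypothetical sketch of a "reverse" service: a tiny HTTP server that
# reverses whatever string it receives in a ?text= query parameter.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ReverseHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the string from e.g. /?text=Hello+World
        query = parse_qs(urlparse(self.path).query)
        text = query.get("text", [""])[0]
        body = text[::-1].encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run(port=80):
    # The Dockerfiles expose the app on container port 80.
    HTTPServer(("", port), ReverseHandler).serve_forever()
```

In the container this would be started with run(80), matching the port the Kubernetes manifests below expect.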
$ docker build -t gcr.io/<PROJECT_ID>/<CONTAINER_NAME>:<CONTAINER_VERSION> .
Here <PROJECT_ID> is your Google Cloud project ID, <CONTAINER_NAME> is the name of the container image, and <CONTAINER_VERSION> is its tag; together they form the full image name, gcr.io/<PROJECT_ID>/<CONTAINER_NAME>:<CONTAINER_VERSION>.
$ docker run -ti -p 8080:80 <CONTAINER_NAME>
$ docker-machine list
$ docker-machine ip <NAME_OF_INSTANCE>
$ gcloud docker push <CONTAINER_NAME>
$ kubectl run <SERVICE_NAME> \
    --image=<CONTAINER_NAME> \
    --port=80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: arrayify
spec:
  replicas: 3
  template:
    metadata:
      labels:
        name: arrayify-pods
    spec:
      containers:
      - image: <CONTAINER_NAME>
        name: arrayify-container
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: http-server
$ kubectl apply -f deployment.yaml
$ kubectl get deployments
$ kubectl get pods
apiVersion: v1
kind: Service
metadata:
  name: arrayify
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: arrayify-pods
$ kubectl create -f service.yaml
$ kubectl get svc
resolver 10.11.240.10 valid=5s;
$ kubectl get svc kube-dns --namespace=kube-system
upstream arrayify-backend {
    zone arrayify-backend 64k;
    server arrayify.default.svc.cluster.local resolve;
}
arrayify.default.svc.cluster.local
server {
    listen 80;
    status_zone backend-servers;

    location /arrayify/ {
        proxy_pass http://arrayify-backend/;
    }
}
$ kubectl scale deployment arrayify --replicas=5
$ kubectl scale deployment arrayify --replicas=1
$ kubectl autoscale deployment arrayify --min=1 --max=5 --cpu-percent=80
$ docker build -t gcr.io/<PROJECT_ID>/arrayify:0.2 .
$ gcloud docker push gcr.io/<PROJECT_ID>/arrayify:0.2
$ kubectl rollout undo deployment/arrayify
Now, we could use Splunk, Fluentd, Logstash, or some other great service for this, but our client is familiar with BigQuery, they like the SQL interface, and they have other datasets stored there already. As a bonus, they can run their own reports instead of having to talk to developers (and nobody wants to do that, not even developers).