class: title, self-paced Kubernetes 101
.nav[*Self-paced version*] .debug[ ``` ``` These slides have been built from commit: 8ef6219 [common/title.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/title.md)] --- class: title, in-person Kubernetes 101
.footnote[ **Be kind to the WiFi!**
*Don't use your hotspot.*
*Don't stream videos or download big files during the workshop.*
*Thank you!* **Slides: http://container.training/** ] .debug[[common/title.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/title.md)] --- ## Intros - Hello! We are: - .emoji[β¨] Ashley ([@ashleymcnamara](https://twitter.com/ashleymcnamara)) - .emoji[π] Brian ([@bketelsen](https://twitter.com/bketelsen)) - The workshop will run from 13:30-15:00 - Feel free to interrupt for questions at any time - *Especially when you see full screen container pictures!* .debug[[logistics.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/logistics.md)] --- ## About these slides - All the content is available in a public GitHub repository: https://github.com/jpetazzo/container.training - You can get updated "builds" of the slides there: http://container.training/ -- - Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ... .footnote[.emoji[π] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.] .debug[[common/about-slides.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/about-slides.md)] --- class: extra-details ## Extra details - This slide has a little magnifying glass in the top left corner - This magnifying glass indicates slides that provide extra details - Feel free to skip them if: - you are in a hurry - you are new to this and want to avoid cognitive overload - you want only the most essential information - You can review these slides another time if you want, they'll be waiting for you βΊ .debug[[common/about-slides.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/about-slides.md)] --- name: toc-chapter-1 ## Chapter 1 - [Our sample application](#toc-our-sample-application) - [Kubernetes concepts](#toc-kubernetes-concepts) - [First contact with `kubectl`](#toc-first-contact-with-kubectl) - [Setting up Kubernetes](#toc-setting-up-kubernetes) .debug[(auto-generated TOC)] --- name: toc-chapter-2 ## Chapter 2 - [Running our first containers on Kubernetes](#toc-running-our-first-containers-on-kubernetes) - [Exposing containers](#toc-exposing-containers) - [Deploying a self-hosted registry](#toc-deploying-a-self-hosted-registry) - [Exposing services internally](#toc-exposing-services-internally) - [Exposing services for external access](#toc-exposing-services-for-external-access) .debug[(auto-generated TOC)] --- name: toc-chapter-3 ## Chapter 3 - [The Kubernetes dashboard](#toc-the-kubernetes-dashboard) - [Security implications of `kubectl apply`](#toc-security-implications-of-kubectl-apply) - [Scaling a deployment](#toc-scaling-a-deployment) - [Daemon sets](#toc-daemon-sets) - [Rolling updates](#toc-rolling-updates) - [Managing stacks with Helm](#toc-managing-stacks-with-helm) - [Namespaces](#toc-namespaces) - [Links and resources](#toc-links-and-resources) .debug[(auto-generated TOC)] .debug[[common/toc.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/toc.md)] --- ## Hands-on - All hands-on sections are clearly identified, like the gray rectangle below .exercise[ - This is the stuff you're supposed to do! 
- Go to http://container.training/ to view these slides ] - Each person gets a private cluster of cloud VMs (not shared with anybody else) - All you need is a computer (or even a phone or tablet!), with: - an internet connection - a web browser - an SSH client .debug[[common/prereqs.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/prereqs.md)] --- class: in-person ## Connecting to our lab environment .exercise[ - Log into the first VM (`node1`) with your SSH client - Check that you can SSH (without password) to `node2`: ```bash ssh node2 ``` - Type `exit` or `^D` to come back to `node1` ] If anything goes wrong β ask for help! .debug[[common/prereqs.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/prereqs.md)] --- ## Versions installed - Kubernetes 1.11.0 - Docker Engine 18.03.1-ce - Docker Compose 1.21.1 .exercise[ - Check all installed versions: ```bash kubectl version docker version docker-compose -v ``` ] .debug[[kube/versions-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/versions-k8s.md)] --- class: extra-details ## Kubernetes and Docker compatibility - Kubernetes 1.10.x only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies) -- class: extra-details - Are we living dangerously? -- class: extra-details - "Validates" = continuous integration builds - The Docker API is versioned, and offers strong backward-compatibility (If a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way) .debug[[kube/versions-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/versions-k8s.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-our-sample-application class: title Our sample application .nav[ [Previous section](#toc-) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-kubernetes-concepts) ] .debug[(automatically generated title slide)] --- # Our sample application - We will clone the GitHub repository onto our `node1` - The repository also contains scripts and tools that we will use through the workshop .exercise[ - Clone the repository on `node1`: ```bash git clone git://github.com/jpetazzo/container.training ``` ] (You can also fork the repository on GitHub and clone your fork if you prefer that.) .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- ## Downloading and running the application Let's start this before we look around, as downloading will take a little time... .exercise[ - Go to the `dockercoins` directory, in the cloned repo: ```bash cd ~/container.training/dockercoins ``` - Use Compose to build and run all containers: ```bash docker-compose up ``` ] Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs. .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- ## More detail on our sample application - Visit the GitHub repository with all the materials of this workshop:
https://github.com/jpetazzo/container.training - The application is in the [dockercoins]( https://github.com/jpetazzo/container.training/tree/master/dockercoins) subdirectory - Let's look at the general layout of the source code: there is a Compose file [docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) ... ... and 4 other services, each in its own directory: - `rng` = web service generating random bytes - `hasher` = web service computing hash of POSTed data - `worker` = background process using `rng` and `hasher` - `webui` = web interface to watch progress .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- class: extra-details ## Compose file format version *Particularly relevant if you have used Compose before...* - Compose 1.6 introduced support for a new Compose file format (aka "v2") - Services are no longer at the top level, but under a `services` section - There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer) - Containers are placed on a dedicated network, making links unnecessary - There are other minor differences, but upgrade is easy and straightforward .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- ## What's this application? -- - It is a DockerCoin miner! .emoji[π°π³π¦π’] -- - No, you can't buy coffee with DockerCoins -- - How DockerCoins works: - `worker` asks to `rng` to generate a few random bytes - `worker` feeds these bytes into `hasher` - and repeat forever! - every second, `worker` updates `redis` to indicate how many loops were done - `webui` queries `redis`, and computes and exposes "hashing speed" in your browser .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- ## Our application at work - On the left-hand side, the "rainbow strip" shows the container names - On the right-hand side, we see the output of our containers - We can see the `worker` service making requests to `rng` and `hasher` - For `rng` and `hasher`, we see HTTP access logs .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- ## Connecting to the web UI - "Logs are exciting and fun!" (No-one, ever) - The `webui` container exposes a web dashboard; let's view it .exercise[ - With a web browser, connect to `node1` on port 8000 - Remember: the `nodeX` aliases are valid only on the nodes themselves - In your browser, you need to enter the IP address of your node ] A drawing area should show up, and after a few seconds, a blue graph will appear. .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- class: self-paced, extra-details ## If the graph doesn't load If you just see a `Page not found` error, it might be because your Docker Engine is running on a different machine. This can be the case if: - you are using the Docker Toolbox - you are using a VM (local or remote) created with Docker Machine - you are controlling a remote Docker Engine When you run DockerCoins in development mode, the web UI static files are mapped to the container using a volume. Alas, volumes can only work on a local environment, or when using Docker4Mac or Docker4Windows. How to fix this? 
Stop the app with `^C`, edit `docker-compose.yml` (in the `dockercoins` directory), comment out the `volumes` section, and try again. .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- class: extra-details ## Why does the speed seem irregular? - It *looks like* the speed is approximately 4 hashes/second - Or more precisely: 4 hashes/second, with regular dips down to zero - Why? -- class: extra-details - The app actually has a constant, steady speed: 3.33 hashes/second
(which corresponds to 1 hash every 0.3 seconds, for *reasons*) - Yes, and? .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- class: extra-details ## The reason why this graph is *not awesome* - The worker doesn't update the counter after every loop, but up to once per second - The speed is computed by the browser, checking the counter about once per second - Between two consecutive updates, the counter will increase either by 4, or by 0 - The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc. - What can we conclude from this? -- class: extra-details - "I'm clearly incapable of writing good frontend code!" π β JΓ©rΓ΄me .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- ## Stopping the application - If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app - The Docker Engine will send a `TERM` signal to the containers - If the containers do not exit in a timely manner, the Engine sends a `KILL` signal .exercise[ - Stop the application by hitting `^C` ] -- Some containers exit immediately, others take longer. The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time! .debug[[common/sampleapp.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/sampleapp.md)] --- ## Clean up - Before moving on, let's remove those containers .exercise[ - Tell Compose to remove everything: ```bash docker-compose down ``` ] .debug[[common/composedown.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/composedown.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-kubernetes-concepts class: title Kubernetes concepts .nav[ [Previous section](#toc-our-sample-application) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-first-contact-with-kubectl) ] .debug[(automatically generated title slide)] --- # Kubernetes concepts - Kubernetes is a container management system - It runs and manages containerized applications on a cluster -- - What does that really mean? .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Basic things we can ask Kubernetes to do -- - Start 5 containers using image `atseashop/api:v1.3` -- - Place an internal load balancer in front of these containers -- - Start 10 containers using image `atseashop/webfront:v1.3` -- - Place a public load balancer in front of these containers -- - It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers -- - New release! Replace my containers with the new image `atseashop/webfront:v1.4` -- - Keep processing requests during the upgrade; update my containers one at a time .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Other things that Kubernetes can do for us - Basic autoscaling - Blue/green deployment, canary deployment - Long running services, but also batch (one-off) jobs - Overcommit our cluster and *evict* low-priority jobs - Run services with *stateful* data (databases etc.) 
- Fine-grained access control defining *what* can be done by *whom* on *which* resources - Integrating third party services (*service catalog*) - Automating complex tasks (*operators*) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Kubernetes architecture .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- class: pic ![haha only kidding](images/k8s-arch1.png) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Kubernetes architecture - Ha ha ha ha - OK, I was trying to scare you, it's much simpler than that β€οΈ .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- class: pic ![that one is more like the real thing](images/k8s-arch2.png) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Credits - The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI (Courtesy of [Yongbok Kim](https://www.yongbok.net/blog/)) - The second one is a simplified representation of a Kubernetes cluster (Courtesy of [Imesh Gunaratne](https://medium.com/containermind/a-reference-architecture-for-deploying-wso2-middleware-on-kubernetes-d4dee7601e8e)) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Kubernetes architecture: the nodes - The nodes executing our containers run a collection of services: - a container Engine (typically Docker) - kubelet (the "node agent") - kube-proxy (a necessary but not sufficient network component) - Nodes were formerly called "minions" (You might see that word in older articles or documentation) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Kubernetes architecture: the control plane - The Kubernetes logic (its "brains") is a collection of services: - the API server (our point of entry to everything!) - core services like the scheduler and controller manager - `etcd` (a highly available key/value store; the "database" of Kubernetes) - Together, these services form the control plane of our cluster - The control plane is also called the "master" .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Running the control plane on special nodes - It is common to reserve a dedicated node for the control plane (Except for single-node development clusters, like when using minikube) - This node is then called a "master" (Yes, this is ambiguous: is the "master" a node, or the whole control plane?) 
- Normal applications are restricted from running on this node (By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)) - When high availability is required, each service of the control plane must be resilient - The control plane is then replicated on multiple nodes (This is sometimes called a "multi-master" setup) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Running the control plane outside containers - The services of the control plane can run in or out of containers - For instance: since `etcd` is a critical service, some people deploy it directly on a dedicated cluster (without containers) (This is illustrated on the first "super complicated" schema) - In some hosted Kubernetes offerings (e.g. GKE), the control plane is invisible (We only "see" a Kubernetes API endpoint) - In that case, there is no "master node" *For this reason, it is more accurate to say "control plane" rather than "master".* .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Default container runtime - By default, Kubernetes uses the Docker Engine to run containers - We could also use `rkt` ("Rocket") from CoreOS - Or leverage other pluggable runtimes through the *Container Runtime Interface* (like CRI-O, or containerd) .footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)] .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Kubernetes resources - The Kubernetes API defines a lot of objects called *resources* - These resources are organized by type, or `Kind` (in the API) - A few common resource types are: - node (a machine β physical or virtual β in our cluster) - pod (group of containers running together on a node) - IP addresses are associated with *pods*, not with individual containers - service (stable network endpoint to connect to one or multiple containers) - namespace (more-or-less isolated group of things) - secret (bundle of sensitive data to be passed to a container) And much more! (We can see the full list by running `kubectl get`) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- class: pic ![Node, pod, container](images/k8s-arch3-thanks-weave.png) .debug[[kube/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/concepts-k8s.md)] --- ## Declarative vs imperative in Kubernetes - Virtually everything we create in Kubernetes is created from a *spec* - Watch for the `spec` fields in the YAML files later! - The *spec* describes *how we want the thing to be* - Kubernetes will *reconcile* the current state with the spec
(technically, this is done by a number of *controllers*) - When we want to change some resource, we update the *spec* - Kubernetes will then *converge* that resource .debug[[kube/declarative.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/declarative.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-first-contact-with-kubectl class: title First contact with `kubectl` .nav[ [Previous section](#toc-kubernetes-concepts) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- class: extra-details # First contact with `kubectl` - `kubectl` is (almost) the only tool we'll need to talk to Kubernetes - It is a rich CLI tool around the Kubernetes API (Everything you can do with `kubectl`, you can do directly with the API) - On our machines, there is a `~/.kube/config` file with: - the Kubernetes API address - the path to our TLS certificates used to authenticate - You can also use the `--kubeconfig` flag to pass a config file - Or directly `--server`, `--user`, etc. - `kubectl` can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"... .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## `kubectl get` - Let's look at our `Node` resources with `kubectl get`! .exercise[ - Look at the composition of our cluster: ```bash kubectl get node ``` - These commands are equivalent: ```bash kubectl get no kubectl get node kubectl get nodes ``` ] .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## Obtaining machine-readable output - `kubectl get` can output JSON, YAML, or be directly formatted .exercise[ - Give us more info about the nodes: ```bash kubectl get nodes -o wide ``` - Let's have some YAML: ```bash kubectl get no -o yaml ``` See that `kind: List` at the end? It's the type of our result! ] .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## (Ab)using `kubectl` and `jq` - It's super easy to build custom reports .exercise[ - Show the capacity of all our nodes as a stream of JSON objects: ```bash kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity" ``` ] .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- class: extra-details ## What's available? - `kubectl` has pretty good introspection facilities - We can list all available resource types by running `kubectl get` - We can view details about a resource with: ```bash kubectl describe type/name kubectl describe type name ``` - We can view the definition for a resource type with: ```bash kubectl explain type ``` Each time, `type` can be singular, plural, or abbreviated type name. .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services kubectl get svc ``` ] -- There is already one service on our cluster: the Kubernetes API itself. 
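If you are curious about that built-in service, you can inspect it (an optional check; the ClusterIP you will see depends on your cluster):
```bash
# Show the service that exposes the Kubernetes API inside the cluster
kubectl describe svc kubernetes

# Same information as YAML (look for spec.clusterIP and spec.ports)
kubectl get svc kubernetes -o yaml
```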
.debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## ClusterIP services - A `ClusterIP` service is internal, available from the cluster only - This is useful for introspection from within containers .exercise[ - Try to connect to the API: ```bash curl -k https://`10.96.0.1` ``` - `-k` is used to skip certificate verification - Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc` ] -- The error that we see is expected: the Kubernetes API requires authentication. .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## Listing running containers - Containers are manipulated through *pods* - A pod is a group of containers: - running together (on the same node) - sharing resources (RAM, CPU; but also network, volumes) .exercise[ - List pods on our cluster: ```bash kubectl get pods ``` ] -- *These are not the pods you're looking for.* But where are they?!? .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## Namespaces - Namespaces allow us to segregate resources .exercise[ - List the namespaces on our cluster with one of these commands: ```bash kubectl get namespaces kubectl get namespace kubectl get ns ``` ] -- *You know what ... This `kube-system` thing looks suspicious.* .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## Accessing namespaces - By default, `kubectl` uses the `default` namespace - We can switch to a different namespace with the `-n` option .exercise[ - List the pods in the `kube-system` namespace: ```bash kubectl -n kube-system get pods ``` ] -- *Ding ding ding ding ding!* The `kube-system` namespace is used for the control plane. .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## What are all these control plane pods? - `etcd` is our etcd server - `kube-apiserver` is the API server - `kube-controller-manager` and `kube-scheduler` are other master components - `kube-dns` is an additional component (not mandatory but super useful, so it's there) - `kube-proxy` is the (per-node) component managing port mappings and such - `weave` is the (per-node) component managing the network overlay - the `READY` column indicates the number of containers in each pod - the pods with a name ending with `-node1` are the master components
(they have been specifically "pinned" to the master node) .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- ## What about `kube-public`? .exercise[ - List the pods in the `kube-public` namespace: ```bash kubectl -n kube-public get pods ``` ] -- - Maybe it doesn't have pods, but what secrets is `kube-public` keeping? -- .exercise[ - List the secrets in the `kube-public` namespace: ```bash kubectl -n kube-public get secrets ``` ] -- - `kube-public` is created by kubeadm & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters) .debug[[kube/kubectlget.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlget.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-setting-up-kubernetes class: title Setting up Kubernetes .nav[ [Previous section](#toc-first-contact-with-kubectl) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-running-our-first-containers-on-kubernetes) ] .debug[(automatically generated title slide)] --- # Setting up Kubernetes - How did we set up these Kubernetes clusters that we're using? -- - We used `kubeadm` on freshly installed VM instances running Ubuntu 16.04 LTS 1. Install Docker 2. Install Kubernetes packages 3. Run `kubeadm init` on the master node 4. Set up Weave (the overlay network)
(that step is just one `kubectl apply` command; discussed later) 5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`) 6. Copy the configuration file generated by `kubeadm init` - Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details .debug[[kube/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/setup-k8s.md)] --- ## Other deployment options - If you are on Azure: [AKS](https://azure.microsoft.com/services/container-service/) - If you are on Google Cloud: [GKE](https://cloud.google.com/kubernetes-engine/) - If you are on AWS: [EKS](https://aws.amazon.com/eks/) or [kops](https://github.com/kubernetes/kops) - On a local machine: [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/), [kubespawn](https://github.com/kinvolk/kube-spawn), [Docker4Mac](https://docs.docker.com/docker-for-mac/kubernetes/) - If you want something customizable: [kubicorn](https://github.com/kubicorn/kubicorn) Probably the closest to a multi-cloud/hybrid solution so far, but in development .debug[[kube/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/setup-k8s.md)] --- ## Even more deployment options - If you like Ansible: [kubespray](https://github.com/kubernetes-incubator/kubespray) - If you like Terraform: [typhoon](https://github.com/poseidon/typhoon/) - You can also learn how to install every component manually, with the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) *Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.* - There are also many commercial options available! - For a longer list, check the Kubernetes documentation:
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/pick-right-solution/) to set up Kubernetes. .debug[[kube/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/setup-k8s.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)] --- name: toc-running-our-first-containers-on-kubernetes class: title Running our first containers on Kubernetes .nav[ [Previous section](#toc-setting-up-kubernetes) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-exposing-containers) ] .debug[(automatically generated title slide)] --- # Running our first containers on Kubernetes - First things first: we cannot run a container -- - We are going to run a pod, and in that pod there will be a single container -- - In that container in the pod, we are going to run a simple `ping` command - Then we are going to start additional copies of the pod .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- ## Starting a simple pod with `kubectl run` - We need to specify at least a *name* and the image we want to use .exercise[ - Let's ping `1.1.1.1`, Cloudflare's [public DNS resolver](https://blog.cloudflare.com/announcing-1111/): ```bash kubectl run pingpong --image alpine ping 1.1.1.1 ``` ] -- OK, what just happened? .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- ## Behind the scenes of `kubectl run` - Let's look at the resources that were created by `kubectl run` .exercise[ - List most resource types: ```bash kubectl get all ``` ] -- We should see the following things: - `deployment.apps/pingpong` (the *deployment* that we just created) - `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment) - `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set) Note: as of 1.10.1, resource types are displayed in more detail. .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- ## What are these different things? 
- A *deployment* is a high-level construct - allows scaling, rolling updates, rollbacks - multiple deployments can be used together to implement a [canary deployment](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) - delegates pods management to *replica sets* - A *replica set* is a low-level construct - makes sure that a given number of identical pods are running - allows scaling - rarely used directly - A *replication controller* is the (deprecated) predecessor of a replica set .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- class: extra-details ## Our `pingpong` deployment - `kubectl run` created a *deployment*, `deployment.apps/pingpong` ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/pingpong 1 1 1 1 10m ``` - That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx` ``` NAME DESIRED CURRENT READY AGE replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m ``` - That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy` ``` NAME READY STATUS RESTARTS AGE pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m ``` - We'll see later how these folks play together for: - scaling, high availability, rolling updates .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- ## Viewing container output - Let's use the `kubectl logs` command - We will pass either a *pod name*, or a *type/name* (E.g. if we specify a deployment or replica set, it will get the first pod in it) - Unless specified otherwise, it will only show logs of the first container in the pod (Good thing there's only one in ours!) .exercise[ - View the result of our `ping` command: ```bash kubectl logs deploy/pingpong ``` ] .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- class: extra-details ## Streaming logs in real time - Just like `docker logs`, `kubectl logs` supports convenient options: - `-f`/`--follow` to stream logs in real time (Γ la `tail -f`) - `--tail` to indicate how many lines you want to see (from the end) - `--since` to get logs only after a given timestamp .exercise[ - View the latest logs of our `ping` command: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` ] .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- ## Scaling our application - We can create additional copies of our container (I mean, our pod) with `kubectl scale` .exercise[ - Scale our `pingpong` deployment: ```bash kubectl scale deploy/pingpong --replicas 8 ``` ] Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`? We could! But the *deployment* would notice it right away, and scale back to the initial level. .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- ## Resilience - The *deployment* `pingpong` watches its *replica set* - The *replica set* ensures that the right number of *pods* are running - What happens if pods disappear? 
.exercise[ - In a separate window, list pods, and keep watching them: ```bash kubectl get pods -w ``` - Destroy a pod: ```bash kubectl delete pod pingpong-xxxxxxxxxx-yyyyy ``` ] .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- ## What if we wanted something different? - What if we wanted to start a "one-shot" container that *doesn't* get restarted? - We could use `kubectl run --restart=OnFailure` or `kubectl run --restart=Never` - These commands would create *jobs* or *pods* instead of *deployments* - Under the hood, `kubectl run` invokes "generators" to create resource descriptions - We could also write these resource descriptions ourselves (typically in YAML),
and create them on the cluster with `kubectl apply -f` (discussed later) - With `kubectl run --schedule=...`, we can also create *cronjobs* .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- clas: extra-details ## Viewing logs of multiple pods - When we specify a deployment name, only one single pod's logs are shown - We can view the logs of multiple pods by specifying a *selector* - A selector is a logic expression using *labels* - Conveniently, when you `kubectl run somename`, the associated objects have a `run=somename` label .exercise[ - View the last line of log from all pods with the `run=pingpong` label: ```bash kubectl logs -l run=pingpong --tail 1 ``` ] Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple containers. .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- class: extra-details ## Aren't we flooding 1.1.1.1? - If you're wondering this, good question! - Don't worry, though: *APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.* (Source: https://blog.cloudflare.com/announcing-1111/) - It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC! .debug[[kube/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlrun.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-exposing-containers class: title Exposing containers .nav[ [Previous section](#toc-running-our-first-containers-on-kubernetes) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-deploying-a-self-hosted-registry) ] .debug[(automatically generated title slide)] --- # Exposing containers - `kubectl expose` creates a *service* for existing pods - A *service* is a stable address for a pod (or a bunch of pods) - If we want to connect to our pod(s), we need to create a *service* - Once a service is created, `kube-dns` will allow us to resolve it by name (i.e. after creating service `hello`, the name `hello` will resolve to something) - There are different types of services, detailed on the following slides: `ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName` .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- ## Basic service types - `ClusterIP` (default type) - a virtual IP address is allocated for the service (in an internal, private range) - this IP address is reachable only from within the cluster (nodes and pods) - our code can connect to the service using the original port number - `NodePort` - a port is allocated for the service (by default, in the 30000-32768 range) - that port is made available *on all our nodes* and anybody can connect to it - our code must be changed to connect to that new port number These service types are always available. Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables` rules. 
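If you want to see those rules for yourself, here is a quick peek you can take on any node (a sketch; it assumes kube-proxy runs in its default `iptables` mode, and chain names can vary between versions):
```bash
# Count the NAT rules managed by kube-proxy
sudo iptables-save -t nat | grep -c '^-A KUBE-'

# Show a few of the per-service dispatch rules (one KUBE-SVC-* chain per service port)
sudo iptables-save -t nat | grep 'KUBE-SVC' | head
```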
.debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- ## More service types - `LoadBalancer` - an external load balancer is allocated for the service - the load balancer is configured accordingly
(e.g.: a `NodePort` service is created, and the load balancer sends traffic to that port) - `ExternalName` - the DNS entry managed by `kube-dns` will just be a `CNAME` to a provided record - no port, no IP address, no nothing else is allocated The `LoadBalancer` type is currently only available on AWS, Azure, and GCE. .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- ## Running containers with open ports - Since `ping` doesn't have anything to connect to, we'll have to run something else .exercise[ - Start a bunch of ElasticSearch containers: ```bash kubectl run elastic --image=elasticsearch:2 --replicas=7 ``` - Watch them being started: ```bash kubectl get pods -w ``` ] The `-w` option "watches" events happening on the specified resources. Note: please DO NOT call the service `search`. It would collide with the TLD. .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- ## Exposing our deployment - We'll create a default `ClusterIP` service .exercise[ - Expose the ElasticSearch HTTP API port: ```bash kubectl expose deploy/elastic --port 9200 ``` - Look up which IP address was allocated: ```bash kubectl get svc ``` ] .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- ## Services are layer 4 constructs - You can assign IP addresses to services, but they are still *layer 4* (i.e. a service is not an IP address; it's an IP address + protocol + port) - This is caused by the current implementation of `kube-proxy` (it relies on mechanisms that don't support layer 3) - As a result: you *have to* indicate the port number for your service - Running services with arbitrary port (or port ranges) requires hacks (e.g. host networking mode) .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- ## Testing our service - We will now send a few HTTP requests to our ElasticSearch pods .exercise[ - Let's obtain the IP address that was allocated for our service, *programmatically:* ```bash IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}') ``` - Send a few requests: ```bash curl http://$IP:9200/ ``` ] -- We may see `curl: (7) Failed to connect to _IP_ port 9200: Connection refused`. This is normal while the service starts up. -- Once it's running, our requests are load balanced across multiple pods. .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- class: extra-details ## If we don't need a load balancer - Sometimes, we want to access our scaled services directly: - if we want to save a tiny little bit of latency (typically less than 1ms) - if we need to connect over arbitrary ports (instead of a few fixed ones) - if we need to communicate over another protocol than UDP or TCP - if we want to decide how to balance the requests client-side - ... 
- In that case, we can use a "headless service" .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- class: extra-details ## Headless services - A headless service is obtained by setting the `clusterIP` field to `None` (Either with `--cluster-ip=None`, or by providing a custom YAML) - As a result, the service doesn't have a virtual IP address - Since there is no virtual IP address, there is no load balancer either - `kube-dns` will return the pods' IP addresses as multiple `A` records - This gives us an easy way to discover all the replicas for a deployment .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- class: extra-details ## Services and endpoints - A service has a number of "endpoints" - Each endpoint is a host + port where the service is available - The endpoints are maintained and updated automatically by Kubernetes .exercise[ - Check the endpoints that Kubernetes has associated with our `elastic` service: ```bash kubectl describe service elastic ``` ] In the output, there will be a line starting with `Endpoints:`. That line will list a bunch of addresses in `host:port` format. .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- class: extra-details ## Viewing endpoint details - When we have many endpoints, our display commands truncate the list ```bash kubectl get endpoints ``` - If we want to see the full list, we can use one of the following commands: ```bash kubectl describe endpoints elastic kubectl get endpoints elastic -o yaml ``` - These commands will show us a list of IP addresses - These IP addresses should match the addresses of the corresponding pods: ```bash kubectl get pods -l run=elastic -o wide ``` .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- class: extra-details ## `endpoints` not `endpoint` - `endpoints` is the only resource that cannot be singular ```bash $ kubectl get endpoint error: the server doesn't have a resource type "endpoint" ``` - This is because the type itself is plural (unlike every other resource) - There is no `endpoint` object: `type Endpoints struct` - The type doesn't represent a single endpoint, but a list of endpoints .debug[[kube/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlexpose.md)] --- class: title Our app on Kube .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: extra-details ## What's on the menu? In this part, we will: - **build** images for our app, - **ship** these images with a registry, - **run** deployments using these images, - expose these deployments so they can communicate with each other, - expose the web UI so we can access it from outside. 
.debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## The plan - Build on our control node (`node1`) - Tag images so that they are named `$REGISTRY/servicename` - Upload them to a registry - Create deployments using the images - Expose (with a ClusterIP) the services that need to communicate - Expose (with a NodePort) the WebUI .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Which registry do we want to use? - We could use the Docker Hub - Or a service offered by our cloud provider (ACR, GCR, ECR...) - Or we could just self-host that registry *We'll self-host the registry because it's the most generic solution for this workshop.* .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Using the open source registry - We need to run a `registry:2` container
(make sure you specify tag `:2` to run the new version!) - It will store images and layers on the local filesystem
(but you can add a config file to use S3, Swift, etc.) - Docker *requires* TLS when communicating with the registry - except for registries on `127.0.0.0/8` (i.e. `localhost`) - or when using the Engine flag `--insecure-registry` - Our strategy: publish the registry container on a NodePort,
so that it's available through `127.0.0.1:xxxxx` on each node .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)] --- name: toc-deploying-a-self-hosted-registry class: title Deploying a self-hosted registry .nav[ [Previous section](#toc-exposing-containers) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-exposing-services-internally) ] .debug[(automatically generated title slide)] --- # Deploying a self-hosted registry - We will deploy a registry container, and expose it with a NodePort .exercise[ - Create the registry service: ```bash kubectl run registry --image=registry:2 ``` - Expose it on a NodePort: ```bash kubectl expose deploy/registry --port=5000 --type=NodePort ``` ] .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Connecting to our registry - We need to find out which port has been allocated .exercise[ - View the service details: ```bash kubectl describe svc/registry ``` - Get the port number programmatically: ```bash NODEPORT=$(kubectl get svc/registry -o json | jq .spec.ports[0].nodePort) REGISTRY=127.0.0.1:$NODEPORT ``` ] .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Testing our registry - A convenient Docker registry API route to remember is `/v2/_catalog` .exercise[ - View the repositories currently held in our registry: ```bash curl $REGISTRY/v2/_catalog ``` ] -- We should see: ```json {"repositories":[]} ``` .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: extra-details ## Testing our local registry - We can retag a small image, and push it to the registry .exercise[ - Make sure we have the busybox image, and retag it: ```bash docker pull busybox docker tag busybox $REGISTRY/busybox ``` - Push it: ```bash docker push $REGISTRY/busybox ``` ] .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: extra-details ## Checking again what's on our local registry - Let's use the same endpoint as before .exercise[ - Ensure that our busybox image is now in the local registry: ```bash curl $REGISTRY/v2/_catalog ``` ] The curl command should now output: ```json {"repositories":["busybox"]} ``` .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Building and pushing our images - We are going to use a convenient feature of Docker Compose .exercise[ - Go to the `stacks` directory: ```bash cd ~/container.training/stacks ``` - Build and push the images: ```bash export REGISTRY export TAG=v0.1 docker-compose -f dockercoins.yml build docker-compose -f dockercoins.yml push ``` ] Let's have a look at the `dockercoins.yml` file while this is building and pushing. .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ```yaml version: "3" services: rng: build: dockercoins/rng image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest} deploy: mode: global ... redis: image: redis ... 
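  # Note: ${REGISTRY-127.0.0.1:5000} and ${TAG-latest} use Compose variable
  # interpolation with a fallback: if REGISTRY (or TAG) is not set in the
  # environment, everything after the dash is used as the default value.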
worker: build: dockercoins/worker image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest} ... deploy: replicas: 10 ``` .warning[Just in case you were wondering ... Docker "services" are not Kubernetes "services".] .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: extra-details ## Avoiding the `latest` tag .warning[Make sure that you've set the `TAG` variable properly!] - If you don't, the tag will default to `latest` - The problem with `latest`: nobody knows what it points to! - the latest commit in the repo? - the latest commit in some branch? (Which one?) - the latest tag? - some random version pushed by a random team member? - If you keep pushing the `latest` tag, how do you roll back? - Image tags should be meaningful, i.e. correspond to code branches, tags, or hashes .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Deploying all the things - We can now deploy our code (as well as a redis instance) .exercise[ - Deploy `redis`: ```bash kubectl run redis --image=redis ``` - Deploy everything else: ```bash for SERVICE in hasher rng webui worker; do kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG done ``` ] .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Is this working? - After waiting for the deployment to complete, let's look at the logs! (Hint: use `kubectl get deploy -w` to watch deployment events) .exercise[ - Look at some logs: ```bash kubectl logs deploy/rng kubectl logs deploy/worker ``` ] -- π€ `rng` is fine ... But not `worker`. -- π‘ Oh right! We forgot to `expose`. .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-exposing-services-internally class: title Exposing services internally .nav[ [Previous section](#toc-deploying-a-self-hosted-registry) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-exposing-services-for-external-access) ] .debug[(automatically generated title slide)] --- # Exposing services internally - Three deployments need to be reachable by others: `hasher`, `redis`, `rng` - `worker` doesn't need to be exposed - `webui` will be dealt with later .exercise[ - Expose each deployment, specifying the right port: ```bash kubectl expose deployment redis --port 6379 kubectl expose deployment rng --port 80 kubectl expose deployment hasher --port 80 ``` ] .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Is this working yet? - The `worker` has an infinite loop, that retries 10 seconds after an error .exercise[ - Stream the worker's logs: ```bash kubectl logs deploy/worker --follow ``` (Give it about 10 seconds to recover) ] -- We should now see the `worker`, well, working happily. 
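If the worker still seems unhappy, a useful sanity check is to confirm that each service exists and has endpoints behind it (an optional troubleshooting step):
```bash
# Each service should list at least one IP:port under ENDPOINTS
kubectl get endpoints redis rng hasher

# If one of them is missing, check that the corresponding deployment was exposed
kubectl get svc
```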
.debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)] --- name: toc-exposing-services-for-external-access class: title Exposing services for external access .nav[ [Previous section](#toc-exposing-services-internally) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-the-kubernetes-dashboard) ] .debug[(automatically generated title slide)] --- # Exposing services for external access - Now we would like to access the Web UI - We will expose it with a `NodePort` (just like we did for the registry) .exercise[ - Create a `NodePort` service for the Web UI: ```bash kubectl expose deploy/webui --type=NodePort --port=80 ``` - Check the port that was allocated: ```bash kubectl get svc ``` ] .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- ## Accessing the web UI - We can now connect to *any node*, on the allocated node port, to view the web UI .exercise[ - Open the web UI in your browser (http://node-ip-address:3xxxx/) ] -- *Alright, we're back to where we started, when we were running on a single node!* .debug[[kube/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/ourapponkube.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/lots-of-containers.jpg)] --- name: toc-the-kubernetes-dashboard class: title The Kubernetes dashboard .nav[ [Previous section](#toc-exposing-services-for-external-access) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-security-implications-of-kubectl-apply) ] .debug[(automatically generated title slide)] --- # The Kubernetes dashboard - Kubernetes resources can also be viewed with a web dashboard - We are going to deploy that dashboard with *three commands:* 1) actually *run* the dashboard 2) bypass SSL for the dashboard 3) bypass authentication for the dashboard -- .footnote[.warning[Yes, this will open our cluster to all kinds of shenanigans. Don't do this at home.]] .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## 1) Running the dashboard - We need to create a *deployment* and a *service* for the dashboard - But also a *secret*, a *service account*, a *role* and a *role binding* - All these things can be defined in a YAML file and created with `kubectl apply -f` .exercise[ - Create all the dashboard resources, with the following command: ```bash kubectl apply -f https://goo.gl/Qamqab ``` ] The goo.gl URL expands to:
.small[https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml] .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## 2) Bypassing SSL for the dashboard - The Kubernetes dashboard uses HTTPS, but we don't have a certificate - Recent versions of Chrome (63 and later) and Edge will refuse to connect (You won't even get the option to ignore a security warning!) - We could (and should!) get a certificate, e.g. with [Let's Encrypt](https://letsencrypt.org/) - ... But for convenience, for this workshop, we'll forward HTTP to HTTPS .warning[Do not do this at home, or even worse, at work!] .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## Running the SSL unwrapper - We are going to run [`socat`](http://www.dest-unreach.org/socat/doc/socat.html), telling it to accept TCP connections and relay them over SSL - Then we will expose that `socat` instance with a `NodePort` service - For convenience, these steps are neatly encapsulated into another YAML file .exercise[ - Apply the convenient YAML file, and defeat SSL protection: ```bash kubectl apply -f https://goo.gl/tA7GLz ``` ] The goo.gl URL expands to:
.small[.small[https://gist.githubusercontent.com/jpetazzo/c53a28b5b7fdae88bc3c5f0945552c04/raw/da13ef1bdd38cc0e90b7a4074be8d6a0215e1a65/socat.yaml]] .warning[All our dashboard traffic is now clear-text, including passwords!] .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## Connecting to the dashboard .exercise[ - Check which port the dashboard is on: ```bash kubectl -n kube-system get svc socat ``` ] You'll want the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] The dashboard will then ask you which authentication you want to use. .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## Dashboard authentication - We have three authentication options at this point: - token (associated with a role that has appropriate permissions) - kubeconfig (e.g. using the `~/.kube/config` file from `node1`) - "skip" (use the dashboard "service account") - Let's use "skip": we get a bunch of warnings and don't see much .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## 3) Bypass authentication for the dashboard - The dashboard documentation [explains how to do this](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges) - We just need to load another YAML file! .exercise[ - Grant admin privileges to the dashboard so we can see our resources: ```bash kubectl apply -f https://goo.gl/CHsLTA ``` - Reload the dashboard and enjoy! ] -- .warning[By the way, we just added a backdoor to our Kubernetes cluster!] .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## Running the Kubernetes dashboard securely - The steps that we just showed you are *for educational purposes only!* - If you do that on your production cluster, people [can and will abuse it](https://blog.redlock.io/cryptojacking-tesla) - For an in-depth discussion about securing the dashboard,
check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca) .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/plastic-containers.JPG)] --- name: toc-security-implications-of-kubectl-apply class: title Security implications of `kubectl apply` .nav[ [Previous section](#toc-the-kubernetes-dashboard) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-scaling-a-deployment) ] .debug[(automatically generated title slide)] --- # Security implications of `kubectl apply` - When we do `kubectl apply -f
`, we create arbitrary resources - Resources can be evil; imagine a `deployment` that ... -- - starts bitcoin miners on the whole cluster -- - hides in a non-default namespace -- - bind-mounts our nodes' filesystem -- - inserts SSH keys in the root account (on the node) -- - encrypts our data and ransoms it -- - β οΈβ οΈβ οΈ .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- ## `kubectl apply` is the new `curl | sh` - `curl | sh` is convenient - It's safe if you use HTTPS URLs from trusted sources -- - `kubectl apply -f` is convenient - It's safe if you use HTTPS URLs from trusted sources - Example: the official setup instructions for most pod networks -- - It introduces new failure modes (like if you try to apply yaml from a link that's no longer valid) .debug[[kube/dashboard.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/dashboard.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg)] --- name: toc-scaling-a-deployment class: title Scaling a deployment .nav[ [Previous section](#toc-security-implications-of-kubectl-apply) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-daemon-sets) ] .debug[(automatically generated title slide)] --- # Scaling a deployment - We will start with an easy one: the `worker` deployment .exercise[ - Open two new terminals to check what's going on with pods and deployments: ```bash kubectl get pods -w kubectl get deployments -w ``` - Now, create more `worker` replicas: ```bash kubectl scale deploy/worker --replicas=10 ``` ] After a few seconds, the graph in the web UI should show up.
(And peak at 10 hashes/second, just like when we were running on a single one.) .debug[[kube/kubectlscale.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/kubectlscale.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-daemon-sets class: title Daemon sets .nav[ [Previous section](#toc-scaling-a-deployment) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-rolling-updates) ] .debug[(automatically generated title slide)] --- # Daemon sets - We want to scale `rng` in a way that is different from how we scaled `worker` - We want one (and exactly one) instance of `rng` per node - What if we just scale up `deploy/rng` to the number of nodes? - nothing guarantees that the `rng` containers will be distributed evenly - if we add nodes later, they will not automatically run a copy of `rng` - if we remove (or reboot) a node, one `rng` container will restart elsewhere - Instead of a `deployment`, we will use a `daemonset` .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Daemon sets in practice - Daemon sets are great for cluster-wide, per-node processes: - `kube-proxy` - `weave` (our overlay network) - monitoring agents - hardware management tools (e.g. SCSI/FC HBA agents) - etc. - They can also be restricted to run [only on some nodes](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes) .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Creating a daemon set - Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets -- - More precisely: it doesn't have a subcommand to create a daemon set -- - But any kind of resource can always be created by providing a YAML description: ```bash kubectl apply -f foo.yaml ``` -- - How do we create the YAML file for our daemon set? -- - option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset) -- - option 2: `vi` our way out of it .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Creating the YAML file for our daemon set - Let's start with the YAML file for the current `rng` resource .exercise[ - Dump the `rng` resource in YAML: ```bash kubectl get deploy/rng -o yaml --export >rng.yml ``` - Edit `rng.yml` ] Note: `--export` will remove "cluster-specific" information, i.e.: - namespace (so that the resource is not tied to a specific namespace) - status and creation timestamp (useless when creating a new resource) - resourceVersion and uid (these would cause... *interesting* problems) .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## "Casting" a resource to another - What if we just changed the `kind` field? (It can't be that easy, right?) .exercise[ - Change `kind: Deployment` to `kind: DaemonSet` - Save, quit - Try to create our new resource: ```bash kubectl apply -f rng.yml ``` ] -- We all knew this couldn't be that easy, right! 
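For the record, that `kind` change can also be scripted instead of being done interactively; here is a minimal sketch, assuming GNU `sed` and the `rng.yml` file exported above:

```bash
# Optional shortcut (not part of the original exercise):
# flip the resource kind in place in the exported manifest.
sed -i 's/^kind: Deployment$/kind: DaemonSet/' rng.yml
```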
.debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Understanding the problem - The core of the error is: ``` error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ... ``` -- - *Obviously,* it doesn't make sense to specify a number of replicas for a daemon set -- - Workaround: fix the YAML - remove the `replicas` field - remove the `strategy` field (which defines the rollout mechanism for a deployment) - remove the `status: {}` line at the end -- - Or, we could also ... .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Use the `--force`, Luke - We could also tell Kubernetes to ignore these errors and try anyway - The `--force` flag's actual name is `--validate=false` .exercise[ - Try to load our YAML file and ignore errors: ```bash kubectl apply -f rng.yml --validate=false ``` ] -- π©β¨π -- Wait ... Now, can it be *that* easy? .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Checking what we've done - Did we transform our `deployment` into a `daemonset`? .exercise[ - Look at the resources that we have now: ```bash kubectl get all ``` ] -- We have two resources called `rng`: - the *deployment* that was existing before - the *daemon set* that we just created We also have one too many pods.
(The pod corresponding to the *deployment* still exists.) .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## `deploy/rng` and `ds/rng` - You can have different resource types with the same name (i.e. a *deployment* and a *daemon set* both named `rng`) - We still have the old `rng` *deployment* ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/rng 1 1 1 1 18m ``` - But now we have the new `rng` *daemon set* as well ``` NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/rng 2 2 2 2 2 <none>
9s ``` .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## What are all these pods doing? - Let's check the logs of all these `rng` pods - All these pods have a `run=rng` label: - the first pod, because that's what `kubectl run` does - the other ones (in the daemon set), because we *copied the spec from the first one* - Therefore, we can query everybody's logs using that `run=rng` selector .exercise[ - Check the logs of all the pods having a label `run=rng`: ```bash kubectl logs -l run=rng --tail 1 ``` ] -- It appears that *all the pods* are serving requests at the moment. .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## The magic of selectors - The `rng` *service* is load balancing requests to a set of pods - This set of pods is defined as "pods having the label `run=rng`" .exercise[ - Check the *selector* in the `rng` service definition: ```bash kubectl describe service rng ``` ] When we created additional pods with this label, they were automatically detected by `svc/rng` and added as *endpoints* to the associated load balancer. .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Removing the first pod from the load balancer - What would happen if we removed that pod, with `kubectl delete pod ...`? -- The `replicaset` would re-create it immediately. -- - What would happen if we removed the `run=rng` label from that pod? -- The `replicaset` would re-create it immediately. -- ... Because what matters to the `replicaset` is the number of pods *matching that selector.* -- - But but but ... Don't we have more than one pod with `run=rng` now? -- The answer lies in the exact selector used by the `replicaset` ... .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Deep dive into selectors - Let's look at the selectors for the `rng` *deployment* and the associated *replica set* .exercise[ - Show detailed information about the `rng` deployment: ```bash kubectl describe deploy rng ``` - Show detailed information about the `rng` replica:
(The second command doesn't require you to get the exact name of the replica set) ```bash kubectl describe rs rng-yyyy kubectl describe rs -l run=rng ``` ] -- The replica set selector also has a `pod-template-hash`, unlike the pods in our daemon set. .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- ## Deleting a deployment .exercise[ - Remove the `rng` deployment: ```bash kubectl delete deployment rng ``` ] -- - The pod that was created by the deployment is now being terminated: ``` $ kubectl get pods NAME READY STATUS RESTARTS AGE rng-54f57d4d49-vgz9h 1/1 Terminating 0 4m rng-vplmj 1/1 Running 0 11m rng-xbpvg 1/1 Running 0 11m [...] ``` Ding, dong, the deployment is dead! And the daemon set lives on. .debug[[kube/daemonset.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/daemonset.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg)] --- name: toc-rolling-updates class: title Rolling updates .nav[ [Previous section](#toc-daemon-sets) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-managing-stacks-with-helm) ] .debug[(automatically generated title slide)] --- # Rolling updates - By default (without rolling updates), when a scaled resource is updated: - new pods are created - old pods are terminated - ... all at the same time - if something goes wrong, ¯\\\_(ツ)\_/¯ .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Rolling updates - With rolling updates, when a resource is updated, it happens progressively - Two parameters determine the pace of the rollout: `maxUnavailable` and `maxSurge` - They can be specified as an absolute number of pods, or as a percentage of the `replicas` count - At any given time ... - there will always be at least `replicas`-`maxUnavailable` pods available - there will never be more than `replicas`+`maxSurge` pods in total - there will therefore be up to `maxUnavailable`+`maxSurge` pods being updated - We have the possibility to roll back to the previous version
(if the update fails or is unsatisfactory in any way) .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Checking current rollout parameters - Recall how we build custom reports with `kubectl` and `jq`: .exercise[ - Show the rollout plan for our deployments: ```bash kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Rolling updates in practice - As of Kubernetes 1.8, we can do rolling updates with: `deployments`, `daemonsets`, `statefulsets` - Editing one of these resources will automatically result in a rolling update - Rolling updates can be monitored with the `kubectl rollout` subcommand .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Building a new version of the `worker` service .exercise[ - Go to the `stack` directory: ```bash cd ~/container.training/stacks ``` - Edit `dockercoins/worker/worker.py`, update the `sleep` line to sleep 1 second - Build a new tag and push it to the registry: ```bash #export REGISTRY=localhost:3xxxx export TAG=v0.2 docker-compose -f dockercoins.yml build docker-compose -f dockercoins.yml push ``` ] .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Rolling out the new `worker` service .exercise[ - Let's monitor what's going on by opening a few terminals, and run: ```bash kubectl get pods -w kubectl get replicasets -w kubectl get deployments -w ``` - Update `worker` either with `kubectl edit`, or by running: ```bash kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` ] -- That rollout should be pretty quick. What shows in the web UI? .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Give it some time - At first, it looks like nothing is happening (the graph remains at the same level) - According to `kubectl get deploy -w`, the `deployment` was updated really quickly - But `kubectl get pods -w` tells a different story - The old `pods` are still here, and they stay in `Terminating` state for a while - Eventually, they are terminated; and then the graph decreases significantly - This delay is due to the fact that our worker doesn't handle signals - Kubernetes sends a "polite" shutdown request to the worker, which ignores it - After a grace period, Kubernetes gets impatient and kills the container (The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed) .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Rolling out something invalid - What happens if we make a mistake? .exercise[ - Update `worker` by specifying a non-existent image: ```bash export TAG=v0.3 kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` - Check what's going on: ```bash kubectl rollout status deploy worker ``` ] -- Our rollout is stuck. However, the app is not dead (just 10% slower). .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## What's going on with our rollout? - Why is our app 10% slower? 
- Because `MaxUnavailable=1`, so the rollout terminated 1 replica out of 10 available - Okay, but why do we see 2 new replicas being rolled out? - Because `MaxSurge=1`, so in addition to replacing the terminated one, the rollout is also starting one more .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- class: extra-details ## The nitty-gritty details - We start with 10 pods running for the `worker` deployment - Current settings: MaxUnavailable=1 and MaxSurge=1 - When we start the rollout: - one replica is taken down (as per MaxUnavailable=1) - another is created (with the new version) to replace it - another is created (with the new version) per MaxSurge=1 - Now we have 9 replicas up and running, and 2 being deployed - Our rollout is stuck at this point! .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Checking the dashboard during the bad rollout .exercise[ - Check which port the dashboard is on: ```bash kubectl -n kube-system get svc socat ``` ] Note the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] -- - We have failures in Deployments, Pods, and Replica Sets .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Recovering from a bad rollout - We could push some `v0.3` image (the pod retry logic will eventually catch it and the rollout will proceed) - Or we could invoke a manual rollback .exercise[ - Cancel the deployment and wait for the dust to settle down: ```bash kubectl rollout undo deploy worker kubectl rollout status deploy worker ``` ] .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Changing rollout parameters - We want to: - revert to `v0.1` - be conservative on availability (always have desired number of available workers) - be aggressive on rollout speed (update more than one pod at a time) - give some time to our workers to "warm up" before starting more The corresponding changes can be expressed in the following YAML snippet: .small[ ```yaml spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 3 minReadySeconds: 10 ``` ] .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- ## Applying changes through a YAML patch - We could use `kubectl edit deployment worker` - But we could also use `kubectl patch` with the exact YAML shown before .exercise[ .small[ - Apply all our changes and wait for them to take effect: ```bash kubectl patch deployment worker -p " spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 3 minReadySeconds: 10 " kubectl rollout status deployment worker kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] ] .debug[[kube/rollout.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/rollout.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/wall-of-containers.jpeg)] --- name: toc-managing-stacks-with-helm class: title Managing stacks with Helm .nav[ [Previous section](#toc-rolling-updates) | [Back to table of contents](#toc-chapter-3) | [Next 
section](#toc-namespaces) ] .debug[(automatically generated title slide)] --- # Managing stacks with Helm - We created our first resources with `kubectl run`, `kubectl expose` ... - We have also created resources by loading YAML files with `kubectl apply -f` - For larger stacks, managing thousands of lines of YAML is unreasonable - These YAML bundles need to be customized with variable parameters (E.g.: number of replicas, image version to use ...) - It would be nice to have an organized, versioned collection of bundles - It would be nice to be able to upgrade/rollback these bundles carefully - [Helm](https://helm.sh/) is an open source project offering all these things! .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## Helm concepts - `helm` is a CLI tool - `tiller` is its companion server-side component - A "chart" is an archive containing templatized YAML bundles - Charts are versioned - Charts can be stored on private or public repositories .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## Installing Helm - We need to install the `helm` CLI; then use it to deploy `tiller` .exercise[ - Install the `helm` CLI: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash ``` - Deploy `tiller`: ```bash helm init ``` - Add the `helm` completion: ```bash . <(helm completion $(basename $SHELL)) ``` ] .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## Fix account permissions - Helm permission model requires us to tweak permissions - In a more realistic deployment, you might create per-user or per-team service accounts, roles, and role bindings .exercise[ - Grant `cluster-admin` role to `kube-system:default` service account: ```bash kubectl create clusterrolebinding add-on-cluster-admin \ --clusterrole=cluster-admin --serviceaccount=kube-system:default ``` ] (Defining the exact roles and permissions on your cluster requires a deeper knowledge of Kubernetes' RBAC model. The command above is fine for personal and development clusters.) .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## View available charts - A public repo is pre-configured when installing Helm - We can view available charts with `helm search` (and an optional keyword) .exercise[ - View all available charts: ```bash helm search ``` - View charts related to `prometheus`: ```bash helm search prometheus ``` ] .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## Install a chart - Most charts use `LoadBalancer` service types by default - Most charts require persistent volumes to store data - We need to relax these requirements a bit .exercise[ - Install the Prometheus metrics collector on our cluster: ```bash helm install stable/prometheus \ --set server.service.type=NodePort \ --set server.persistentVolume.enabled=false ``` ] Where do these `--set` options come from? 
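(Optional check, not part of the original exercise: verify that the release came up, and find the node port allocated to the Prometheus server. The `grep` pattern below is an assumption based on the chart's usual service naming.)

```bash
# List Helm releases, then look for the Prometheus server's NodePort.
# The service name typically contains "prometheus-server", but the exact
# name depends on the generated release name and the chart's templates.
helm list
kubectl get svc | grep prometheus
```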
.debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## Inspecting a chart - `helm inspect` shows details about a chart (including available options) .exercise[ - See the metadata and all available options for `stable/prometheus`: ```bash helm inspect stable/prometheus ``` ] The chart's metadata includes an URL to the project's home page. (Sometimes it conveniently points to the documentation for the chart.) .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## Creating a chart - We are going to show a way to create a *very simplified* chart - In a real chart, *lots of things* would be templatized (Resource names, service types, number of replicas...) .exercise[ - Create a sample chart: ```bash helm create dockercoins ``` - Move away the sample templates and create an empty template directory: ```bash mv dockercoins/templates dockercoins/default-templates mkdir dockercoins/templates ``` ] .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- ## Exporting the YAML for our application - The following section assumes that DockerCoins is currently running .exercise[ - Create one YAML file for each resource that we need: .small[ ```bash while read kind name; do kubectl get -o yaml --export $kind $name > dockercoins/templates/$name-$kind.yaml done <
`Error: release loitering-otter failed: services "hasher" already exists` - To avoid naming conflicts, we will deploy the application in another *namespace* .debug[[kube/helm.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/helm.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-namespaces class: title Namespaces .nav[ [Previous section](#toc-managing-stacks-with-helm) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Namespaces - We cannot have two resources with the same name (Or can we...?) -- - We cannot have two resources *of the same type* with the same name (But it's OK to have a `rng` service, a `rng` deployment, and a `rng` daemon set!) -- - We cannot have two resources of the same type with the same name *in the same namespace* (But it's OK to have e.g. two `rng` services in different namespaces!) -- - In other words: **the tuple *(type, name, namespace)* needs to be unique** (In the resource YAML, the type is called `Kind`) .debug[[kube/namespaces.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/namespaces.md)] --- ## Pre-existing namespaces - If we deploy a cluster with `kubeadm`, we have three namespaces: - `default` (for our applications) - `kube-system` (for the control plane) - `kube-public` (contains one secret used for cluster discovery) - If we deploy differently, we may have different namespaces .debug[[kube/namespaces.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/namespaces.md)] --- ## Creating namespaces - Creating a namespace is done with the `kubectl create namespace` command: ```bash kubectl create namespace blue ``` - We can also get fancy and use a very minimal YAML snippet, e.g.: ```bash kubectl apply -f- <
(`redis.blue.svc.cluster.local` will be a `CNAME` record) - `ClusterIP` services with explicit `Endpoints`
(instead of letting Kubernetes generate the endpoints from a selector) - Ambassador services
(application-level proxies that can provide credentials injection and more) .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- ## Stateful services (second take) - If you really want to host stateful services on Kubernetes, you can look into: - volumes (to carry persistent data) - storage plugins - persistent volume claims (to ask for specific volume characteristics) - stateful sets (pods that are *not* ephemeral) .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- ## HTTP traffic handling - *Services* are layer 4 constructs - HTTP is a layer 7 protocol - It is handled by *ingresses* (a different resource kind) - *Ingresses* allow: - virtual host routing - session stickiness - URI mapping - and much more! - Check out e.g. [TrΓ¦fik](https://docs.traefik.io/user-guide/kubernetes/) .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- ## Logging and metrics - Logging is delegated to the container engine - Metrics are typically handled with [Prometheus](https://prometheus.io/) ([Heapster](https://github.com/kubernetes/heapster) is a popular add-on) .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- ## Managing the configuration of our applications - Two constructs are particularly useful: secrets and config maps - They allow to expose arbitrary information to our containers - **Avoid** storing configuration in container images (There are some exceptions to that rule, but it's generally a Bad Idea) - **Never** store sensitive information in container images (It's the container equivalent of the password on a post-it note on your screen) .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- ## Managing stack deployments - The best deployment tool will vary, depending on: - the size and complexity of your stack(s) - how often you change it (i.e. add/remove components) - the size and skills of your team - A few examples: - shell scripts invoking `kubectl` - YAML resources descriptions committed to a repo - [Helm](https://github.com/kubernetes/helm) (~package manager) - [Spinnaker](https://www.spinnaker.io/) (Netflix' CD platform) - [Brigade](https://brigade.sh/) (event-driven scripting; no YAML) .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- ## Cluster federation -- ![Star Trek Federation](images/startrek-federation.jpg) -- Sorry Star Trek fans, this is not the federation you're looking for! -- (If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!) .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- ## Cluster federation - Kubernetes master operation relies on etcd - etcd uses the [Raft](https://raft.github.io/) protocol - Raft recommends low latency between nodes - What if our cluster spreads to multiple regions? 
-- - Break it down in local clusters - Regroup them in a *cluster federation* - Synchronize resources across clusters - Discover resources across clusters .debug[[kube/whatsnext.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/whatsnext.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous section](#toc-namespaces) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Links and resources - [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups - [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/) - [Cloud Developer Advocates](https://developer.microsoft.com/advocates/) - [Local meetups](https://www.meetup.com/) .footnote[These slides (and future updates) are on β http://container.training/] .debug[[kube/links.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/kube/links.md)] --- class: title, self-paced Thank you! .debug[[common/thankyou.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/thankyou.md)] --- class: title, in-person That's all, folks!
Questions? ![end](images/end.jpg) .debug[[common/thankyou.md](https://github.com/jpetazzo/container.training/tree/devopsdaysmsp2018/slides/common/thankyou.md)]