Yusuf Hadiwinata Sutandar – Linux Geek, Open Source Enthusiast, Security Hobbyist
Journey to DevOps Automation with Docker, Kubernetes and OpenShift
Agenda
● Container & Docker Introduction
● Installing Docker Container & Management
● Managing Docker with Portainer
● Managing Docker with Openshift Origin
● Build Simple Docker Application
● Discussion
Traditional Development
In the world of business, a "silo" is any system within an organization that is closed off to other systems. Silos tend to construct themselves inadvertently, as different managers take on various priorities and responsibilities within the organization. Over time, departments gradually focus more and more inward and pay less attention to what everyone else is doing.
Traditional Problem
Credit: Amit Kumar
How DevOps Breaks Down Silos
DevOps helps improve collaboration between Developers and IT Operations – Deployment Automation & Self Services
On the other hand, DevOps culture is anti-silo by its very nature, while still retaining the subject matter experts that are so crucial to the software development process. DevOps requires developers, QA testers, operations engineers, and product managers to work closely together from the very beginning, which means that any existing silos will have to disappear quickly.
Credit: Amit Kumar
DevOps Tools & Software
Brief Intro to Container & Docker
History of Container
Docker Introduction
The Problem
Cargo Transport 1960s
Solution?
Solution? Intermodal Shipping Container
The Solution
90% of all cargo is now shipped in a standard container
Order of magnitude reduction in cost and time to load and unload ships, trains, trucks
The Evolution
The App Deployment Problem
The App Deployment Solution
Container Technology
One way of looking at containers is as improved chroot jails. Containers allow an operating system (OS) process (or a process tree) to run isolated from other processes hosted by the same OS. Through the use of Linux kernel namespaces, it is possible to restrict a process's view of:
– Other processes (including the pid number space)
– File systems
– User and group IDs
– IPC channels
– Devices
– Networking
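A quick way to see a kernel namespace in action without Docker is util-linux's unshare tool. A minimal sketch, assuming root and a reasonably recent util-linux:
[root@docker-host ~]# unshare --pid --fork --mount-proc bash   # new PID + mount namespace
[root@docker-host ~]# ps ax                                    # shows only bash (as PID 1) and ps
The --fork flag makes the new shell, rather than unshare itself, PID 1 in the namespace, and --mount-proc remounts /proc so that process listings reflect only the new namespace.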
Container Technology
Other Linux kernel features complement the process isolation provided by kernel namespaces:
– Cgroups limit the use of CPU, RAM, virtual memory, and I/O bandwidth, among other hardware and kernel resources.
– Capabilities assign partial administrative faculties; for example, enabling a process to open a low network port (<1024) without allowing it to alter routing tables or change file ownership.
– SELinux enforces mandatory access policies even if the code inside the container finds a way to break its isolation.
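These knobs surface directly on the docker command line. A sketch of resource limits and capabilities (flag availability depends on the Docker version; --cpus, for instance, needs Docker 1.13 or later):
[root@docker-host ~]# docker run -d --memory=256m --cpus=0.5 nginx                  # cgroup limits: 256 MB RAM, half a CPU
[root@docker-host ~]# docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx  # drop all capabilities, keep only the one needed to bind port 80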
Container Technology
Images & Containers
● Docker "Image"
• Unified packaging format
• Like "war" or "tar.gz"
• For any type of application
• Portable
● Docker "Container"
• Runtime
• Isolation
docker pull <image>
(Diagram: a host running a minimal OS and the Docker Engine pulls Image A and Image B from a Docker Registry and runs them as containers App A, App B, and App C; an image stacks layers such as RHEL, JDK, JBoss EAP, Libs A/B, and the application itself.)
Major Advantages of Containers
● Low hardware footprint – Uses OS internal features to create an isolated environment where resources are managed using OS facilities such as namespaces and cgroups. This approach minimizes the amount of CPU and memory overhead compared to a virtual machine hypervisor. Running an application in a VM also isolates it from its environment, but the VM requires a heavy layer of services to provide the isolation that containers deliver with a low hardware footprint.
● Environment isolation – Works in a closed environment where changes made to the host OS or other applications do not affect the container. Because the libraries needed by a container are self-contained, the application can run without disruption. For example, each application can exist in its own container with its own set of libraries. An update made to one container does not affect other containers, which might not work with the update.
Major Advantages of Containers cont..
● Quick deployment – Deploys any container quickly because there is no need for a full OS install or restart. Normally, to support the isolation, a new OS installation is required on a physical host or VM, and any simple update might require a full OS restart. A container only requires a restart without stopping any services on the host OS.
● Multiple environment deployment – In a traditional deployment scenario using a single host, any environment differences might potentially break the application. Using containers, however, the differences and incompatibilities are mitigated because the same container image is used.
● Reusability – The same container can be reused by multiple applications without the need to set up a full OS. A database container can be used to create a set of tables for a software application, and it can be quickly destroyed and recreated without the need to run a set of housekeeping tasks. Additionally, the same database container can be used by the production environment to deploy an application.
Why are Containers Lightweight?
Containers as lightweight VMs
It's not Virtualization :)
Isolation, not Virtualization
• Kernel Namespaces
• Process
• Network
• IPC
• Mount
• User
• Resource Limits
• Cgroups
• Security
• SELinux
(Diagram: App1, App2, and App3 running side by side on a shared Linux kernel.)
Container Solution
Virtual Machines and Containers complement each other
Containers
● Containers run as isolated processes in user space of the host OS
● They share the kernel with other containers (container processes)
● Containers include the application and all of its dependencies
● Not tied to specific infrastructure
Virtual Machines
● Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system
● Each guest OS has its own kernel and user space
Container Problem
Containers before Docker
● No standardized exchange format. (No, a rootfs tarball is not a format!)
● Containers are hard to use for developers. (Where's the equivalent of docker run debian?)
● No re-usable components, APIs, tools. (At best: VM abstractions, e.g. libvirt.)
Analogy:
● Shipping containers are not just steel boxes.
● They are steel boxes that are a standard size, with the same hooks and holes.
Docker Solution
Containers after Docker
● Standardize the container format, because containers were not portable.
● Make containers easy to use for developers.
● Emphasis on re-usable components, APIs, ecosystem of standard tools.
● Improvement over ad-hoc, in-house, specific tools.
Docker Solution
Docker Architecture
Docker is one of the container implementations available for deployment and supported by companies such as Red Hat in their Red Hat Enterprise Linux Atomic Host platform. Docker Hub provides a large set of containers developed by the community.
Docker Solution
Docker Core Elements
● Images
Images are read-only templates that contain a runtime environment, including application libraries and applications. Images are used to create containers. Images can be created, updated, or downloaded for immediate consumption.
● Registries
Registries store images for public or private use. The well-known public registry is Docker Hub, and it stores multiple images developed by the community, but private registries can be created to support internal image development at a company's discretion. This course runs on a private registry in a virtual machine where all the required images are stored for faster consumption.
● Containers
Containers are segregated user-space environments for running applications isolated from other applications sharing the same host OS.
Basics of a Docker System?
Ecosystem Support
● DevOps Tools
● Integrations with Chef, Puppet, Jenkins, Travis, Salt, Ansible +++
● Orchestration tools
● Mesos, Heat, ++
● Shipyard & others purpose-built for Docker
● Applications
● 1000's of Dockerized applications available at index.docker.io
Ecosystem Support Continue..
● Operating systems
● Virtually any distribution with a 2.6.32+ kernel
● Red Hat/Docker collaboration to make it work across RHEL 6.4+, Fedora, and other members of the family (2.6.32+)
● CoreOS – small core OS purpose-built with Docker
● OpenStack
● Docker integration into NOVA (& compatibility with Glance, Horizon, etc.) accepted for the Havana release
● Private PaaS
● OpenShift, Solum (Rackspace, OpenStack), others TBA
● Public PaaS
● Deis, Voxoz, Cocaine (Yandex), Baidu PaaS
● Public IaaS
● Native support in Rackspace, Digital Ocean, +++
● AMI (or equivalent) available for AWS & others
What IT Says about Docker:
Developers say: Build Once, Run Anywhere
Operators say: Configure Once, Run Anything
Why Developers Care
Developers say: Build Once, Run Anywhere
A clean, safe, hygienic, portable runtime environment for your app. No worries about missing dependencies, packages, and other pain points during subsequent deployments. Run each app in its own isolated container, so you can run various versions of libraries and other dependencies for each app without worrying. Automate testing, integration, packaging... anything you can script. Reduce/eliminate concerns about compatibility on different platforms, either your own or your customers'. Cheap, zero-penalty containers to deploy services. A VM without the overhead of a VM. Instant replay and reset of image snapshots.
Why Administrators Care
Operators say: Configure Once, Run Anything
Make the entire lifecycle more efficient, consistent, and repeatable. Increase the quality of code produced by developers. Eliminate inconsistencies between development, test, production, and customer environments. Support segregation of duties. Significantly improve the speed and reliability of continuous deployment and continuous integration systems. Because the containers are so lightweight, address significant performance, cost, deployment, and portability issues normally associated with VMs.
Managing Docker Containers
● Installing Docker
● create/start/stop/remove containers
● inspect containers
● interact, commit new images
Lab: Installing Docker - Requirements
● Requirements:
● CentOS 7.3 Minimal Install
● Latest updates with "yum -y update"
● 16 GB OS disk
● 16 GB unpartitioned disk for Docker storage
● 2 GB RAM and 2 vCPUs
● Bridge network for connecting to the Internet and accessing from the host (laptop)
● Snapshot the VM
Lab: Installing Docker - PreSetup
● Set the hostname in the /etc/hosts file and on the server:
[root@docker-host ~]# cat /etc/hosts | grep docker-host
192.168.0.6 docker-host
[root@docker-host ~]# hostnamectl set-hostname docker-host
[root@docker-host ~]# hostname
docker-host
● Install the needed packages and the latest Docker:
[root@docker-host ~]# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
[root@docker-host ~]# curl -fsSL https://get.docker.com/ | sh
● Edit the /etc/sysconfig/docker file and add --insecure-registry 172.30.0.0/16 to the OPTIONS parameter (only when installing docker from the repo):
[root@docker-host ~]# sed -i '/OPTIONS=.*/cOPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' /etc/sysconfig/docker
[root@docker-host ~]# systemctl is-active docker ; systemctl enable docker ; systemctl start docker
● Pull your 1st container from the internet:
[root@docker-host ~]# docker container run -ti ubuntu bash
Lab: Installing Docker - PreSetup
● When you use the latest Docker version, configure this instead:
[root@docker-host ~]# vim /usr/lib/systemd/system/docker.service
Edit ExecStart=/usr/bin/dockerd to ExecStart=/usr/bin/dockerd --insecure-registry 172.30.0.0/16 --insecure-registry 192.168.1.0/24
[root@docker-host ~]# systemctl daemon-reload ; systemctl restart docker
● Optional configuration for a private registry:
[root@docker-host ~]# vim /etc/docker/daemon.json
Add { "insecure-registries" : ["docker-registry:5000"] }
[root@docker-host ~]# systemctl restart docker
● Pull your 1st container from the private registry:
[root@docker-host ~]# docker container run -ti docker-registry:5000/ubuntu bash
Installing Docker – Setting Docker Storage
● Set up a volume group and LVM thin pool on a user-specified block device (this also applies to docker version 1.12):
[root@docker-host ~]# echo DEVS=/dev/sdb >> /etc/sysconfig/docker-storage-setup
[root@docker-host ~]# systemctl restart docker
● By default, docker-storage-setup looks for free space in the root volume group and creates an LVM thin pool. Hence you can leave free space during system installation in the root volume group, and starting docker will automatically set up a thin pool and use it.
● LVM thin pool in a user-specified volume group:
[root@docker-host ~]# echo VG=docker-vg >> /etc/sysconfig/docker-storage-setup
[root@docker-host ~]# systemctl restart docker
● https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/managing_storage_with_docker_formatted_containers
Lab: 1st Time Playing with Docker
[root@docker-host ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@docker-host ~]# docker run -t centos bash
Unable to find image 'centos:latest' locally
Trying to pull repository docker.io/library/centos ...
sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9: Pulling from docker.io/library/centos
d5e46245fe40: Pull complete
Digest: sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9
Status: Downloaded newer image for docker.io/centos:latest
[root@docker-host ~]# docker images --all
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/centos latest 3bee3060bfc8 46 hours ago 192.5 MB
[root@docker-host ~]# docker exec -it 60fec4b9a9bf bash
[root@60fec4b9a9bf /]# ps ax
PID TTY STAT TIME COMMAND
1 ? Ss+ 0:00 bash
29 ? Ss 0:00 bash
42 ? R+ 0:00 ps ax
Docker Management Commands
docker create image [command] – create the container
docker run image [command] – = create + start
docker start container... – start the container
docker stop container... – gracefully stop the container
docker kill container... – kill (SIGKILL) the container
docker restart container... – = stop + start
docker pause container... – suspend the container
docker unpause container... – resume the container
docker rm [-f] container... – destroy the container
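Put together, a typical lifecycle using the commands above (the container name demo is illustrative):
[root@docker-host ~]# docker create --name demo nginx:latest   # create, but do not start
[root@docker-host ~]# docker start demo                        # start in the background
[root@docker-host ~]# docker stop demo                         # graceful stop (SIGTERM, then SIGKILL after a timeout)
[root@docker-host ~]# docker restart demo                      # = stop + start
[root@docker-host ~]# docker rm -f demo                        # destroy; -f kills it first if still running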
docker run - Run a container
docker run [ options ] image [ arg0 arg1 ... ]
● creates a container and starts it
● the container filesystem is initialised from the image image
● arg0..argN is the command run inside the container (as PID 1)
[root@docker-host ~]# docker run centos /bin/hostname
f0d0720bd373
[root@docker-host ~]# docker run centos date +%H:%M:%S
17:10:13
[root@docker-host ~]# docker run centos true ; echo $?
0
[root@docker-host ~]# docker run centos false ; echo $?
1
docker run - Foreground mode vs. Detached mode
● Foreground mode is the default
● stdout and stderr are redirected to the terminal
● docker run propagates the exit code of the main process
● With -d, the container is run in detached mode:
● displays the ID of the container
● returns immediately
[root@docker-host ~]# docker run centos date
Wed Jun 7 15:35:48 UTC 2017
[root@docker-host ~]# docker run -d centos date
48b66ad5fc30c468ca0b28ff83dfec0d6e001a2f53e3d168bca754ea76d2bc04
[root@docker-host ~]# docker logs 48b66a
Tue Jan 20 17:32:16 UTC 2015
docker run - interactive mode
● By default containers are non-interactive
● stdin is closed immediately
● terminal signals are not forwarded
$ docker run -t debian bash
root@6fecc2e8ab22:/# date
^C
$
● With -i the container runs interactively
● stdin is usable
● terminal signals are forwarded to the container
$ docker run -t -i debian bash
root@78ff08f46cdb:/# date
Tue Jan 20 17:52:01 UTC 2015
root@78ff08f46cdb:/# ^C
root@78ff08f46cdb:/#
docker run - Set the container name
● The --name option assigns a name to the container (by default a random name of the form adjective_noun is generated)
[root@docker-host ~]# docker run -d -t debian
da005df0d3aca345323e373e1239216434c05d01699b048c5ff277dd691ad535
[root@docker-host ~]# docker run -d -t --name blahblah debian
0bd3cb464ff68eaf9fc43f0241911eb207fefd9c1341a0850e8804b7445ccd21
[root@docker-host ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED .. NAMES
0bd3cb464ff6 debian:7.5 "/bin/bash" 6 seconds ago blahblah
da005df0d3ac debian:7.5 "/bin/bash" About a minute ago focused_raman
[root@docker-host ~]# docker stop blahblah focused_raman
● Note: Names must be unique
[root@docker-host ~]# docker run --name blahblah debian true
2015/01/20 19:31:21 Error response from daemon: Conflict, The name blahblah is already assigned to 0bd3cb464ff6. You have to delete (or rename) that container to be able to assign blahblah to a container again.
Inspecting the container
docker ps – list running containers
docker ps -a – list all containers
docker logs [-f] container – show the container output (stdout+stderr)
docker top container [ps options] – list the processes running inside the container
docker diff container – show the differences with the image (modified files)
docker inspect container... – show low-level infos (in JSON format)
Interacting with the container
docker attach container – attach to a running container (stdin/stdout/stderr)
docker cp container:path hostpath|- – copy files from the container
docker cp hostpath|- container:path – copy files into the container
docker export container – export the content of the container (tar archive)
docker exec container args... – run a command in an existing container (useful for debugging)
docker wait container – wait until the container terminates and return the exit code
docker commit container image – commit a new docker image (snapshot of the container)
Lab: Docker commit example
[root@docker-host ~]# docker run --name my-container -t -i debian
root@3b397d383faf:/# cat >> /etc/bash.bashrc <<EOF
> echo 'hello!'
> EOF
root@3b397d383faf:/# exit
[root@docker-host ~]# docker start --attach my-container
my-container
hello!
root@3b397d383faf:/# exit
[root@docker-host ~]# docker diff my-container
C /etc
C /etc/bash.bashrc
A /.bash_history
C /tmp
[root@docker-host ~]# docker commit my-container hello
a57e91bc3b0f5f72641f19cab85a7f3f860a1e5e9629439007c39fd76f37c5dd
[root@docker-host ~]# docker stop my-container; docker rm my-container
my-container
[root@docker-host ~]# docker run --rm -t -i hello
hello!
root@386ed3934b44:/# exit
[root@docker-host ~]# docker images --all
REPOSITORY TAG IMAGE ID CREATED SIZE
debian latest a25c1eed1c6f Less than a second ago 123MB
hello latest 52442a43a78b 59 seconds ago 123MB
centos latest 3bee3060bfc8 46 hours ago 193MB
ubuntu latest 7b9b13f7b9c0 4 days ago 118MB
Inputs/Outputs
● External volumes (persistent data)
● Devices & Links
● Publishing ports (NAT)
docker run - Mount external volumes
docker run -v /hostpath:/containerpath[:ro] ...
● -v mounts the location /hostpath from the host filesystem at the location /containerpath inside the container
● With the ":ro" suffix, the mount is read-only
● Purposes:
● store persistent data outside the container
● provide inputs: data, config files, ... (read-only mode)
● inter-process communication (unix sockets, named pipes)
Lab: Mount examples
● Persistent data
[root@docker-host ~]# docker run --rm -t -i -v /tmp/persistent:/persistent debian
root@0aeedfeb7bf9:/# echo "blahblah" >/persistent/foo
root@0aeedfeb7bf9:/# exit
[root@docker-host ~]# cat /tmp/persistent/foo
blahblah
[root@docker-host ~]# docker run --rm -t -i -v /tmp/persistent:/persistent debian
root@6c8ed008c041:/# cat /persistent/foo
blahblah
● Inputs (read-only volume)
[root@docker-host ~]# mkdir /tmp/inputs
[root@docker-host ~]# echo hello > /tmp/inputs/bar
[root@docker-host ~]# docker run --rm -t -i -v /tmp/inputs:/inputs:ro debian
root@05168a0eb322:/# cat /inputs/bar
hello
root@05168a0eb322:/# touch /inputs/foo
touch: cannot touch `/inputs/foo': Read-only file system
Lab: Mount examples continue ...
● Named pipe
[root@docker-host ~]# mkfifo /tmp/fifo
[root@docker-host ~]# docker run -d -v /tmp/fifo:/fifo debian sh -c 'echo blah blah> /fifo'
ff0e44c25e10d516ce947eae9168060ee25c2a906f62d63d9c26a154b6415939
[root@docker-host ~]# cat /tmp/fifo
blah blah
● Unix socket
[root@docker-host ~]# docker run --rm -t -i -v /dev/log:/dev/log debian
root@56ec518d3d4e:/# logger blah blah blah
root@56ec518d3d4e:/# exit
[root@docker-host ~]# cat /var/log/messages | grep blah
Oct 17 15:39:39 docker-host root: blah blah blah
docker run - inter-container links (legacy links)
● Containers cannot be assigned a static IP address (by design) → service discovery is a must
● Docker "links" are the most basic way to discover a service:
docker run --link ctr:alias ...
● → container ctr will be known as alias inside the new container
[root@docker-host ~]# docker run --name my-server debian sh -c 'hostname -i && sleep 500' &
172.17.0.4
[root@docker-host ~]# docker run --rm -t -i --link my-server:srv debian
root@d752180421cc:/# ping srv
PING srv (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: icmp_seq=0 ttl=64 time=0.195 ms
Link Example
User-defined networks (since v1.9.0)
● by default, new containers are connected to the main network (named "bridge", 172.17.0.0/16)
● the user can create additional networks: docker network create NETWORK
● newly created containers are connected to one network: docker run -t --name test-network --net=NETWORK debian
● a container may be dynamically attached to/detached from any network:
docker inspect test-network | grep -i NETWORK
docker network list
docker network connect NETWORK test-network
docker network connect bridge test-network
docker network disconnect NETWORK test-network
● networks are isolated from each other; communication is possible by attaching a container to multiple networks
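A short worked example (the network and container names are illustrative). User-defined networks come with an embedded DNS server, so containers on the same network resolve each other by name:
[root@docker-host ~]# docker network create mynet
[root@docker-host ~]# docker run -d --name web --net=mynet nginx
[root@docker-host ~]# docker run --rm --net=mynet busybox ping -c 1 web   # busybox ships a ping applet; "web" resolves via the embedded DNS
[root@docker-host ~]# docker rm -f web ; docker network rm mynet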
docker run - Publish a TCP port
● Containers are deployed in a private network; they are not reachable from the outside (unless a redirection is set up):
docker run -p [ipaddr:]hostport:containerport
docker run -t -p 9000:9000 debian
● → redirects incoming connections to the TCP port hostport of the host to the TCP port containerport of the container
● The listening socket binds to 0.0.0.0 (all interfaces) by default, or to ipaddr if given
Publish example
● bind to all host addresses
[root@docker-host ~]# docker run -d -p 80:80 nginx
52c9105e1520980d49ed00ecf5f0ca694d177d77ac9d003b9c0b840db9a70d62
[root@docker-host ~]# docker inspect 6174f6951f19 | grep IPAddress
[root@docker-host ~]# wget -nv http://localhost/
2016-01-12 18:32:52 URL:http://localhost/ [612/612] -> "index.html" [1]
[root@docker-host ~]# wget -nv http://172.17.0.2/
2016-01-12 18:33:14 URL:http://172.17.0.2/ [612/612] -> "index.html" [1]
● bind to 127.0.0.1
[root@docker-host ~]# docker run -d -p 127.0.0.1:80:80 nginx
4541b43313b51d50c4dc2722e741df6364c5ff50ab81b828456ca55c829e732c
[root@docker-host ~]# wget -nv http://localhost/
2016-01-12 18:37:10 URL:http://localhost/ [612/612] -> "index.html.1" [1]
[root@docker-host ~]# wget http://172.17.0.2/
--2016-01-12 18:38:32-- http://172.17.0.2/
Connecting to 172.17.42.1:80... failed: Connection refused.
Managing docker images
● Docker images
● Image management commands
● Example: images & containers
Docker images
A docker image is a snapshot of the filesystem + some metadata:
● immutable
● copy-on-write storage
● for instantiating containers
● for creating new versions of the image (multiple layers)
● identified by a unique hex ID (hashed from the image content)
● may be tagged with a human-friendly name, e.g.: debian:wheezy, debian:jessie, debian:latest
Image management commands
docker images – list all local images
docker history image – show the image history (list of ancestors)
docker inspect image... – show low-level infos (in JSON format)
docker tag image tag – tag an image
docker commit container image – create an image (from a container)
docker import url|- [tag] – create an image (from a tarball)
docker rmi image... – delete images
Example: images & containers
The following slides walk through a diagram, starting from scratch, showing how image layers and containers relate:
● docker pull img – downloads the image layers (16297412a12c, 0273hn18si91, jk2384jkl102); the top layer is tagged img:latest
● docker run --name ctr2 img – ctr1 and ctr2 now run on top of img:latest
● docker run --name ctr3 img – ctr3 also runs on top of img:latest
● docker rm ctr1 – removes the container; the image layers are untouched
● docker commit ctr2 – snapshots ctr2 into a new, untagged layer (as2889klsy30)
● docker commit ctr2 img:bis – commits ctr2 again into layer 7172ahsk9212, tagged img:bis
● docker run --name ctr4 img – ctr4 runs on img:latest
● docker run --name ctr5 img:bis – ctr5 runs on img:bis
● docker rm ctr2 ctr3 – the committed layers remain
● docker commit ctr4 img – creates layer abcd1234efgh and moves the img:latest tag onto it
● docker run --name ctr6 img – ctr6 runs on the new img:latest
● docker rm ctr4
● docker rm ctr6
● docker rmi img – removes the img:latest tag; layers no longer referenced can be deleted
● docker rmi img:bis – Error: img:bis is referenced by ctr5
● docker rmi -f img:bis – forces removal of the tag; the layer stays while ctr5 uses it
● docker rm ctr5 – the remaining unreferenced layers can now be removed
Image tags
● A docker tag is made of two parts: "REPOSITORY:TAG"
● The TAG part identifies the version of the image. If not provided, the default is ":latest"
[root@docker-host ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
debian latest a25c1eed1c6f Less than a second ago 123MB
hello latest 52442a43a78b 13 minutes ago 123MB
centos latest 3bee3060bfc8 46 hours ago 193MB
ubuntu latest 7b9b13f7b9c0 5 days ago 118MB
nginx latest 958a7ae9e569 7 days ago 109MB
Image transfer commands
Using the registry API:
docker pull repo[:tag]... – pull an image/repo from a registry
docker push repo[:tag]... – push an image/repo to a registry
docker search text – search an image on the official registry
docker login ... – login to a registry
docker logout ... – logout from a registry
Manual transfer:
docker save repo[:tag]... – export an image/repo as a tarball
docker load – load images from a tarball
docker-ssh ... – proposed script to transfer images between two daemons over ssh
Transferring images
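When no registry sits between two hosts, save/load can be piped over ssh. A sketch (hostnames are placeholders):
[root@docker-host ~]# docker save hello:latest | ssh root@other-host docker load
Or with an intermediate tarball:
[root@docker-host ~]# docker save -o hello.tar hello:latest
[root@docker-host ~]# scp hello.tar root@other-host:
[root@docker-host ~]# ssh root@other-host docker load -i hello.tar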
Lab: Image creation from a container
Let's start by running an interactive shell in an ubuntu container:
[root@docker-host]# curl -fsSL https://get.docker.com/ | sh
[root@docker-host]# systemctl start docker
[root@docker-host]# systemctl status docker
[root@docker-host]# systemctl enable docker
[root@docker-host ~]# docker run -ti ubuntu bash
Install the figlet package in this container:
root@880998ce4c0f:/# apt-get update -y ; apt-get install figlet
root@880998ce4c0f:/# exit
Get the ID of this container (do not forget the -a option, as non-running containers are not returned otherwise):
[root@docker-host]# docker ps -a
Run the following command, using the ID retrieved, in order to commit the container and create an image out of it:
[root@docker-host ~]# docker commit 880998ce4c0f
sha256:1a769da2b98b04876844f96594a92bd708ca27ee5a8868d43c0aeb5985671161
Lab: Image creation from a container
Once it has been committed, we can see the newly created image in the list of available images:
[root@docker-host ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 1a769da2b98b 59 seconds ago 158MB
ubuntu latest 7b9b13f7b9c0 6 days ago 118MB
From the previous command, get the ID of the newly created image and tag it so it's named tag-intra:
[root@docker-host ~]# docker image tag 1a769da2b98b tag-intra
[root@docker-host ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
tag-intra latest 1a769da2b98b 3 minutes ago 158MB
ubuntu latest 7b9b13f7b9c0 6 days ago 118MB
As figlet is present in our tag-intra image, the command returns the following output:
[root@docker-host ~]# docker container run tag-intra figlet hello
 _          _ _
| |__   ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | |  __/ | | (_) |
|_| |_|\___|_|_|\___/
Docker builder
● What is the Docker builder
● Dockerfile
● Docker Compose
● Introduction to Kompose
Docker builder
● Docker's builder relies on:
● a DSL describing how to build an image
● a cache for storing previous builds and having quick iterations
● The builder input is a context, i.e. a directory containing:
● a file named Dockerfile which describes how to build the container
● possibly other files to be used during the build
Dockerfile format
● comments start with "#"
● commands fit on a single line (possibly continued with "\")
● the first command must be a FROM (indicates the parent image, or scratch to start from scratch)
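A minimal Dockerfile illustrating all three rules:
# comments start with "#"
FROM debian:latest
# a long command can be continued with a backslash:
RUN apt-get update && \
    apt-get -y install nginx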
Build an image
● build an image from the context located at path and optionally tag it as tag:
docker build [ -t tag ] path
● The command:
1. makes a tarball from the content of path
2. uploads the tarball to the docker daemon, which will:
2.1 execute the content of the Dockerfile, committing an intermediate image after each command
2.2 (if requested) tag the final image as tag
Dockerfile Description
● Each Dockerfile is a script, composed of various commands (instructions) and arguments listed successively to automatically perform actions on a base image in order to create (or form) a new one. They are used for organizing things and greatly help with deployments by simplifying the process start-to-finish.
● Dockerfiles begin with defining an image FROM which the build process starts. Various other methods, commands and arguments (or conditions) follow and, in return, provide a new image which is to be used for creating docker containers.
● They can be used by providing a Dockerfile's content - in various ways - to the docker daemon to build an image (as explained in the "How To Use" section).
Dockerfile Commands (Instructions)
● ADD
● The ADD command gets two arguments: a source and a destination. It basically copies the files from the source on the host into the container's own filesystem at the set destination. If, however, the source is a URL (e.g. http://github.com/user/file/), then the contents of the URL are downloaded and placed at the destination.
● Example:
# Usage: ADD [source directory or URL] [destination directory]
ADD /my_app_folder /my_app_folder
Dockerfile Commands (Instructions)
● CMD
● The command CMD, similarly to RUN, can be used for executing a specific command. However, unlike RUN it is not executed during build, but when a container is instantiated using the image being built. Therefore, it should be considered as an initial, default command that gets executed (i.e. run) with the creation of containers based on the image.
● To clarify: an example for CMD would be running an application upon creation of a container which is already installed using RUN (e.g. RUN apt-get install ...) inside the image. This default application execution command that is set with CMD is replaced by any command which is passed during creation.
● Example:
# Usage: CMD application "argument", "argument", ..
CMD "echo" "Hello docker!"
Dockerfile Commands (Instructions)
● ENTRYPOINT
● ENTRYPOINT sets the concrete default application that is used every time a container is created using the image. For example, if you have installed a specific application inside an image and you will use this image to only run that application, you can state it with ENTRYPOINT, and whenever a container is created from that image your application will be the target.
● If you couple ENTRYPOINT with CMD, you can remove "application" from CMD and just leave "arguments", which will be passed to the ENTRYPOINT.
● Example:
# Usage: ENTRYPOINT application "argument", "argument", ..
# Remember: arguments are optional. They can be provided by CMD
# or during the creation of a container.
ENTRYPOINT echo
# Usage example with CMD:
# Arguments set with CMD can be overridden during *run*
CMD "Hello docker!"
ENTRYPOINT echo
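A worked sketch of the ENTRYPOINT/CMD coupling (the image name echoer is illustrative). With exec-form instructions, CMD supplies default arguments that are appended to ENTRYPOINT, and anything passed on the docker run command line replaces CMD:
FROM debian:latest
ENTRYPOINT ["echo"]
CMD ["Hello docker!"]
Building this with docker build -t echoer . and then running docker run echoer prints "Hello docker!", while docker run echoer "Bye!" overrides CMD and prints "Bye!".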
Dockerfile Commands (Instructions)
● ENV
● The ENV command is used to set environment variables (one or more). These variables consist of "key = value" pairs which can be accessed within the container by scripts and applications alike. This functionality of docker offers an enormous amount of flexibility for running programs.
● Example:
# Usage: ENV key value
ENV SERVER_WORKS 4
● EXPOSE
● The EXPOSE command is used to associate a specified port to enable networking between the running process inside the container and the outside world (i.e. the host).
● Example:
# Usage: EXPOSE [port]
EXPOSE 8080
Dockerfile Commands (Instructions)
● FROM
● The FROM directive is probably the most crucial amongst all others for Dockerfiles. It defines the base image to use to start the build process. It can be any image, including the ones you have created previously. If a FROM image is not found on the host, docker will try to find it (and download it) from the docker image index. It needs to be the first command declared inside a Dockerfile.
● Example:
# Usage: FROM [image name]
FROM ubuntu
Dockerfile Commands (Instructions)
● MAINTAINER
● One of the commands that can be set anywhere in the file, although it would be better if it was declared on top, is MAINTAINER. This non-executing command declares the author, hence setting the author field of the images. It should nonetheless come after FROM.
● Example:
# Usage: MAINTAINER [name]
MAINTAINER authors_name
● RUN
● The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument and runs it to form the image. Unlike CMD, it actually is used to build the image (forming another layer on top of the previous one, which is committed).
● Example:
# Usage: RUN [command]
RUN aptitude install -y riak
Dockerfile Commands (Instructions)
● USER
● The USER directive is used to set the UID (or username) which is to run the container based on the image being built.
● Example:
# Usage: USER [UID]
USER 751
● VOLUME
● The VOLUME command is used to enable access from your container to a directory on the host machine (i.e. mounting it).
● Example:
# Usage: VOLUME ["/dir_1", "/dir_2" ..]
VOLUME ["/my_files"]
● WORKDIR
● The WORKDIR directive is used to set where the command defined with CMD is to be executed.
● Example:
# Usage: WORKDIR /path
WORKDIR ~/
Summary: Builder main commands
FROM image|scratch – base image for the build
MAINTAINER email – name of the maintainer (metadata)
COPY path dst – copy path from the context into the container at location dst
ADD src dst – same as COPY, but untars archives and accepts http urls
RUN args... – run an arbitrary command inside the container
USER name – set the default username
WORKDIR path – set the default working directory
CMD args... – set the default command
ENV name value – set an environment variable
Dockerfile example
● How to Use Dockerfiles
● Using Dockerfiles is as simple as having the docker daemon run one. The output after executing the script will be the ID of the new docker image.
● Usage:
# Build an image using the Dockerfile at the current location
# Example: sudo docker build -t [name] .
[root@docker-host ~]# docker build -t nginx_yusuf .
Lab: Dockerfile example
● An example Dockerfile, to be placed in the build context directory:
############################################################
# Dockerfile to build nginx container images
# Based on debian latest version
############################################################
# base image: last debian release
FROM debian:latest
# name of the maintainer of this image
MAINTAINER yusuf.hadiwinata@gmail.com
# install the latest upgrades
RUN apt-get update && apt-get -y dist-upgrade && echo yusuf-test > /tmp/test
# install nginx
RUN apt-get -y install nginx
# set the default container command
# -> run nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]
# Tell the docker engine that there will be something listening on the tcp port 80
EXPOSE 80
Lab: Build & Run Dockerfile example
[root@docker-host nginx_yusuf]# docker build -t nginx_yusuf .
Sending build context to Docker daemon 2.56kB
Step 1/6 : FROM debian:latest
---> a25c1eed1c6f
Step 2/6 : MAINTAINER yusuf.hadiwinata@gmail.com
---> Running in 94409ebe59ac
---> eaefc54975b7
Removing intermediate container 94409ebe59ac
Step 3/6 : RUN apt-get update && apt-get -y dist-upgrade
---> Running in 425285dbf037
Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]
Ign http://deb.debian.org jessie InRelease
Get:2 http://deb.debian.org jessie-updates InRelease [145 kB]
Get:3 http://deb.debian.org jessie Release.gpg [2373 B]
Get:4 http://deb.debian.org jessie-updates/main amd64 Packages [17.6 kB]
Get:5 http://security.debian.org jessie/updates/main amd64 Packages [521 kB]
Get:6 http://deb.debian.org jessie Release [148 kB]
Get:7 http://deb.debian.org jessie/main amd64 Packages [9065 kB]
------------------- TRUNCATED -------------------------
Processing triggers for sgml-base (1.26+nmu4) ...
---> 88795938427f
Removing intermediate container 431ae6bc8e0a
Step 5/6 : CMD nginx -g daemon off;
---> Running in 374ff461f187
---> 08e1433ccd68
Removing intermediate container 374ff461f187
Step 6/6 : EXPOSE 80
---> Running in bac435c454a8
---> fa8de9e81136
Removing intermediate container bac435c454a8
Successfully built fa8de9e81136
Successfully tagged nginx_yusuf:latest
Lab: Dockerfile example
● Using the image we have built, we can now proceed to the final step: creating a container running an nginx instance inside, using a name of our choice (if desired, with --name [name]).
● Note: If a name is not set, we will need to deal with complex alphanumeric IDs, which can be obtained by listing all the containers using sudo docker ps -l.
[root@docker-host nginx_yusuf]# docker run --name my_first_nginx_instance -i -t nginx_yusuf bash
root@4b90e5d6dda8:/# cat /tmp/test
yusuf-test
Docker Compose
Manage a collection of containers
Dockerfile vs Docker Compose: which is better?
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features. Compose is great for development, testing, and staging environments, as well as CI workflows. You can learn more about each case in Common Use Cases.
● Using Compose is basically a three-step process:
● Define your app's environment with a Dockerfile so it can be reproduced anywhere.
● Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
● Lastly, run docker-compose up and Compose will start and run your entire app (a minimal sketch follows).
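A minimal sketch of those three steps (the service names and images are illustrative, not part of the lab that follows):
# docker-compose.yml – two services run together in one isolated environment
version: "2"
services:
  web:
    build: .            # built from the Dockerfile in the current directory
    ports:
      - "8080:80"
  redis:
    image: redis:alpine
With this file in place, docker-compose up builds web, pulls redis, and starts both; docker-compose down tears the environment back down.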
Lab: Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
This lab will set up a PHP application using Docker and docker-compose, creating a development environment with PHP7-fpm, MariaDB and Nginx.
● Clone or download the sample project from Github:
[root@docker-host ~]# git clone https://github.com/isnuryusuf/docker-php7.git
Cloning into 'docker-php7'...
remote: Counting objects: 62, done.
remote: Total 62 (delta 0), reused 0 (delta 0), pack-reused 62
Unpacking objects: 100% (62/62), done.
[root@docker-host ~]# cd docker-php7
[root@docker-host ~]# yum -y install epel-release
[root@docker-host ~]# yum install -y python-pip
[root@docker-host ~]# pip install docker-compose
[root@docker-host docker-php7]# docker-compose up
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
● Inside docker-php7 we have a directory structure like this:
├── app
│   └── public
│       └── index.php
├── database
├── docker-compose.yml
├── fpm
│   ├── Dockerfile
│   └── supervisord.conf
├── nginx
│   ├── Dockerfile
│   └── default.conf
● app - Our application will be kept in this directory.
● database is where MariaDB will store all the database files.
● fpm folder contains the Dockerfile for the php7-fpm container and the Supervisord config.
● nginx folder contains the Dockerfile for nginx and the default nginx config which will be copied to the container.
● docker-compose.yml - Our docker-compose configuration. In this file, we define the containers and services that we want to start, along with associated volumes, ports, etc. When we run docker-compose up, it reads this file and builds the images.
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
● docker-compose.yml
version: "2"
services:
  nginx:
    build:
      context: ./nginx
    ports:
      - "8080:80"
    volumes:
      - ./app:/var/app
  fpm:
    build:
      context: ./fpm
    volumes:
      - ./app:/var/app
    expose:
      - "9000"
    environment:
      - "DB_HOST=db"
      - "DB_DATABASE=laravel"
  db:
    image: docker-registry:5000/mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel
    volumes:
      - ./database:/var/lib/mysql
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
● docker-compose up output:
[root@docker-host docker-php7]# docker-compose up
Creating network "dockerphp7_default" with the default driver
Building fpm
Step 1 : FROM ubuntu:latest
Trying to pull repository docker.io/library/ubuntu ...
sha256:ea1d854d38be82f54d39efe2c67000bed1b03348bcc2f3dc094f260855dff368: Pulling from docker.io/library/ubuntu
bd97b43c27e3: Pull complete
6960dc1aba18: Pull complete
2b61829b0db5: Pull complete
1f88dc826b14: Pull complete
73b3859b1e43: Pull complete
Digest: sha256:ea1d854d38be82f54d39efe2c67000bed1b03348bcc2f3dc094f260855dff368
Status: Downloaded newer image for docker.io/ubuntu:latest
---> 7b9b13f7b9c0
Step 2 : RUN apt-get update && apt-get install -y software-properties-common language-pack-en-base && LC_ALL=en_US.UTF-8 add-apt-repository -y ppa:ondrej/php && apt-get update && apt-get install -y php7.0 php7.0-fpm php7.0-mysql mcrypt php7.0-gd curl php7.0-curl php-redis php7.0-mbstring sendmail supervisor && mkdir /run/php && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
---> Running in 812423cbbeac
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
Introduction to Kompose
Why do developers like Docker Compose?
● Simple (to learn and adopt)
● Very easy to run containerised applications
● One-line command
● Local development
● Declarative
● Great UX
● Developer friendly
Introduction to Kompose
Devs say this about Kubernetes/OpenShift:
● Many new concepts to learn
● Pods, Deployment, RC, RS, Job, DaemonSets, Routes/Ingress, Volumes ... phew!!!
● Complex / complicated
● Difficult
● Difficult/complicated UX (especially when getting started)
● Setting up local development requires work
● Not developer friendly
Introduction to Kompose
How do we bridge this gap?
● We saw an opportunity here!
● Can we reduce the learning curve?
● Can we make adopting Kubernetes/OpenShift simpler?
● What can we do to make it simple?
● Which bits need to be made simple?
● How can we make it simple?
Introduction to Kompose
Enter Kompose! (Kompose with a "K")
Docker Compose to OpenShift in *one* command
More info: http://kompose.io/
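Typical usage, assuming kompose is installed and a cluster is reachable (flags as in the kompose CLI of that era):
[root@docker-host ~]# kompose convert -f docker-compose.yml                        # emit Kubernetes manifests
[root@docker-host ~]# kompose convert -f docker-compose.yml --provider openshift  # emit OpenShift artifacts instead
[root@docker-host ~]# kompose up                                                   # deploy straight to the connected cluster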
Docker Management
● Portainer.io
The Easiest Way to Manage Docker
Portainer is an open source, lightweight management UI which allows you to easily manage your Docker host or Swarm cluster
Available on Linux, Windows & OSX
Installing Portainer.io
● Portainer runs as a lightweight Docker container (the Docker image weighs less than 4MB) on a Docker engine or Swarm cluster. Therefore, you are one command away from running Portainer on any machine using Docker.
● Use the following Docker commands to run Portainer:
[root@docker-host ~]# docker volume create portainer_data
[root@docker-host ~]# docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data docker-registry:5000/portainer/portainer:latest
● You can now access Portainer by pointing your web browser at http://DOCKER_HOST:9000. Ensure you replace DOCKER_HOST with the address of the Docker host where Portainer is running.
Manage a new endpoint
● After your first authentication, Portainer will ask you for information about the Docker endpoint you want to manage. You'll have the following choices:
● Manage the local engine where Portainer is running (you'll need to bind mount the Docker socket via -v /var/run/docker.sock:/var/run/docker.sock on the Docker CLI when running Portainer) – not available for Windows Containers (Windows Server 2016)
● Manage a remote Docker engine: you'll just have to specify the URL to your Docker endpoint, give it a name, and TLS info if needed
Declare initial endpoint via CLI
● You can specify the initial endpoint you want Portainer to manage via the CLI; use the -H flag and the tcp:// protocol to connect to a remote Docker endpoint:
[root@docker-host ~]# docker run -d -p 9000:9000 portainer/portainer -H tcp://<REMOTE_HOST>:<REMOTE_PORT>
● Ensure you replace REMOTE_HOST and REMOTE_PORT with the address/port of the Docker engine you want to manage.
● You can also bind mount the Docker socket to manage a local Docker engine (not available for Windows Containers (Windows Server 2016)):
[root@docker-host ~]# docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
● Note: If your host is using SELinux, you'll need to pass the --privileged flag to the Docker run command:
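A sketch, mirroring the invocation above with --privileged added (adjust ports and volumes to your setup):
[root@docker-host ~]# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer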
Portainer Web Console - Initialization
Portainer 1st Access
Portainer.io Manage Locally
● Ensure that you have started the Portainer container with the following Docker flag: -v "/var/run/docker.sock:/var/run/docker.sock"
[root@docker-host ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
afd62e28aee5 portainer/portainer "/portainer" 5 minutes ago Up 5 minutes 0.0.0.0:9000->9000/tcp adoring_brown
60fec4b9a9bf centos "bash" 21 minutes ago Up 21 minutes high_kare
[root@docker-host ~]# docker stop afd62e28aee5
afd62e28aee5
[root@docker-host ~]# docker run -v "/var/run/docker.sock:/var/run/docker.sock" -d -p 9000:9000 portainer/portainer
db232db974fa6c5a232f0c2ddfc0404dfac6bd34c087934c4c51b0208ececf0f
Portainer.io Dashboard
Portainer Documentation URL
https://portainer.readthedocs.io/
Docker Orchestration
● Docker Machine
● Docker Swarm
● Docker Compose
● Kubernetes & Openshift
Docker / Container Problems
We need more than just packing and isolation
• Scheduling: Where should my containers run?
• Lifecycle and health: Keep my containers running despite failures
• Discovery: Where are my containers now?
• Monitoring: What's happening with my containers?
• Auth{n,z}: Control who can do things to my containers
• Aggregates: Compose sets of containers into jobs
• Scaling: Making jobs bigger or smaller
Docker Machine
Abstraction for provisioning and using docker hosts
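A sketch of the typical flow (the driver and machine name are illustrative):
[root@docker-host ~]# docker-machine create --driver virtualbox dev   # provision a new Docker host in a local VM
[root@docker-host ~]# eval $(docker-machine env dev)                  # point the local docker CLI at it
[root@docker-host ~]# docker ps                                       # now talks to the "dev" machine's daemon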
Docker Swarm
Manage a cluster of docker hosts
Swarm mode overview
● Feature highlights
● Cluster management integrated with Docker Engine: Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services. You don't need additional orchestration software to create or manage a swarm.
● Decentralized design: Instead of handling differentiation between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine. This means you can build an entire swarm from a single disk image.
● Declarative service model: Docker Engine uses a declarative approach to let you define the desired state of the various services in your application stack. For example, you might describe an application comprised of a web front end service with message queueing services and a database backend.
Swarm mode overview
● Feature highlights
● Scaling: For each service, you can declare the number of tasks you want to run. When you scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
● Desired state reconciliation: The swarm manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state. For example, if you set up a service to run 10 replicas of a container, and a worker machine hosting two of those replicas crashes, the manager will create two new replicas to replace the replicas that crashed. The swarm manager assigns the new replicas to workers that are running and available.
● Multi-host networking: You can specify an overlay network for your services. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application.
Swarm mode overview
● Feature highlights
● Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balance running containers. You can query every container running in the swarm through a DNS server embedded in the swarm.
● Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
● Secure by default: Each node in the swarm enforces TLS mutual authentication and encryption to secure communications between itself and all other nodes. You have the option to use self-signed root certificates or certificates from a custom root CA.
● Rolling updates: At rollout time you can apply service updates to nodes incrementally. The swarm manager lets you control the delay between service deployment to different sets of nodes. If anything goes wrong, you can roll back a task to a previous version of the service.
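A sketch tying the scaling and rolling-update features together (the service name and image versions are illustrative):
[root@docker-host ~]# docker service create --name web --replicas 10 --update-delay 10s --update-parallelism 2 nginx:latest
[root@docker-host ~]# docker service update --image nginx:1.13 web   # tasks are replaced 2 at a time, 10s apart
[root@docker-host ~]# docker service update --rollback web           # revert to the previous service spec if needed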
Docker Swarm Lab - Init your swarm
● Create a Docker Swarm first
[root@docker-host]# curl -fsSL https://get.docker.com/ | sh
[root@docker-host]# systemctl start docker
[root@docker-host]# systemctl status docker
[root@docker-host]# systemctl enable docker
[root@docker-host]# docker swarm init
Swarm initialized: current node (73yn8s77g2xz3277f137hye41) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0xg56f2v9tvy0lg9d4j7xbf7cf1mg8ylm7d19f39gqvc41d1yk-0trhxa6skixvif1o6pultvcp3 10.7.60.26:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
● Show the members of the swarm
From the first terminal, check the number of nodes in the swarm (running this command from the second, worker terminal will fail, as swarm-related commands need to be issued against a swarm manager).
[root@docker-host ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
73yn8s77g2xz3277f137hye41 * docker-host Ready Active Leader
Docker Swarm - Clone the voting-app
● Let's retrieve the voting app code from GitHub and go into the application folder.
● Ensure you are in the first terminal and do the below:
[root@docker-host ~]# git clone https://github.com/docker/example-voting-app
Cloning into 'example-voting-app'...
remote: Counting objects: 374, done.
remote: Total 374 (delta 0), reused 0 (delta 0), pack-reused 374
Receiving objects: 100% (374/374), 204.32 KiB | 156.00 KiB/s, done.
Resolving deltas: 100% (131/131), done.
[root@docker-host ~]# cd example-voting-app
Docker Swarm - Deploy a stack
● A stack is a group of services that are deployed together. The docker-stack.yml file in the current folder will be used to deploy the voting app as a stack.
● Ensure you are in the first terminal and do the below:
[root@docker-host example-voting-app]# docker stack deploy --compose-file=docker-stack.yml voting_stack
● Check the stack deployed from the first terminal
[root@docker-host ~]# docker stack ls
NAME          SERVICES
voting_stack  6
Docker Swarm - Deploy a stack
● List the services within the stack
[root@docker-host ~]# docker stack services voting_stack
ID            NAME                     MODE        REPLICAS  IMAGE
10rt1wczotze  voting_stack_visualizer  replicated  1/1       dockersamples/visualizer:stable
8lqj31k3q5ek  voting_stack_redis       replicated  2/2       redis:alpine
nhb4igkkyg4y  voting_stack_result      replicated  2/2       dockersamples/examplevotingapp_result:before
nv8d2z2qhlx4  voting_stack_db          replicated  1/1       postgres:9.4
ou47zdyf6cd0  voting_stack_vote        replicated  2/2       dockersamples/examplevotingapp_vote:before
rpnxwmoipagq  voting_stack_worker      replicated  1/1       dockersamples/examplevotingapp_worker:latest
● Check the tasks of a service within the stack
[root@docker-host ~]# docker service ps voting_stack_vote
ID            NAME                 IMAGE                                       NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
my7jqgze7pgg  voting_stack_vote.1  dockersamples/examplevotingapp_vote:before  node1  Running        Running 56 seconds ago
3jzgk39dyr6d  voting_stack_vote.2  dockersamples/examplevotingapp_vote:before  node2  Running        Running 58 seconds ago
Docker Swarm - Creating services
● The next step is to create a service and list out the services. This creates a single service called web that runs the latest nginx. Type the below commands in the first terminal:
[root@docker-host ~]# docker service create -p 80:80 --name web nginx:latest
[root@docker-host example-voting-app]# docker service ls | grep nginx
24jakxhfl06l  web  replicated  1/1  nginx:latest  *:80->80/tcp
● Scaling up the application
[root@docker-host ~]# docker service inspect web
[root@docker-host ~]# docker service scale web=15
web scaled to 15
[root@docker-host ~]# docker service ls | grep nginx
24jakxhfl06l  web  replicated  15/15  nginx:latest  *:80->80/tcp
Docker Swarm - Creating services
● Scaling down the application
[root@docker-host ~]# docker service scale web=10
web scaled to 10
[root@docker-host ~]# docker service ps web
ID            NAME    IMAGE         NODE         DESIRED STATE  CURRENT STATE                 ERROR  PORTS
jrgkmkvm4idf  web.2   nginx:latest  docker-host  Running        Running about a minute ago
dmreadcm745k  web.4   nginx:latest  docker-host  Running        Running about a minute ago
5iik87rbsfc2  web.6   nginx:latest  docker-host  Running        Running about a minute ago
7cuzpz79q2hp  web.7   nginx:latest  docker-host  Running        Running about a minute ago
ql7g7k3dlbqw  web.8   nginx:latest  docker-host  Running        Running about a minute ago
k0bzk7m51cln  web.9   nginx:latest  docker-host  Running        Running about a minute ago
0teod07eihns  web.10  nginx:latest  docker-host  Running        Running about a minute ago
sqxfaqlnkpab  web.11  nginx:latest  docker-host  Running        Running about a minute ago
mkrsmwgti606  web.12  nginx:latest  docker-host  Running        Running about a minute ago
ucomtg454jlk  web.15  nginx:latest  docker-host  Running        Running about a minute ago
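When the lab is finished, the resources can be removed; a minimal cleanup sketch using the names created above:
[root@docker-host ~]# docker service rm web            # remove the nginx service and its tasks
[root@docker-host ~]# docker stack rm voting_stack     # remove all services of the voting-app stack
[root@docker-host ~]# docker swarm leave --force       # dissolve the single-node swarm (--force is needed on a manager)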
Kubernetes is a Solution?
Kubernetes – Container Orchestration at Scale
Greek for "Helmsman"; also the root of the words "governor" and "cybernetic"
• Container cluster manager
  - Inspired by the technology that runs Google
• Runs anywhere
  - Public cloud
  - Private cloud
  - Bare metal
• Strong ecosystem
  - Partners: Red Hat, VMware, CoreOS...
  - Community: clients, integrations
Kubernetes Resource Types
● Pods
● Represent a collection of containers that share resources, such as IP addresses and persistent storage volumes. The pod is the basic unit of work for Kubernetes.
● Services
● Define a single IP/port combination that provides access to a pool of pods. By default, services connect clients to pods in a round-robin fashion.
Kubernetes Resource Types
● Replication Controllers
● A framework for defining pods that are meant to be horizontally scaled. A replication controller includes a pod definition that is to be replicated, and the pods created from it can be scheduled to different nodes.
● Persistent Volumes (PV)
● Provision persistent networked storage to pods that can be mounted inside a container to store data.
● Persistent Volume Claims (PVC)
● Represent a request for storage by a pod to Kubernetes.
OpenShift Resource Types
● Deployment Configurations (dc)
● Represent a set of pods created from the same container image, managing workflows such as rolling updates. A dc also provides a basic but extensible Continuous Delivery workflow.
● Build Configurations (bc)
● Used by the OpenShift Source-to-Image (S2I) feature to build a container image from application source code stored in a Git server. A bc works together with a dc to provide a basic but extensible Continuous Integration/Continuous Delivery workflow.
● Routes
● Represent a DNS host name recognized by the OpenShift router as an ingress point for applications and microservices.
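A short CLI sketch of how these three resource types are typically driven (the resource name myapp is a placeholder):
[root@docker-host ~]# oc start-build myapp           # trigger a new build from the myapp BuildConfig
[root@docker-host ~]# oc rollout latest dc/myapp     # trigger a new deployment from the DeploymentConfig
[root@docker-host ~]# oc expose service myapp        # create a route pointing at the myapp service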
Kubernetes Solution Detail
[Architecture diagram: Kubernetes cluster with a master (etcd, SkyDNS, replication controller, API for Dev/Ops), nodes running pods and services, an image registry, a router for visitors, policies, storage volumes, and ELK logging]
Core Concepts
• Pod
• Labels & Selectors
• ReplicationController
• Service
• Persistent Volumes
Kubernetes: The Pods
Pod definition:
• Group of containers
• Related to each other
• Same namespace
• Ephemeral
Examples:
• WordPress
• MySQL
• WordPress + MySQL
• ELK
• Nginx + Logstash
• Auth-Proxy + PHP
• App + data-load
Kubernetes: Building a Pod
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "hello-openshift"
  },
  "spec": {
    "containers": [
      {
        "name": "hello-openshift",
        "image": "openshift/hello-openshift",
        "ports": [
          { "containerPort": 8080 }
        ]
      }
    ]
  }
}
# kubectl create -f hello-openshift.yaml
# oc create -f hello-openshift.yaml
● OpenShift/Kubernetes runs containers inside Kubernetes pods, and to create a pod from a container image, Kubernetes needs a pod resource definition. This can be provided either as a JSON or YAML text file, or can be generated from defaults by oc new-app or the web console.
● This JSON object is a pod resource definition because it has the attribute "kind" with value "Pod". It contains a single container whose name is "hello-openshift" and that references the image named "openshift/hello-openshift". The container also exposes a single port, listening on TCP port 8080.
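For comparison, the same pod definition expressed in YAML (equivalent to the JSON above; the filename is a hypothetical choice):
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift
spec:
  containers:
  - name: hello-openshift
    image: openshift/hello-openshift
    ports:
    - containerPort: 8080     # same TCP port the JSON form declares
# oc create -f hello-openshift-pod.yaml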
Kubernetes: List Pods
[root@centos-16gb-sgp1-01 ~]# oc get pod
NAME                               READY  STATUS     RESTARTS  AGE
bgdemo-1-build                     0/1    Completed  0         16d
bgdemo-1-x0wlq                     1/1    Running    0         16d
dc-gitlab-runner-service-3-wgn8q   1/1    Running    0         8d
dc-minio-service-1-n0614           1/1    Running    5         23d
frontend-1-build                   0/1    Completed  0         24d
frontend-prod-1-gmcrw              1/1    Running    2         23d
gitlab-ce-7-kq0jp                  1/1    Running    2         24d
hello-openshift                    1/1    Running    2         24d
jenkins-3-8grrq                    1/1    Running    12        21d
os-example-aspnet-2-build          0/1    Completed  0         22d
os-example-aspnet-3-6qncw          1/1    Running    0         21d
os-sample-java-web-1-build         0/1    Completed  0         22d
os-sample-java-web-2-build         0/1    Completed  0         22d
os-sample-java-web-3-build         0/1    Completed  0         22d
os-sample-java-web-3-sqf41         1/1    Running    0         22d
os-sample-python-1-build           0/1    Completed  0         22d
os-sample-python-1-p5b73           1/1    Running    0         22d
Kubernetes: Replication Controller
"nginx" RC object:
• Pod scaling
• Pod monitoring
• Rolling updates
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:v2.2
        ports:
        - containerPort: 80
# kubectl create -f nginx-rc.yaml
[Diagram: the master (etcd, replication controller, API for Dev/Ops) keeps the declared number of nginx pods scheduled across the cluster's nodes]
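Once the controller exists, scaling is a one-liner; a small sketch against the nginx RC defined above:
# kubectl scale rc nginx --replicas=5    # ask the RC to maintain 5 pods
# kubectl get rc nginx                   # DESIRED/CURRENT should converge to 5
# oc scale rc nginx --replicas=2         # the same operation with the OpenShift client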
Kubernetes: Service
Definition:
• Load-balanced virtual IP (layer 4)
• Abstraction layer for your app
• Enables service discovery
  • DNS
  • ENV
Examples: frontend, database, api
[Diagram: a visitor's request reaches a PHP pod, which connects to the MySQL service at 172.16.0.1:3386 (db.project.cluster.local); the service load-balances to MySQL pods at 10.1.0.1:3306 and 10.2.0.1:3306]
<?php
mysql_connect(getenv("db_host"))
mysql_connect("db:3306")
?>
Kubernetes: Service, continued...
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: MySQL
      role: BE
      phase: DEV
    name: MySQL
  spec:
    ports:
    - name: mysql-data
      port: 3386
      protocol: TCP
      targetPort: 3306
    selector:
      app: MySQL
      role: BE
    sessionAffinity: None
    type: ClusterIP
[Diagram flow: 1. Dev/Ops create the service object and the pod objects are registered via the API → 2. SkyDNS and the kube-proxy on each node watch for changes → 3. SkyDNS registers the service name while kube-proxy updates iptables rules, so client traffic (here a PHP pod) is redirected to the MySQL pods at 10.1.0.1:3306 and 10.2.0.1:3306]
Kubernetes: Labels & Selectors
think SQL 'select ... where ...'
- apiVersion: v1
  kind: Pod
  metadata:
    labels:
      app: MyApp
      role: BE
      phase: DEV
    name: MyApp
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: MyApp
      role: BE
      phase: DEV
    name: MyApp
  spec:
    ports:
    - name: 80-tcp
      port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      app: MyApp
      role: BE
    sessionAffinity: None
    type: ClusterIP
[Diagram: the service selects only the pods whose labels match its selector (role: BE), skipping pods labeled role: FE or phase: TST]
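Selectors can be exercised directly from the CLI; a small sketch using the labels above:
# kubectl get pods -l app=MyApp,role=BE      # only the BE pods that back the service
# kubectl get pods -l 'phase in (DEV,TST)'   # set-based selector, SQL-like "where phase in (...)"
# kubectl get pods --show-labels             # list pods together with their labels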
Kubernetes: Ingress / Router
Router definition:
• Layer 7 load balancer / reverse proxy
• SSL/TLS termination
• Name-based virtual hosting
• Context-path-based routing
• Customizable (image)
  • HAProxy
  • F5 BIG-IP
Examples:
• https://www.i-3.co.id/myapp1/
• http://www.i-3.co.id/myapp2/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite
spec:
  rules:
  - host: www.i-3.co.id
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
[Diagram: a visitor reaches https://i-3.co.id/service1/ through the router, which forwards to the service and its pods]
Kubernetes: Router Detail
● Check the router environment variables to find connection parameters for the HAProxy process running inside the pod
[root@centos-16gb-sgp1-01 ~]# oc env pod router-1-b97bv --list
# pods router-1-b97bv, container router
DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
ROUTER_EXTERNAL_HOST_HOSTNAME=
ROUTER_EXTERNAL_HOST_HTTPS_VSERVER=
ROUTER_EXTERNAL_HOST_HTTP_VSERVER=
ROUTER_EXTERNAL_HOST_INSECURE=false
ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS=
ROUTER_EXTERNAL_HOST_PARTITION_PATH=
ROUTER_EXTERNAL_HOST_PASSWORD=
ROUTER_EXTERNAL_HOST_PRIVKEY=/etc/secret-volume/router.pem
ROUTER_EXTERNAL_HOST_USERNAME=
ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR=
ROUTER_SERVICE_HTTPS_PORT=443
ROUTER_SERVICE_HTTP_PORT=80
ROUTER_SERVICE_NAME=router
ROUTER_SERVICE_NAMESPACE=default
ROUTER_SUBDOMAIN=
STATS_PASSWORD=XXXXXX
STATS_PORT=1936
STATS_USERNAME=admin
Kubernetes: Router - HAProxy
Kubernetes: Persistent Storage
For Ops:
• Google
• AWS EBS
• OpenStack's Cinder
• Ceph
• GlusterFS
• NFS
• iSCSI
• FibreChannel
• EmptyDir
For Dev: "Claim"
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /tmp
    server: 172.17.0.2
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
Kubernetes: Persistent Storage
● Kubernetes provides a framework for managing external persistent storage for containers. Kubernetes recognizes a PersistentVolume resource, which defines local or network storage. A pod resource can reference a PersistentVolumeClaim resource in order to access a certain storage size from a PersistentVolume (see the pod sketch below).
● Kubernetes also specifies whether a PersistentVolume resource can be shared between pods or whether each pod needs its own PersistentVolume with exclusive access. When a pod moves to another node, it stays connected to the same PersistentVolumeClaim and PersistentVolume instances. So a pod's persistent storage data follows it, regardless of the node where it is scheduled to run.
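A minimal sketch of a pod consuming the myclaim PVC from the earlier slide (the pod name, image, and mount path are illustrative assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html    # the container sees the NFS-backed volume here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim                  # binds to the PVC, which in turn binds to a PV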
Kubernetes: Persistent Volume Claim
[Diagram: Ops provision a persistent volume farm from the storage providers; Dev projects (ABC, XYZ) claim and mount volumes (10G SSD, 40G, 5G SSD, 10G) into their pods]
Kubernetes: Networking
• Each host = 256 IPs
• Each pod = 1 IP
Programmable infra:
• GCE / GKE
• AWS
• OpenStack
• Nuage
Overlay networks:
• Flannel
• Weave
• OpenShift-SDN
• Open vSwitch
Kubernetes: Networking
● Docker networking is very simple. Docker creates a virtual kernel bridge and connects each container network interface to it. Docker itself does not provide a way for a pod on one host to connect to a pod on another host. Docker also does not provide a way to assign a public fixed IP address to an application so external users can access it.
● Kubernetes provides service and route resources to manage network visibility between pods and from the external world to them. A service load-balances received network requests among its pods, while providing a single internal IP address for all clients of the service (which usually are other pods). Containers and pods do not need to know where other pods are; they just connect to the service. A route provides an external IP to a service, making it externally visible.
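A small sketch of both service-discovery mechanisms from inside a client pod, assuming a service named db in a project named project (matching the earlier db.project.cluster.local example):
$ getent hosts db.project.cluster.local     # DNS-based discovery via the cluster DNS (SkyDNS)
$ echo $DB_SERVICE_HOST $DB_SERVICE_PORT    # environment variables injected at container start
Note that the DB_SERVICE_* variables are only injected into pods created after the service exists, while DNS lookups work regardless of creation order.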
Kubernetes: Hosting Platform
• Scheduling
• Lifecycle and health
• Discovery
• Monitoring
• Auth{n,z}
• Scaling
[Diagram: the same cluster architecture as before — master (etcd, SkyDNS, replication controller, API for Dev/Ops), nodes with pods and services, registry, router for visitors, policies, storage volumes, and ELK logging]
Kubernetes: High Availability
● High Availability (HA) on a Kubernetes/OpenShift Container Platform cluster has two distinct aspects: HA for the OCP infrastructure itself, that is, the masters, and HA for the applications running inside the OCP cluster.
● For applications, or "pods", OCP handles this by default. If a pod is lost, for any reason, Kubernetes schedules another copy and connects it to the service layer and to the persistent storage. If an entire node is lost, Kubernetes schedules replacements for all its pods, and eventually all applications will be available again. The applications inside the pods are responsible for their own state, so if they are stateful they need to be HA by themselves, employing proven techniques such as HTTP session replication or database replication.
Authentication Methods
● Authentication is based on OAuth, which provides a standard HTTP-based API for authenticating both interactive and non-interactive clients.
– HTTP Basic, to delegate to external Single Sign-On (SSO) systems
– GitHub and GitLab, to use GitHub and GitLab accounts
– OpenID Connect, to use OpenID-compatible SSO and Google Accounts
– OpenStack Keystone v3 server
– LDAP v3 server
Kubernetes: Authorization Policies
● There are two levels of authorization policies:
– Cluster policy: Controls who has various access levels to Kubernetes / OpenShift Container Platform and all projects. Roles that exist in the cluster policy are considered cluster roles.
– Local policy: Controls which users have access to their projects. Roles that exist in a local policy are considered local roles.
● Authorization is managed using the following:
– Rules: Sets of permitted verbs on a set of resources; for example, whether someone can delete projects.
– Roles: Collections of rules. Users and groups can be bound to multiple roles at the same time.
– Bindings: Associations between users and/or groups and a role.
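In practice, both levels are managed with the same CLI verbs; a short sketch (the user and project names are placeholders):
# oadm policy add-cluster-role-to-user cluster-admin admin      # cluster policy: grant a cluster role to a user
# oc policy add-role-to-user edit developer -n test-project     # local policy: grant a role within one project
# oc get rolebindings -n test-project                           # inspect which users are bound to which roles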
OpenShift as a Development Platform
Project spaces
Build tools
Integration with your IDE
We Need More than just Orchestration
• Self-service: templates, web console
• Multi-language
• Automation: deploy, build
• DevOps collaboration
• Secure: namespaced, RBAC
• Scalable: integrated LB
• Open source
• Enterprise: authentication, web console, central logging
This past week at KubeCon 2016, Red Hat CTO Chris Wright (@kernelcdub) gave a keynote entitled OpenShift is Enterprise-Ready Kubernetes. There it was for the 1200 people in attendance: OpenShift is 100% Kubernetes, plus all the things that you'll need to run it in production environments.
- https://blog.openshift.com/enterprise-ready-kubernetes/
OpenShift is Red Hat's Container Application Platform (PaaS)
• Self-service: templates, web console
• Multi-language
• Automation: deploy, build
• DevOps collaboration
• Secure: namespaced, RBAC
• Scalable: integrated LB
• Open source
• Enterprise: authentication, web console, central logging
https://blog.openshift.com/red-hat-chose-kubernetes-openshift/
https://blog.openshift.com/chose-not-join-cloud-foundry-foundation-recommendations-2015/
OpenShift = Enterprise K8s
OpenShift Software Stack
OpenShift Technology
Basic container infrastructure is shown, integrated and enhanced by Red Hat:
– The base OS is RHEL/CentOS/Fedora.
– Docker provides the basic container management API and the container image file format.
– Kubernetes is an open source project aimed at managing a cluster of hosts (physical or virtual) running containers. It works with templates that describe multicontainer applications composed of multiple resources, and how they interconnect. If Docker is the "core" of OCP, Kubernetes is the "heart" that keeps it moving.
– Etcd is a distributed key-value store, used by Kubernetes to store configuration and state information about the containers and other resources inside the OCP cluster.
Kubernetes Embedded
https://master:8443/api = Kubernetes API
/oapi = OpenShift API
/console = OpenShift Web Console
OpenShift:
• 1 binary for master
• 1 binary for node
• 1 binary for client
• Docker image
• Vagrant image
Kubernetes:
• ApiServer, Controller, Scheduler, etcd
• KubeProxy, Kubelet
• Kubectl
Project Namespace
Project
• Sandboxed environment
• Network (VXLAN)
• Authorization policies
• Resource quotas
• Ops in control, Dev freedom
App
• Images run in containers
• Grouped together as a service
• Defined as a template
oc new-project Project-Dev
oc policy add-role-to-user admin scientist1
oc new-app --source=https://gitlab/MyJavaApp --docker-image=jboss-eap
CI/CD Flow
[Pipeline diagram: a developer commits to the SCM, which triggers Jenkins to pull the artifact from the artifact repository, build the image, and push it to the image registry; the image is deployed to the DEV project and then promoted through UAT to PROD, with QA, Ops, and release managers gating each promotion]
OpenShift Build & Deploy Architecture
kind: "BuildConfig"
metadata:
  name: "myApp-build"
spec:
  source:
    type: "Git"
    git:
      uri: "git://gitlab/project/hello.git"
    dockerfile: "jboss-eap-6"
  strategy:
    type: "Source"
    sourceStrategy:
      from:
        kind: "Image"
        name: "jboss-eap-6:latest"
  output:
    to:
      kind: "Image"
      name: "myApp:latest"
  triggers:
  - type: "GitHub"
    github:
      secret: "secret101"
  - type: "ImageChange"
# oc start-build myApp-build
[Diagram: the OpenShift master (etcd, SkyDNS, replication controller, API, policies) drives build and deploy configs; built images are pushed to the registry and rolled out to pods behind the router, with EFK logging]
Building Images
● OpenShift/Kubernetes can build a pod from three different sources:
– A container image: The first source leverages the Docker container ecosystem. Many vendors package their applications as container images, and a pod can be created to run those application images inside OpenShift.
– A Dockerfile: The second source also leverages the Docker container ecosystem. A Dockerfile is the Docker community standard way of specifying a script to build a container image from Linux OS distribution tools.
– Application source code (Source-to-Image or S2I): The third source, S2I, empowers a developer to build container images for an application without dealing with or knowing about Docker internals, image registries, and Dockerfiles.
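All three sources are reachable through oc new-app; a sketch using images and repos that appear elsewhere in this deck:
# oc new-app openshift/deployment-example                                      # 1) from an existing container image
# oc new-app https://github.com/openshift/ruby-hello-world --strategy=docker   # 2) from a Dockerfile in a Git repo
# oc new-app php~https://github.com/OpenShiftDemos/os-sample-php               # 3) S2I from application source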
Build & Deploy an Image
Code → Build → Deploy
[Diagram: Source-to-Image — developer pushes code to the SCM, a builder image turns it into a container image]
Builder images:
• JBoss EAP
• PHP
• Python
• Ruby
• Jenkins
• Custom (C++ / Go)
• S2I (bash) scripts
Triggers:
• Image change (tagging)
• Code change (webhook)
• Config change
Can configure different deployment strategies like A/B, rolling upgrade, automated base updates, and more. Can configure triggers for automated deployments, builds, and more.
OpenShift Build & Deploy Architecture
kind: "DeploymentConfig"
metadata:
  name: "myApp"
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
  triggers:
  - type: "ImageChange"
    from:
      kind: "Image"
      name: "nginx:latest"
# oc deploy myApp --latest
[Diagram: the OpenShift master drives the deploy config; the registry image is rolled out to pods behind the router, with EFK logging]
Pop Quiz
● What is a valid source for building a pod in OpenShift or Kubernetes (choose three)?
A) Java, Node.js, PHP, and Ruby source code
B) RPM packages
C) Container images in Docker format
D) XML files describing the pod metadata
E) Makefiles describing how to build an application
F) Dockerfiles
Answer the question and win merchandise
Continuous Integration Pipeline Example
[Pipeline diagram: commit → webhook → Source → Build → Deploy :test → Test → Deploy :test-fw → Tag :uat → Deploy :uat → Approve (ITIL) → Tag :prod → Deploy :prod, with registry ImageChange triggers driving each container deployment]
Monitoring & Inventory: CloudForm
CloudForm Management
CloudForm Management
OpenShift as a tool for developers
● Facilitates deployment and operation of web applications:
● Getting started with a web application/prototype
● Automate application deployment, roll back changes
● No need to maintain a VM and its OS
● Switch hosting platform (container portability)
● Good integration with code hosting (GitLab)
● CI/CD pipelines (GitLab/Jenkins)
● GitLab Review Apps
OpenShift: Jenkins CI example
BlueOcean...
Q & A
Any questions?
Let's continue to the OpenShift Lab...
Installing OpenShift Origin
Preparing the OS
All-In-One OpenShift
Post-Installation
Installing OpenShift Origin
OpenShift: make sure Docker is installed and configured
Skip this step if all requirements are already met
● Commands for installing Docker from the CentOS distribution (not latest):
[root@docker-host ~]# yum install docker
[root@docker-host ~]# sed -i '/OPTIONS=.*/cOPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' /etc/sysconfig/docker
[root@docker-host ~]# systemctl is-active docker
[root@docker-host ~]# systemctl enable docker
[root@docker-host ~]# systemctl start docker
● If you use the latest Docker version, configure this instead:
[root@docker-host ~]# vim /usr/lib/systemd/system/docker.service
Edit ExecStart=/usr/bin/dockerd to ExecStart=/usr/bin/dockerd --insecure-registry 172.30.0.0/16 --insecure-registry 192.168.1.0/24
[root@docker-host ~]# systemctl daemon-reload ; systemctl restart docker
Installing OpenShift Origin
Skip all steps below if the requirements are already met
● Set the hostname in the /etc/hosts file, for example: ip-address domain-name.tld
[root@docker-host ~]# cat /etc/hosts | grep docker
10.7.60.26 docker-host
● Enable the CentOS OpenShift Origin repo
[root@docker-host ~]# yum install centos-release-openshift-origin
● Install OpenShift Origin and the Origin client
[root@docker-host ~]# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion origin-clients origin
Installing OpenShift Origin
OpenShift: Setting Up
● Pick one, don't do all four:
– oc cluster up
– Running in a Docker container
– Running from an RPM
– Installer installation steps
● Refer to github.com/isnuryusuf/openshift-install/
– File: openshift-origin-quickstart.md
Installing OpenShift – oc cluster up
[root@docker-host ~]# oc cluster up --public-hostname=192.168.1.178
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ...
   Pulling image openshift/origin:v1.5.1
   Pulled 0/3 layers, 3% complete
   ...
   Pulled 3/3 layers, 100% complete
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... Using 10.7.60.26 as the server IP
-- Starting OpenShift container ...
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK

Installing OpenShift – firewall
● If you get an error like this when running oc cluster up:
-- Checking container networking ... FAIL
Error: containers cannot communicate with the OpenShift master
Details: The cluster was started. However, the container networking test failed.
Solution: Ensure that access to ports tcp/8443 and udp/53 is allowed on 10.7.60.26. You may need to open these ports on your machine's firewall.
Caused By:
Error: Docker run error rc=1
Details:
Image: openshift/origin:v1.5.1
Entrypoint: [/bin/bash]
Command: [-c echo 'Testing connectivity to master API' && curl -s -S -k https://10.7.60.26:8443 && echo 'Testing connectivity to master DNS server' && for i in {1..10}; do if curl -s -S -k https://kubernetes.default.svc.cluster.local; then exit 0; fi; sleep 1; done; exit 1]
Output: Testing connectivity to master API
Error Output: curl: (7) Failed connect to 10.7.60.26:8443; No route to host
● Run the following commands:
[root@docker-host ~]# oc cluster down
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 8443 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p udp --dport 53 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 53 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
[root@docker-host ~]# oc cluster up
Installing OpenShift OriginInstalling OpenShift Origin -- Server Information ... OpenShift server started. The server is accessible via web console at: https://10.7.60.26:8443 You are logged in as: User: developer Password: developer To login as administrator: oc login -u system:admin ● Installation is success, take note on Server URL [root@docker-host ~]# oc login -u system:admin Logged into "https://10.7.60.26:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project <projectname>': default kube-system * myproject openshift openshift-infra Using project "myproject". ● Test login from Command line using oc command
Installing OpenShift OriginInstalling OpenShift Origin -- Server Information ... OpenShift server started. The server is accessible via web console at: https://10.7.60.26:8443 You are logged in as: User: developer Password: developer To login as administrator: oc login -u system:admin
Installing OpenShift OriginInstalling OpenShift Origin -- Server Information ... OpenShift server started. The server is accessible via web console at: https://10.7.60.26:8443 You are logged in as: User: developer Password: developer To login as administrator: oc login -u system:admin
Creating a project
OpenShift: Test creating a new user and project
[root@docker-host ~]# oc login
Authentication required for https://10.7.60.26:8443 (openshift)
Username: test
Password: test
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
[root@docker-host ~]# oc new-project test-project
Now using project "test-project" on server "https://10.7.60.26:8443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
to build a new example application in Ruby.
Origin 1st App Deployment
OpenShift: Test creating a new app deployment
[root@docker-host ~]# oc new-app openshift/deployment-example
--> Found Docker image 1c839d8 (23 months old) from Docker Hub for "openshift/deployment-example"
    * An image stream will be created as "deployment-example:latest" that will track this image
    * This image will be deployed in deployment config "deployment-example"
    * Port 8080/tcp will be load balanced by service "deployment-example"
    * Other containers can access this service through the hostname "deployment-example"
    * WARNING: Image "openshift/deployment-example" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
    imagestream "deployment-example" created
    deploymentconfig "deployment-example" created
    service "deployment-example" created
--> Success
    Run 'oc status' to view your app.
OpenShift: Monitor your deployment
[root@docker-host ~]# oc status
In project test-project on server https://10.7.60.26:8443
svc/deployment-example - 172.30.96.17:8080
  dc/deployment-example deploys istag/deployment-example:latest
    deployment #1 deployed about a minute ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
Origin 1st App Deployment
OpenShift: List all resources from the 1st deployment
[root@docker-host ~]# oc get all
NAME                    DOCKER REPO                                       TAGS    UPDATED
is/deployment-example   172.30.1.1:5000/test-project/deployment-example   latest  2 minutes ago
NAME                    REVISION  DESIRED  CURRENT  TRIGGERED BY
dc/deployment-example   1         1        1        config,image(deployment-example:latest)
NAME                      DESIRED  CURRENT  READY  AGE
rc/deployment-example-1   1        1        1      2m
NAME                     CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
svc/deployment-example   172.30.96.17  <none>       8080/TCP  2m
NAME                            READY  STATUS   RESTARTS  AGE
po/deployment-example-1-jxctr   1/1    Running  0         2m
[root@docker-host ~]# oc get pod
NAME                         READY  STATUS   RESTARTS  AGE
deployment-example-1-jxctr   1/1    Running  0         3m
[root@docker-host ~]# oc get svc
NAME                 CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
deployment-example   172.30.96.17  <none>       8080/TCP  3m
Origin 1st App Deployment
OpenShift: Accessing the 1st deployment from the Web GUI
Log in to the OpenShift Origin web console using user/pass test/test
Choose and click test-project
Origin 1st App Deployment
OpenShift: the Web Console shows the 1st deployed app
Origin 1st App Deployment
OpenShift: from the docker ps output, you can see the new Docker container is running
[root@docker-host ~]# docker ps | grep deployment-example
be02326a13be openshift/deployment-example@sha256:ea913--truncated--ecf421f99 "/deployment v1" 15 minutes ago Up 15 minutes k8s_deployment-example.92c6c479_deployment-example-1-jxctr_test-project_d2549bbf-4c6d-11e7-9946-080027b2e552_6eb2de05
9989834d0c74 openshift/origin-pod:v1.5.1 "/pod" 15 minutes ago Up 15 minutes k8s_POD.bc05fe90_deployment-example-1-jxctr_test-project_d2549bbf-4c6d-11e7-9946-080027b2e552_55ba483b
Origin 1st App Deployment
OpenShift: Test your 1st app using curl from inside the container host
[root@docker-host ~]# oc status
In project test-project on server https://10.7.60.26:8443
svc/deployment-example - 172.30.96.17:8080
  dc/deployment-example deploys istag/deployment-example:latest
    deployment #1 deployed 23 minutes ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[root@docker-host ~]# curl 172.30.96.17:8080
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Deployment Demonstration</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
HTML{height:100%;}
BODY{font-family:Helvetica,Arial;display:flex;display:-webkit-flex;align-items:center;justify-content:center;-webkit-align-items:center;-webkit-box-align:center;-webkit-justify-content:center;height:100%;}
.box{background:#006e9c;color:white;text-align:center;border-radius:10px;display:inline-block;}
H1{font-size:10em;line-height:1.5em;margin:0 0.5em;}
H2{margin-top:0;}
</style>
</head>
<body>
<div class="box"><h1>v1</h1><h2></h2></div>
</body>
Origin 2nd App Deployment
Let's continue to the 2nd app deployment, but this time using the Web Console:
1) Click "Add to Project" on the Web Console
2) Choose "Import YAML / JSON"
3) Run the following command:
[root@docker-host ~]# oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml
[root@docker-host ~]# cat myapp.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: ruby-hello-world
    name: ruby-22-centos7
-------TRUNCATED-------------
Origin 2nd App Deployment
1) Copy and paste all output from "cat myapp.yaml" into the Web Console and click the "Create" button
Origin 2nd App Deployment
Check your deployment process from the Web Console
Expose your Apps
Expose your deployment using a domain name [Web Console screenshots]
Expose your Apps
Configure the static DNS file /etc/hosts on your laptop and allow ports 80 and 443 through the docker-host firewall
root@nobody:/media/yusuf/OS/KSEI# cat /etc/hosts | grep xip.io
10.7.60.26 deployment-example-test-project.10.7.60.26.xip.io
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT
Access your application from your laptop's browser:
http://deployment-example-test-project.10.7.60.26.xip.io/
Expose your Apps
Expose your second app using the command line
[root@docker-host ~]# oc expose service ruby-hello-world
route "ruby-hello-world" exposed
[root@docker-host ~]# oc get routes | grep ruby
ruby-hello-world   ruby-hello-world-test-project.10.7.60.26.xip.io   ruby-hello-world   8080-tcp   None
Origin 3rd App Deployment
The next way to build OpenShift apps is using Source-to-Image (S2I); here are the steps:
1) Click "Add to project"
2) Choose the language, for example: PHP
3) Use the latest PHP version, 7.0-latest, and click Select
4) Input your app name, for example: my-php-app
5) On Git Repo URL, input: https://github.com/OpenShiftDemos/os-sample-php
6) Click Create to start the deployment
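The same deployment can be done from the CLI instead of the web console; a sketch mirroring the steps above:
[root@docker-host ~]# oc new-app php:7.0~https://github.com/OpenShiftDemos/os-sample-php --name=my-php-app
[root@docker-host ~]# oc logs -f bc/my-php-app       # follow the S2I build as it runs
[root@docker-host ~]# oc expose service my-php-app   # make the app reachable through the router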
Origin 3rd App Deployment [screenshots]
Origin 3rd App Deployment
https://blog.shameerc.com/2016/08/my-docker-setup-ubuntu-php7-fpm-nginx-and-mariadb
OpenShift Other Labs
- Image Stream Templates
- Pipeline for CI/CD
Install the default image-streams
Enabling image streams using openshift-ansible
[root@docker-host]# mkdir /SOURCE ; cd /SOURCE
[root@docker-host SOURCE]# git clone https://github.com/openshift/openshift-ansible.git
Cloning into 'openshift-ansible'...
remote: Counting objects: 53839, done.
remote: Compressing objects: 100% (47/47), done.
remote: Total 53839 (delta 26), reused 43 (delta 11), pack-reused 53775
Receiving objects: 100% (53839/53839), 14.12 MiB | 930.00 KiB/s, done.
Resolving deltas: 100% (32741/32741), done.
[root@docker-host SOURCE]# cd openshift-ansible/roles/openshift_examples/files/examples/latest/
[root@docker-host latest]# oc login -u system:admin -n default
[root@docker-host latest]# oadm policy add-cluster-role-to-user cluster-admin admin
[root@docker-host latest]# for f in image-streams/image-streams-centos7.json; do cat $f | oc create -n openshift -f -; done
[root@docker-host latest]# for f in db-templates/*.json; do cat $f | oc create -n openshift -f -; done
[root@docker-host latest]# for f in quickstart-templates/*.json; do cat $f | oc create -n openshift -f -; done
Install Jenkins Persistent
Install Jenkins using the image stream from openshift-ansible
1) Click "Add to project"
2) In Browse Catalog, search for jenkins and select "Jenkins Persistent"
3) Leave everything default and click "Create"
Ref: https://github.com/openshift/origin/blob/master/examples/jenkins/README.md
Install Jenkins Persistent
CI/CD Demo – OpenShift Origin
This repository includes the infrastructure and pipeline definition for continuous delivery using Jenkins, Nexus and SonarQube on OpenShift. On every pipeline execution, the code goes through the following steps:
1) Code is cloned from Gogs, built, tested and analyzed for bugs and bad patterns
2) The WAR artifact is pushed to Nexus Repository Manager
3) A Docker image (tasks:latest) is built based on the Tasks application WAR artifact deployed on JBoss EAP 6
4) The Tasks Docker image is deployed in a fresh new container in the DEV project
5) If tests are successful, the DEV image is tagged with the application version (tasks:7.x) in the STAGE project
6) The staged image is deployed in a fresh new container in the STAGE project
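For flavor, here is a minimal sketch of the kind of JenkinsPipeline BuildConfig that drives such a demo; the repository URL, stage contents, and resource names are illustrative assumptions, not the demo's actual pipeline:
apiVersion: v1
kind: BuildConfig
metadata:
  name: tasks-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage('Build App') {
            git url: 'http://gogs:3000/gogs/openshift-tasks.git'   // hypothetical Gogs repo
            sh 'mvn install -DskipTests=true'
          }
          stage('Deploy DEV') {
            sh 'oc rollout latest dc/tasks -n dev'                 // fresh deployment in DEV
          }
          stage('Promote to STAGE') {
            sh 'oc tag dev/tasks:latest stage/tasks:latest'        // tag the image into STAGE
          }
        }
Creating this BuildConfig makes the pipeline appear under Builds → Pipelines in the web console, where each stage is visualized as it runs.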
CI/CD Demo – OpenShift Origin
The following diagram shows the steps included in the deployment pipeline:
CI/CD Demo – OpenShift Origin
Follow these instructions in order to create a local OpenShift cluster. Otherwise, using your current OpenShift cluster, create the following projects for the CI/CD components and the Dev and Stage environments:
1) # oc new-project dev --display-name="Tasks - Dev"
2) # oc new-project stage --display-name="Tasks - Stage"
3) # oc new-project cicd --display-name="CI/CD"
CI/CD Demo – OpenShift Origin
Jenkins needs to access the OpenShift API to discover slave images as well as to access container images. Grant the Jenkins service account enough privileges to invoke the OpenShift API for the created projects:
1) # oc policy add-role-to-user edit system:serviceaccount:cicd:jenkins -n dev
2) # oc policy add-role-to-user edit system:serviceaccount:cicd:jenkins -n stage
CI/CD Demo – OpenShift Origin
1) Create the CI/CD components based on the provided template:
oc process -f cicd-template.yaml | oc create -f -
2) To use custom project names, change cicd, dev and stage in the above commands to your own names and use the following to create the demo:
oc process -f cicd-template.yaml -v DEV_PROJECT=dev-project-name -v STAGE_PROJECT=stage-project-name | oc create -f - -n cicd-project-name
Note: you need ~8GB memory for running this demo.
Ref: https://github.com/isnuryusuf/openshift-cd-demo
CI/CD Demo – OpenShift Origin [screenshots]
Thank you for coming
More about me:
- https://www.linkedin.com/in/yusuf-hadiwinata-sutandar-3017aa41/
- https://www.facebook.com/yusuf.hadiwinata
- https://github.com/isnuryusuf/
Join me on:
- "Linux administrators" & "CentOS Indonesia Community" Facebook groups
- Docker UG Indonesia: https://t.me/dockerid
References
• openshiftenterprise3-160414081118.pptx
• 2017-01-18_-_RedHat_at_CERN_-_Web_application_hosting_with_Openshift_and_Docker.pptx
• DO280 OpenShift Container Platform Administration I
• https://github.com/openshift/origin/
Other Useful Links
• https://ryaneschinger.com/blog/rolling-updates-kubernetes-replication-controllers-vs-deployments/
• https://kubernetes.io/docs/concepts/storage/persistent-volumes/
• http://blog.midokura.com/2016/08/kubernetes-ready-networking-done-midonet-way/
• https://blog.openshift.com/red-hat-chose-kubernetes-openshift/
• https://blog.openshift.com/chose-not-join-cloud-foundry-foundation-recommendations-2015/
• https://kubernetes.io/docs/concepts/workloads/pods/pod/
• https://blog.openshift.com/enterprise-ready-kubernetes/

Journey to the devops automation with docker kubernetes and openshift

  • 1.
    Yusuf Hadiwinata Sutandar LinuxGeek,OpenSourceEnthusiast,SecurityHobbies Journeyto the Devops AutomationJourney to the Devops Automation with Docker, Kuberneteswith Docker, Kubernetes and OpenShiftand OpenShift
  • 2.
    AgendaAgenda ● Container & DockerIntroductionContainer & Docker Introduction ● Installing Docker Container & ManagementInstalling Docker Container & Management ● Managing Docker with PortainerManaging Docker with Portainer ● Managing Docker with Openshift OriginManaging Docker with Openshift Origin ● Build Simple Docker ApplicationBuild Simple Docker Application ● DiscussionDiscussion
  • 3.
    Traditional DevelopmentTraditional Development Inthe world of business, a "silo" is anyIn the world of business, a "silo" is any system within an organization that is closedsystem within an organization that is closed off to other systems. Silos tend to constructoff to other systems. Silos tend to construct themselves inadvertently, as differentthemselves inadvertently, as different managers take on various priorities andmanagers take on various priorities and responsibilities within the organization. Overresponsibilities within the organization. Over time, departments gradually focus more andtime, departments gradually focus more and more inward and pay less attention to whatmore inward and pay less attention to what everyone else is doing.everyone else is doing.
  • 4.
  • 5.
  • 6.
    How DevOps BreaksDown SilosHow DevOps Breaks Down Silos DevOps Help Improve Collaboration BetweenDevOps Help Improve Collaboration Between Developers and IT Operations – DepoymentDevelopers and IT Operations – Depoyment Automation & Self ServicesAutomation & Self Services
  • 7.
    On the otherhand, DevOps culture is anti-siloOn the other hand, DevOps culture is anti-silo by its very nature, while still retaining theby its very nature, while still retaining the subject matter experts that are so crucial tosubject matter experts that are so crucial to the software development process. DevOpsthe software development process. DevOps requires developers, QA testers, operationsrequires developers, QA testers, operations engineers, and product managers to workengineers, and product managers to work closely together from the very beginning,closely together from the very beginning, which means that any existing silos will havewhich means that any existing silos will have to disappear quickly.to disappear quickly. Credit: Amit KumarCredit: Amit Kumar
  • 8.
    DevOps Tools &SoftwareDevOps Tools & Software
  • 9.
    Brief Intro toContainer & DockerBrief Intro to Container & Docker History of ContainerHistory of Container Docker IntroductionDocker Introduction
  • 10.
    The ProblemThe Problem CargoTransport 1960sCargo Transport 1960s
  • 11.
  • 12.
  • 13.
    The SolutionThe Solution 90%of all cargo now shipped in a90% of all cargo now shipped in a standard containerstandard container Order of magnitude reduction in costOrder of magnitude reduction in cost and time to load and unload ships,and time to load and unload ships, trains, truckstrains, trucks
  • 14.
  • 15.
    The App DeploymentProblemThe App Deployment Problem
  • 16.
    The App DeploymentSolutionThe App Deployment Solution
  • 17.
    The App DeploymentSolutionThe App Deployment Solution
  • 18.
    Container TechnologyContainer Technology Oneway of looking at containers is as improved chroot jails. Containers allow an operating system (OS) process (or a process tree) to run isolated from other processes hosted by the same OS. Through the use of Linux kernel namespaces, it is possible to restrict a process view of: – Other processes (including the pid number space) – File systems – User and group IDs – IPC channels – Devices – Networking
  • 19.
    Container TechnologyContainer Technology OtherLinux kernel features complement the process isolation provided by kernel namespaces: – Cgroups limit the use of CPU, RAM, virtual memory, and I/O bandwidth, among other hardware and kernel resources. – Capabilities assign partial administrative faculties; for example, enabling a process to open a low network port (<1024) without allowing it to alter routing tables or change file ownership. – SELinux enforces mandatory access policies even if the code inside the container finds a way to break its isolation
  • 20.
    Container TechnologyContainer Technology ImageBImage A Images & Containers 25 ●Docker “Image” • Unified Packaging format • Like “war” or “tar.gz” • For any type of Application • Portable ●Docker “Container” • Runtime • Isolation Hardware Container APP A Image Host Minimal OS Container APP B Image Container APP C Image Docker Engine Docker Registry RHEL JDK Jboss-EAP Libs A Libs B App A App B docker pull <image>
  • 21.
    Major Advantages ofContainersMajor Advantages of Containers ● Low hardware footprint – Uses OS internal features to create an isolated environment where resources are managed using OS facilities such as namespaces and cgroups. This approach minimizes the amount of CPU and memory overhead compared to a virtual machine hypervisor. Running an application in a VM is a way to create isolation from the running environment, but it requires a heavy layer of services to support the same low hardware footprint isolation provided by containers. ● Environment isolation – Works in a closed environment where changes made to the host OS or other applications do not affect the container. Because the libraries needed by a container are self-contained, the application can run without disruption. For example, each application can exist in its own container with its own set of libraries. An update made to one container does not affect other containers, which might not work with the update.
  • 22.
    Major Advantages ofContainersMajor Advantages of Containers cont..cont..● Quick deployment – Deploys any container quickly because there is no need for a full OS install or restart. Normally, to support the isolation, a new OS installation is required on a physical host or VM, and any simple update might require a full OS restart. A container only requires a restart without stopping any services on the host OS. ● Multiple environment deployment – In a traditional deployment scenario using a single host, any environment differences might potentially break the application. Using containers, however, the differences and incompatibilities are mitigated because the same container image is used. ● Reusability – The same container can be reused by multiple applications without the need to set up a full OS. A database container can be used to create a set of tables for a software application, and it can be quickly destroyed and recreated without the need to run a set of housekeeping tasks. Additionally, the same database container can be used by the production environment to deploy an application.
  • 23.
    Why are ContainersWhyare Containers Lightweight?Lightweight?containers ascontainers as lightweight VMslightweight VMs
  • 24.
    Is not Virtualizaiton:)Is not Virtualizaiton :) Linux Kernel App1 App2 App3 Isolation, not Virtualization • Kernel Namespaces • Process • Network • IPC • Mount • User • Resource Limits • Cgroups • Security • SELinux
  • 25.
    Container SolutionContainer Solution VirtualMachine and Container Complement each otherVirtual Machine and Container Complement each other Containers ● Containers run as isolated processes in user space of host OS ● They share the kernel with other container (container-processes) ● Containers include the application and all of its dependencies ● Not tied to specific infrastructure Virtual Machine ● Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system ● Each Guest OS has its own Kernel and user space
Container Problem – Containers before Docker
● No standardized exchange format. (No, a rootfs tarball is not a format!)
● Containers are hard to use for developers. (Where’s the equivalent of docker run debian?)
● No re-usable components, APIs, tools. (At best: VM abstractions, e.g. libvirt.)
Analogy:
● Shipping containers are not just steel boxes.
● They are steel boxes that are a standard size, with the same hooks and holes.
Docker Solution – Containers after Docker
● Standardize the container format, because containers were not portable.
● Make containers easy to use for developers.
● Emphasis on re-usable components, APIs, and an ecosystem of standard tools.
● Improvement over ad-hoc, in-house, specific tools.
Docker Solution – Docker Architecture
Docker is one of the container implementations available for deployment and is supported by companies such as Red Hat in their Red Hat Enterprise Linux Atomic Host platform. Docker Hub provides a large set of containers developed by the community.
Docker Solution – Docker Core Elements
● Images
● Images are read-only templates that contain a runtime environment, including application libraries and applications. Images are used to create containers. Images can be created, updated, or downloaded for immediate consumption.
● Registries
● Registries store images for public or private use. The best-known public registry is Docker Hub, which stores many images developed by the community, but private registries can be created to support internal image development at a company's discretion. This course runs on a private registry in a virtual machine where all the required images are stored for faster consumption.
● Containers
● Containers are segregated user-space environments for running applications isolated from other applications sharing the same host OS.
Basics of a Docker System
Ecosystem Support
● DevOps tools
● Integrations with Chef, Puppet, Jenkins, Travis, Salt, Ansible, and more
● Orchestration tools
● Mesos, Heat, and others
● Shipyard and other tools purpose-built for Docker
● Applications
● Thousands of Dockerized applications available at index.docker.io
Ecosystem Support (cont.)
● Operating systems
● Virtually any distribution with a 2.6.32+ kernel
● Red Hat/Docker collaboration to make Docker work across RHEL 6.4+, Fedora, and other members of the family (2.6.32+)
● CoreOS – a small core OS purpose-built for Docker
● OpenStack
● Docker integration into Nova (and compatibility with Glance, Horizon, etc.) accepted for the Havana release
● Private PaaS
● OpenShift, Solum (Rackspace, OpenStack), others TBA
● Public PaaS
● Deis, Voxoz, Cocaine (Yandex), Baidu PaaS
● Public IaaS
● Native support in Rackspace, Digital Ocean, and more
● AMI (or equivalent) available for AWS and others
What IT Says about Docker
Developers say: Build Once, Run Anywhere
Operators say: Configure Once, Run Anything
Why Developers Care
Developers say: Build Once, Run Anywhere
A clean, safe, hygienic, portable runtime environment for your app.
No worries about missing dependencies, packages, and other pain points during subsequent deployments.
Run each app in its own isolated container, so you can run various versions of libraries and other dependencies for each app without worrying.
Automate testing, integration, packaging... anything you can script.
Reduce/eliminate concerns about compatibility on different platforms, either your own or your customers'.
Cheap, zero-penalty containers to deploy services. A VM without the overhead of a VM.
Instant replay and reset of image snapshots.
Why Administrators Care
Operators say: Configure Once, Run Anything
Make the entire lifecycle more efficient, consistent, and repeatable.
Increase the quality of code produced by developers.
Eliminate inconsistencies between development, test, production, and customer environments.
Support segregation of duties.
Significantly improve the speed and reliability of continuous deployment and continuous integration systems.
Because containers are so lightweight, address significant performance, cost, deployment, and portability issues normally associated with VMs.
Managing Docker Containers
● Installing Docker
● Create/start/stop/remove containers
● Inspect containers
● Interact, commit new images
Lab: Installing Docker - Requirements
● Requirements:
● CentOS 7.3 minimal install
● Latest updates with “yum -y update”
● 16 GB OS disk
● 16 GB unpartitioned disk for Docker storage
● 2 GB RAM and 2 vCPUs
● Bridged network for connecting to the Internet and access from the host (laptop)
● Snapshot the VM
Lab: Installing Docker - PreSetup
● Set the hostname in the /etc/hosts file and on the server:
[root@docker-host ~]# cat /etc/hosts | grep docker-host
192.168.0.6 docker-host
[root@docker-host ~]# hostnamectl set-hostname docker-host
[root@docker-host ~]# hostname
docker-host
● Install the needed packages and the latest Docker:
[root@docker-host ~]# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
[root@docker-host ~]# curl -fsSL https://get.docker.com/ | sh
● Edit the /etc/sysconfig/docker file and add --insecure-registry 172.30.0.0/16 to the OPTIONS parameter (only when installing Docker from the distribution repo):
[root@docker-host ~]# sed -i '/OPTIONS=.*/cOPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' /etc/sysconfig/docker
[root@docker-host ~]# systemctl is-active docker ; systemctl enable docker ; systemctl start docker
● Pull your first container from the Internet:
[root@docker-host ~]# docker container run -ti ubuntu bash
Lab: Installing Docker - PreSetup
● When using the latest Docker version, configure the following:
[root@docker-host ~]# vim /usr/lib/systemd/system/docker.service
Edit ExecStart=/usr/bin/dockerd to ExecStart=/usr/bin/dockerd --insecure-registry 172.30.0.0/16 --insecure-registry 192.168.1.0/24
[root@docker-host ~]# systemctl daemon-reload ; systemctl restart docker
● Optional configuration for a private registry:
[root@docker-host ~]# vim /etc/docker/daemon.json
Add { "insecure-registries" : ["docker-registry:5000"] }
[root@docker-host ~]# systemctl restart docker
● Pull your first container from the private registry:
[root@docker-host ~]# docker container run -ti docker-registry:5000/ubuntu bash
Installing Docker – Setting Docker Storage
● Set up a volume group and LVM thin pool on a user-specified block device:
[root@docker-host ~]# echo DEVS=/dev/sdb >> /etc/sysconfig/docker-storage-setup
[root@docker-host ~]# systemctl restart docker
● By default, docker-storage-setup looks for free space in the root volume group and creates an LVM thin pool. Hence you can leave free space in the root volume group during system installation, and starting Docker will automatically set up a thin pool and use it.
● LVM thin pool in a user-specified volume group:
[root@docker-host ~]# echo VG=docker-vg >> /etc/sysconfig/docker-storage-setup
[root@docker-host ~]# systemctl restart docker
● https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/managing_storage_with_docker_formatted_containers
● The above sets up a volume group and LVM thin pool on a user-specified block device for Docker version 1.12.
Lab: 1st time Playing w/ Docker
[root@docker-host ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@docker-host ~]# docker run -t centos bash
Unable to find image 'centos:latest' locally
Trying to pull repository docker.io/library/centos ...
sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9: Pulling from docker.io/library/centos
d5e46245fe40: Pull complete
Digest: sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9
Status: Downloaded newer image for docker.io/centos:latest
[root@docker-host ~]# docker images --all
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/centos latest 3bee3060bfc8 46 hours ago 192.5 MB
[root@docker-host ~]# docker exec -it 60fec4b9a9bf bash
[root@60fec4b9a9bf /]# ps ax
PID TTY STAT TIME COMMAND
1 ? Ss+ 0:00 bash
29 ? Ss 0:00 bash
42 ? R+ 0:00 ps ax
Docker Management Commands
Command – Description
docker create image [command] – create the container
docker run image [command] – create + start the container
docker start container... – start the container
docker stop container... – gracefully stop the container
docker kill container... – kill (SIGKILL) the container
docker restart container... – stop + start the container
docker pause container... – suspend the container
docker unpause container... – resume the container
docker rm [-f] container... – destroy the container
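These commands chain into a simple lifecycle. A minimal walk-through (the container name "demo" and the sleep command are arbitrary examples):
[root@docker-host ~]# docker create --name demo centos sleep 600   # created, not yet running
[root@docker-host ~]# docker start demo                            # now running
[root@docker-host ~]# docker pause demo ; docker unpause demo      # suspend and resume the process tree
[root@docker-host ~]# docker stop demo                             # graceful: SIGTERM, then SIGKILL after a timeout
[root@docker-host ~]# docker rm demo                               # destroy the stopped container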
docker run - Run a container
docker run [ options ] image [ arg0 arg1 ... ]
● Creates a container and starts it
● The container filesystem is initialised from image
● arg0..argN is the command run inside the container (as PID 1)
[root@docker-host ~]# docker run centos /bin/hostname
f0d0720bd373
[root@docker-host ~]# docker run centos date +%H:%M:%S
17:10:13
[root@docker-host ~]# docker run centos true ; echo $?
0
[root@docker-host ~]# docker run centos false ; echo $?
1
docker run - Foreground mode vs. Detached mode
● Foreground mode is the default
● stdout and stderr are redirected to the terminal
● docker run propagates the exit code of the main process
● With -d, the container is run in detached mode:
● displays the ID of the container
● returns immediately
[root@docker-host ~]# docker run centos date
Wed Jun 7 15:35:48 UTC 2017
[root@docker-host ~]# docker run -d centos date
48b66ad5fc30c468ca0b28ff83dfec0d6e001a2f53e3d168bca754ea76d2bc04
[root@docker-host ~]# docker logs 48b66a
Tue Jan 20 17:32:16 UTC 2015
docker run - Interactive mode
● By default containers are non-interactive
● stdin is closed immediately
● terminal signals are not forwarded
$ docker run -t debian bash
root@6fecc2e8ab22:/# date
^C
$
● With -i the container runs interactively
● stdin is usable
● terminal signals are forwarded to the container
$ docker run -t -i debian bash
root@78ff08f46cdb:/# date
Tue Jan 20 17:52:01 UTC 2015
root@78ff08f46cdb:/# ^C
root@78ff08f46cdb:/#
docker run - Set the container name
● The --name option assigns a name to the container (by default a random adjective_name is generated)
[root@docker-host ~]# docker run -d -t debian
da005df0d3aca345323e373e1239216434c05d01699b048c5ff277dd691ad535
[root@docker-host ~]# docker run -d -t --name blahblah debian
0bd3cb464ff68eaf9fc43f0241911eb207fefd9c1341a0850e8804b7445ccd21
[root@docker-host ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED .. NAMES
0bd3cb464ff6 debian:7.5 "/bin/bash" 6 seconds ago blahblah
da005df0d3ac debian:7.5 "/bin/bash" About a minute ago focused_raman
[root@docker-host ~]# docker stop blahblah focused_raman
● Note: names must be unique
[root@docker-host ~]# docker run --name blahblah debian true
2015/01/20 19:31:21 Error response from daemon: Conflict, The name blahblah is already assigned to 0bd3cb464ff6. You have to delete (or rename) that container to be able to assign blahblah to a container again.
Inspecting the container
Command – Description
docker ps – list running containers
docker ps -a – list all containers
docker logs [-f] container – show the container output (stdout+stderr)
docker top container [ps options] – list the processes running inside the container
docker diff container – show the differences with the image (modified files)
docker inspect container... – show low-level info (in JSON format)
Interacting with the container
Command – Description
docker attach container – attach to a running container (stdin/stdout/stderr)
docker cp container:path hostpath|- – copy files from the container
docker cp hostpath|- container:path – copy files into the container
docker export container – export the content of the container (tar archive)
docker exec container args... – run a command in an existing container (useful for debugging)
docker wait container – wait until the container terminates and return the exit code
docker commit container image – commit a new docker image (snapshot of the container)
Lab: Docker commit example
[root@docker-host ~]# docker run --name my-container -t -i debian
root@3b397d383faf:/# cat >> /etc/bash.bashrc <<EOF
> echo 'hello!'
> EOF
root@3b397d383faf:/# exit
[root@docker-host ~]# docker start --attach my-container
my-container
hello!
root@3b397d383faf:/# exit
[root@docker-host ~]# docker diff my-container
C /etc
C /etc/bash.bashrc
A /.bash_history
C /tmp
[root@docker-host ~]# docker commit my-container hello
a57e91bc3b0f5f72641f19cab85a7f3f860a1e5e9629439007c39fd76f37c5dd
[root@docker-host ~]# docker stop my-container; docker rm my-container
my-container
[root@docker-host ~]# docker run --rm -t -i hello
hello!
root@386ed3934b44:/# exit
[root@docker-host ~]# docker images --all
REPOSITORY TAG IMAGE ID CREATED SIZE
debian latest a25c1eed1c6f Less than a second ago 123MB
hello latest 52442a43a78b 59 seconds ago 123MB
centos latest 3bee3060bfc8 46 hours ago 193MB
ubuntu latest 7b9b13f7b9c0 4 days ago 118MB
Inputs/Outputs
● External volumes (persistent data)
● Devices & links
● Publishing ports (NAT)
docker run - Mount external volumes
docker run -v /hostpath:/containerpath[:ro] ...
● -v mounts the location /hostpath from the host filesystem at the location /containerpath inside the container
● With the “:ro” suffix, the mount is read-only
● Purposes:
● store persistent data outside the container
● provide inputs: data, config files, ... (read-only mode)
● inter-process communication (unix sockets, named pipes)
Lab: Mount examples
● Persistent data
[root@docker-host ~]# docker run --rm -t -i -v /tmp/persistent:/persistent debian
root@0aeedfeb7bf9:/# echo "blahblah" >/persistent/foo
root@0aeedfeb7bf9:/# exit
[root@docker-host ~]# cat /tmp/persistent/foo
blahblah
[root@docker-host ~]# docker run --rm -t -i -v /tmp/persistent:/persistent debian
root@6c8ed008c041:/# cat /persistent/foo
blahblah
● Inputs (read-only volume)
[root@docker-host ~]# mkdir /tmp/inputs
[root@docker-host ~]# echo hello > /tmp/inputs/bar
[root@docker-host ~]# docker run --rm -t -i -v /tmp/inputs:/inputs:ro debian
root@05168a0eb322:/# cat /inputs/bar
hello
root@05168a0eb322:/# touch /inputs/foo
touch: cannot touch `/inputs/foo': Read-only file system
Lab: Mount examples continued ...
● Named pipe
[root@docker-host ~]# mkfifo /tmp/fifo
[root@docker-host ~]# docker run -d -v /tmp/fifo:/fifo debian sh -c 'echo blah blah > /fifo'
ff0e44c25e10d516ce947eae9168060ee25c2a906f62d63d9c26a154b6415939
[root@docker-host ~]# cat /tmp/fifo
blah blah
● Unix socket
[root@docker-host ~]# docker run --rm -t -i -v /dev/log:/dev/log debian
root@56ec518d3d4e:/# logger blah blah blah
root@56ec518d3d4e:/# exit
[root@docker-host ~]# cat /var/log/messages | grep blah
Oct 17 15:39:39 docker-host root: blah blah blah
docker run - Inter-container links (legacy links)
● Containers cannot be assigned a static IP address (by design) → service discovery is a must
● Docker “links” are the most basic way to discover a service:
docker run --link ctr:alias ...
● → container ctr will be known as alias inside the new container
[root@docker-host ~]# docker run --name my-server debian sh -c 'hostname -i && sleep 500' &
172.17.0.4
[root@docker-host ~]# docker run --rm -t -i --link my-server:srv debian
root@d752180421cc:/# ping srv
PING srv (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: icmp_seq=0 ttl=64 time=0.195 ms
User-defined networks (since v1.9.0)
● By default new containers are connected to the main network (named “bridge”, 172.17.0.0/16)
● The user can create additional networks:
docker network create NETWORK
● Newly created containers can be connected to one network:
docker run -t --name test-network --net=NETWORK debian
● A container may be dynamically attached to/detached from any network:
docker inspect test-network | grep -i NETWORK
docker network list
docker network connect NETWORK test-network
docker network connect bridge test-network
docker network disconnect NETWORK test-network
● Networks are isolated from each other; communication is possible by attaching a container to multiple networks (see the discovery sketch below)
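A short discovery sketch on a user-defined network (assuming Docker 1.10+, where an embedded DNS server resolves container names on user-defined networks; the names appnet and web are arbitrary):
[root@docker-host ~]# docker network create appnet
[root@docker-host ~]# docker run -d --name web --net=appnet nginx
[root@docker-host ~]# docker run --rm -t -i --net=appnet debian ping -c1 web   # "web" resolves via the embedded DNS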
docker run - Publish a TCP port
● Containers are deployed in a private network; they are not reachable from the outside (unless a redirection is set up)
docker run -p [ipaddr:]hostport:containerport
docker run -t -p 9000:9000 debian
● → redirects incoming connections to TCP port hostport of the host to TCP port containerport of the container
● The listening socket binds to 0.0.0.0 (all interfaces) by default, or to ipaddr if given
Publish example
● Bind to all host addresses:
[root@docker-host ~]# docker run -d -p 80:80 nginx
52c9105e1520980d49ed00ecf5f0ca694d177d77ac9d003b9c0b840db9a70d62
[root@docker-host ~]# docker inspect 52c9105e1520 | grep IPAddress
[root@docker-host ~]# wget -nv http://localhost/
2016-01-12 18:32:52 URL:http://localhost/ [612/612] -> "index.html" [1]
[root@docker-host ~]# wget -nv http://172.17.0.2/
2016-01-12 18:33:14 URL:http://172.17.0.2/ [612/612] -> "index.html" [1]
● Bind to 127.0.0.1:
[root@docker-host ~]# docker run -d -p 127.0.0.1:80:80 nginx
4541b43313b51d50c4dc2722e741df6364c5ff50ab81b828456ca55c829e732c
[root@docker-host ~]# wget -nv http://localhost/
2016-01-12 18:37:10 URL:http://localhost/ [612/612] -> "index.html.1" [1]
[root@docker-host ~]# wget http://172.17.0.2/
--2016-01-12 18:38:32-- http://172.17.0.2/
Connecting to 172.17.42.1:80... failed: Connection refused.
Managing docker images
● Docker images
● Image management commands
● Example: images & containers
Docker images
A docker image is a snapshot of a filesystem + some metadata:
● immutable
● copy-on-write storage
● for instantiating containers
● for creating new versions of the image (multiple layers)
● identified by a unique hex ID (hashed from the image content)
● may be tagged with a human-friendly name, e.g.: debian:wheezy, debian:jessie, debian:latest
Image management commands
Command – Description
docker images – list all local images
docker history image – show the image history (list of ancestors)
docker inspect image... – show low-level info (in JSON format)
docker tag image tag – tag an image
docker commit container image – create an image (from a container)
docker import url|- [tag] – create an image (from a tarball)
docker rmi image... – delete images
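A short usage example chaining these commands, reusing the hello image committed earlier (the v1 tag is an arbitrary example):
[root@docker-host ~]# docker tag hello hello:v1
[root@docker-host ~]# docker history hello:v1     # shows the committed layer on top of debian
[root@docker-host ~]# docker inspect hello:v1     # full metadata in JSON
[root@docker-host ~]# docker rmi hello:v1         # removes the tag; layers remain while hello:latest references them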
Example: images & containers
[Diagram: starting point: an empty local image store (scratch).]
Example: images & containers – docker pull image
[Diagram: image layers jk2384jkl102 → 0273hn18si91 → 16297412a12c are pulled; the top layer is tagged img:latest.]
Example: images & containers – docker run --name ctr2 img
[Diagram: containers ctr1 and ctr2 run on top of img:latest.]
Example: images & containers – docker run --name ctr3 img
[Diagram: containers ctr1, ctr2, and ctr3 run on top of img:latest.]
Example: images & containers – docker rm ctr1
[Diagram: ctr2 and ctr3 remain on top of img:latest.]
Example: images & containers – docker commit ctr2
[Diagram: a new layer as2889klsy30 is created from ctr2 on top of img:latest.]
Example: images & containers – docker commit ctr2 img:bis
[Diagram: another new layer 7172ahsk9212 is created from ctr2 and tagged img:bis.]
Example: images & containers – docker run --name ctr4 img
[Diagram: ctr4 runs on top of img:latest, alongside ctr2 and ctr3.]
Example: images & containers – docker run --name ctr5 img:bis
[Diagram: ctr5 runs on top of img:bis, alongside ctr2, ctr3, and ctr4.]
Example: images & containers – docker rm ctr2 ctr3
[Diagram: ctr4 and ctr5 remain; the committed layers persist.]
Example: images & containers – docker commit ctr4 img
[Diagram: a new layer abcd1234efgh is created from ctr4; img:latest now points to it.]
Example: images & containers – docker run --name ctr6 img
[Diagram: ctr6 runs on top of the new img:latest, alongside ctr4 and ctr5.]
Example: images & containers – docker rm ctr4
[Diagram: ctr5 and ctr6 remain.]
Example: images & containers – docker rm ctr6
[Diagram: only ctr5 remains.]
Example: images & containers – docker rmi img
[Diagram: the img:latest tag and its unreferenced layers (abcd1234efgh, as2889klsy30) are removed; img:bis (7172ahsk9212) and ctr5 remain on the base layers.]
Example: images & containers – docker rmi img:bis
Error: img:bis is referenced by ctr5
[Diagram: nothing is removed; img:bis and ctr5 remain.]
Example: images & containers – docker rmi -f img:bis
[Diagram: the img:bis tag is removed, but layer 7172ahsk9212 is kept while ctr5 still uses it.]
Example: images & containers – docker rm ctr5
[Diagram: no containers remain; the unreferenced layers jk2384jkl102, 0273hn18si91, 16297412a12c, and 7172ahsk9212 are left dangling.]
Image tags
● A docker tag is made of two parts: “REPOSITORY:TAG”
● The TAG part identifies the version of the image. If not provided, the default is “:latest”
[root@docker-host ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
debian latest a25c1eed1c6f Less than a second ago 123MB
hello latest 52442a43a78b 13 minutes ago 123MB
centos latest 3bee3060bfc8 46 hours ago 193MB
ubuntu latest 7b9b13f7b9c0 5 days ago 118MB
nginx latest 958a7ae9e569 7 days ago 109MB
Image transfer commands
Using the registry API:
docker pull repo[:tag]... – pull an image/repo from a registry
docker push repo[:tag]... – push an image/repo to a registry
docker search text – search an image on the official registry
docker login ... – log in to a registry
docker logout ... – log out from a registry
Manual transfer:
docker save repo[:tag]... – export an image/repo as a tarball
docker load – load images from a tarball
docker-ssh ... – proposed script to transfer images between two daemons over ssh
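A manual transfer sketch for hosts without registry access (the tarball name and the target host other-host are hypothetical):
[root@docker-host ~]# docker save -o hello.tar hello:latest
[root@docker-host ~]# scp hello.tar other-host:/tmp/
[root@other-host ~]# docker load -i /tmp/hello.tar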
Lab: Image creation from a container
Let’s start by running an interactive shell in an ubuntu container.
[root@docker-host]# curl -fsSL https://get.docker.com/ | sh
[root@docker-host]# systemctl start docker
[root@docker-host]# systemctl status docker
[root@docker-host]# systemctl enable docker
[root@docker-host ~]# docker run -ti ubuntu bash
Install the figlet package in this container:
root@880998ce4c0f:/# apt-get update -y ; apt-get install figlet
root@880998ce4c0f:/# exit
Get the ID of this container (do not forget the -a option, as non-running containers are not listed without it):
[root@docker-host]# docker ps -a
Run the following command, using the ID retrieved, in order to commit the container and create an image out of it:
[root@docker-host ~]# docker commit 880998ce4c0f
sha256:1a769da2b98b04876844f96594a92bd708ca27ee5a8868d43c0aeb5985671161
Lab: Image creation from a container
Once it has been committed, we can see the newly created image in the list of available images:
[root@docker-host ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 1a769da2b98b 59 seconds ago 158MB
ubuntu latest 7b9b13f7b9c0 6 days ago 118MB
From the previous command, get the ID of the newly created image and tag it so it’s named tag-intra:
[root@docker-host ~]# docker image tag 1a769da2b98b tag-intra
[root@docker-host ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
tag-intra latest 1a769da2b98b 3 minutes ago 158MB
ubuntu latest 7b9b13f7b9c0 6 days ago 118MB
As figlet is present in our tag-intra image, the command returns the following output:
[root@docker-host ~]# docker container run tag-intra figlet hello
 _          _ _
| |__   ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | |  __/ | | (_) |
|_| |_|\___|_|_|\___/
Docker builder
● What is the Docker builder
● Dockerfile
● Docker Compose
● Introduction to Kompose
Docker builder
● Docker’s builder relies on:
● a DSL describing how to build an image
● a cache for storing previous builds and enabling quick iterations
● The builder input is a context, i.e. a directory containing:
● a file named Dockerfile which describes how to build the container
● possibly other files to be used during the build
Dockerfile format
● comments start with “#”
● commands fit on a single line (possibly continued with a trailing backslash “\”)
● the first command must be a FROM (indicates the parent image, or scratch to start from scratch)
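A minimal sketch illustrating these format rules (the image and package are arbitrary examples):
# comments start with "#"
# the first command must be a FROM
FROM debian:latest
# a long command continued with a trailing backslash
RUN apt-get update && \
    apt-get -y install curl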
Build an image
docker build [ -t tag ] path
● Builds an image from the context located at path and optionally tags it as tag
● The command:
1. makes a tarball from the content of path
2. uploads the tarball to the docker daemon, which will:
2.1 execute the content of the Dockerfile, committing an intermediate image after each command
2.2 (if requested) tag the final image as tag
Dockerfile Description
● Each Dockerfile is a script, composed of various commands (instructions) and arguments listed successively, to automatically perform actions on a base image in order to create (or form) a new one. Dockerfiles help organize things and greatly simplify deployments from start to finish.
● Dockerfiles begin with defining an image FROM which the build process starts, followed by various other methods, commands, and arguments (or conditions), which, in return, produce a new image to be used for creating docker containers.
● They can be used by providing a Dockerfile's content - in various ways - to the docker daemon to build an image (as explained in the "How To Use" section).
Dockerfile Commands (Instructions)
● ADD
● The ADD command takes two arguments: a source and a destination. It basically copies the files from the source on the host into the container's own filesystem at the set destination. If, however, the source is a URL (e.g. http://github.com/user/file/), then the contents of the URL are downloaded and placed at the destination.
● Example:
# Usage: ADD [source directory or URL] [destination directory]
ADD /my_app_folder /my_app_folder
Dockerfile Commands (Instructions)
● CMD
● The command CMD, similarly to RUN, can be used for executing a specific command. However, unlike RUN, it is not executed during build, but when a container is instantiated using the image being built. Therefore, it should be considered as the initial, default command that gets executed (i.e. run) on creation of containers based on the image.
● To clarify: an example for CMD would be running an application upon creation of a container which was already installed using RUN (e.g. RUN apt-get install ...) inside the image. This default application execution command that is set with CMD becomes the default and is replaced by any command passed during container creation.
● Example:
# Usage 1: CMD application "argument", "argument", ..
CMD "echo" "Hello docker!"
Dockerfile Commands (Instructions)
● ENTRYPOINT
● ENTRYPOINT sets the concrete default application that is used every time a container is created using the image. For example, if you have installed a specific application inside an image and you will use this image to only run that application, you can state it with ENTRYPOINT, and whenever a container is created from that image, your application will be the target.
● If you couple ENTRYPOINT with CMD, you can remove the "application" from CMD and just leave the "arguments", which will be passed to the ENTRYPOINT.
● Example:
# Usage: ENTRYPOINT application "argument", "argument", ..
# Remember: arguments are optional. They can be provided by CMD
# or during the creation of a container.
ENTRYPOINT echo
# Usage example with CMD:
# Arguments set with CMD can be overridden during *run*
CMD "Hello docker!"
ENTRYPOINT echo
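One caveat worth noting: in the shell form shown above, CMD arguments are not appended to the ENTRYPOINT. The exec (JSON array) form does combine them; a small sketch:
# exec form: the CMD array is appended to the ENTRYPOINT array at run time
ENTRYPOINT ["echo"]
CMD ["Hello docker!"]
# docker run <image>          -> prints "Hello docker!"
# docker run <image> Hi there -> prints "Hi there"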
Dockerfile Commands (Instructions)
● ENV
● The ENV command is used to set environment variables (one or more). These variables consist of “key value” pairs which can be accessed within the container by scripts and applications alike. This functionality of docker offers an enormous amount of flexibility for running programs.
● Example:
# Usage: ENV key value
ENV SERVER_WORKS 4
● EXPOSE
● The EXPOSE command is used to associate a specified port to enable networking between the running process inside the container and the outside world (i.e. the host).
● Example:
# Usage: EXPOSE [port]
EXPOSE 8080
Dockerfile Commands (Instructions)
● FROM
● The FROM directive is probably the most crucial among all others for Dockerfiles. It defines the base image to use to start the build process. It can be any image, including the ones you have created previously. If a FROM image is not found on the host, docker will try to find it (and download it) from the docker image index. It needs to be the first command declared inside a Dockerfile.
● Example:
# Usage: FROM [image name]
FROM ubuntu
Dockerfile Commands (Instructions)
● MAINTAINER
● One of the commands that can be set anywhere in the file - although it would be better if it was declared on top - is MAINTAINER. This non-executing command declares the author, hence setting the author field of the image. It should nonetheless come after FROM.
● Example:
# Usage: MAINTAINER [name]
MAINTAINER authors_name
● RUN
● The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument and runs it to form the image. Unlike CMD, it actually is used to build the image (forming another layer on top of the previous one, which is committed).
● Example:
# Usage: RUN [command]
RUN aptitude install -y riak
Dockerfile Commands (Instructions)
● USER
● The USER directive is used to set the UID (or username) which is to run the container based on the image being built.
● Example:
# Usage: USER [UID]
USER 751
● VOLUME
● The VOLUME command is used to enable access from your container to a directory on the host machine (i.e. mounting it).
● Example:
# Usage: VOLUME ["/dir_1", "/dir_2" ..]
VOLUME ["/my_files"]
● WORKDIR
● The WORKDIR directive is used to set where the command defined with CMD is to be executed.
● Example:
# Usage: WORKDIR /path
WORKDIR ~/
Summary: Builder main commands
Command – Description
FROM image|scratch – base image for the build
MAINTAINER email – name of the maintainer (metadata)
COPY path dst – copy path from the context into the container at location dst
ADD src dst – same as COPY, but untars archives and accepts http urls
RUN args... – run an arbitrary command inside the container
USER name – set the default username
WORKDIR path – set the default working directory
CMD args... – set the default command
ENV name value – set an environment variable
Dockerfile example
● How to use Dockerfiles
● Using Dockerfiles is as simple as having the docker daemon run one. The output after executing the script will be the ID of the new docker image.
● Usage:
# Build an image using the Dockerfile at the current location
# Example: sudo docker build -t [name] .
[root@docker-host ~]# docker build -t nginx_yusuf .
Lab: Dockerfile example
● Build an image from the context located at path and optionally tag it as tag:
############################################################
# Dockerfile to build nginx container images
# Based on debian latest version
############################################################
# base image: last debian release
FROM debian:latest
# name of the maintainer of this image
MAINTAINER yusuf.hadiwinata@gmail.com
# install the latest upgrades
RUN apt-get update && apt-get -y dist-upgrade && echo yusuf-test > /tmp/test
# install nginx
RUN apt-get -y install nginx
# set the default container command
# -> run nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]
# Tell the docker engine that there will be something listening on tcp port 80
EXPOSE 80
Lab: Build & Run Dockerfile example
[root@docker-host nginx_yusuf]# docker build -t nginx_yusuf .
Sending build context to Docker daemon 2.56kB
Step 1/6 : FROM debian:latest
---> a25c1eed1c6f
Step 2/6 : MAINTAINER yusuf.hadiwinata@gmail.com
---> Running in 94409ebe59ac
---> eaefc54975b7
Removing intermediate container 94409ebe59ac
Step 3/6 : RUN apt-get update && apt-get -y dist-upgrade
---> Running in 425285dbf037
Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]
Ign http://deb.debian.org jessie InRelease
Get:2 http://deb.debian.org jessie-updates InRelease [145 kB]
Get:3 http://deb.debian.org jessie Release.gpg [2373 B]
Get:4 http://deb.debian.org jessie-updates/main amd64 Packages [17.6 kB]
Get:5 http://security.debian.org jessie/updates/main amd64 Packages [521 kB]
Get:6 http://deb.debian.org jessie Release [148 kB]
Get:7 http://deb.debian.org jessie/main amd64 Packages [9065 kB]
------------------- OUTPUT TRUNCATED -------------------
Processing triggers for sgml-base (1.26+nmu4) ...
---> 88795938427f
Removing intermediate container 431ae6bc8e0a
Step 5/6 : CMD nginx -g daemon off;
---> Running in 374ff461f187
---> 08e1433ccd68
Removing intermediate container 374ff461f187
Step 6/6 : EXPOSE 80
---> Running in bac435c454a8
---> fa8de9e81136
Removing intermediate container bac435c454a8
Successfully built fa8de9e81136
Successfully tagged nginx_yusuf:latest
Lab: Dockerfile example
● Using the image we have built, we can now proceed to the final step: creating a container running an nginx instance inside, using a name of our choice (if desired, with --name [name]).
● Note: If a name is not set, we will need to deal with complex alphanumeric IDs, which can be obtained by listing all the containers using sudo docker ps -l.
[root@docker-host nginx_yusuf]# docker run --name my_first_nginx_instance -i -t nginx_yusuf bash
root@4b90e5d6dda8:/# cat /tmp/test
yusuf-test
Docker Compose
Manage a collection of containers
Dockerfile vs Docker Compose: which is better?
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features. Compose is great for development, testing, and staging environments, as well as CI workflows. You can learn more about each case in Common Use Cases.
● Using Compose is basically a three-step process:
● Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
● Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
● Lastly, run docker-compose up and Compose will start and run your entire app.
Lab: Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
This lab will set up a PHP application using Docker and docker-compose. It creates a development environment with PHP7-fpm, MariaDB and Nginx.
● Clone or download the sample project from Github:
[root@docker-host ~]# git clone https://github.com/isnuryusuf/docker-php7.git
Cloning into 'docker-php7'...
remote: Counting objects: 62, done.
remote: Total 62 (delta 0), reused 0 (delta 0), pack-reused 62
Unpacking objects: 100% (62/62), done.
[root@docker-host ~]# cd docker-php7
[root@docker-host ~]# yum -y install epel-release
[root@docker-host ~]# yum install -y python-pip
[root@docker-host ~]# pip install docker-compose
[root@docker-host docker-php7]# docker-compose up
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
● Inside docker-php7 we have a directory structure like this:
├── app
│   └── public
│       └── index.php
├── database
├── docker-compose.yml
├── fpm
│   ├── Dockerfile
│   └── supervisord.conf
├── nginx
│   ├── Dockerfile
│   └── default.conf
● app - our application is kept in this directory.
● database - where MariaDB stores all the database files.
● fpm - contains the Dockerfile for the php7-fpm container and the Supervisord config.
● nginx - contains the Dockerfile for nginx and the default nginx config which will be copied into the container.
● docker-compose.yml - our docker-compose configuration. In this file, we define the containers and services that we want to start, along with associated volumes, ports, etc. When we run docker-compose up, it reads this file and builds the images.
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
● docker-compose.yml
version: "2"
services:
  nginx:
    build:
      context: ./nginx
    ports:
      - "8080:80"
    volumes:
      - ./app:/var/app
  fpm:
    build:
      context: ./fpm
    volumes:
      - ./app:/var/app
    expose:
      - "9000"
    environment:
      - "DB_HOST=db"
      - "DB_DATABASE=laravel"
  db:
    image: docker-registry:5000/mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel
    volumes:
      - ./database:/var/lib/mysql
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
● docker-compose up output:
[root@docker-host docker-php7]# docker-compose up
Creating network "dockerphp7_default" with the default driver
Building fpm
Step 1 : FROM ubuntu:latest
Trying to pull repository docker.io/library/ubuntu ...
sha256:ea1d854d38be82f54d39efe2c67000bed1b03348bcc2f3dc094f260855dff368: Pulling from docker.io/library/ubuntu
bd97b43c27e3: Pull complete
6960dc1aba18: Pull complete
2b61829b0db5: Pull complete
1f88dc826b14: Pull complete
73b3859b1e43: Pull complete
Digest: sha256:ea1d854d38be82f54d39efe2c67000bed1b03348bcc2f3dc094f260855dff368
Status: Downloaded newer image for docker.io/ubuntu:latest
---> 7b9b13f7b9c0
Step 2 : RUN apt-get update && apt-get install -y software-properties-common language-pack-en-base && LC_ALL=en_US.UTF-8 add-apt-repository -y ppa:ondrej/php && apt-get update && apt-get install -y php7.0 php7.0-fpm php7.0-mysql mcrypt php7.0-gd curl php7.0-curl php-redis php7.0-mbstring sendmail supervisor && mkdir /run/php && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
---> Running in 812423cbbeac
Docker Compose Ubuntu, php7-fpm, Nginx and MariaDB Example
[Screenshot of the running application]
Introduction to Kompose
Why do developers like Docker Compose?
● Simple (to learn and adopt)
● Very easy to run containerised applications
● One-line command
● Local development
● Declarative
● Great UX
● Developer friendly
Introduction to Kompose
Devs say this about Kubernetes/OpenShift:
● Many new concepts to learn
● Pods, Deployments, RCs, RSs, Jobs, DaemonSets, Routes/Ingress, Volumes ... phew!!!
● Complex / complicated
● Difficult
● Difficult/complicated UX (especially when getting started)
● Setting up local development requires work
● Not developer friendly
Introduction to Kompose
How do we bridge this gap?
● We saw an opportunity here!
● Can we reduce the learning curve?
● Can we make adopting Kubernetes/OpenShift simpler?
● What can we do to make it simple?
● Which bits need to be made simple?
● How can we make it simple?
Introduction to Kompose
Enter Kompose! (Kompose with a “K”)
Docker Compose to OpenShift in *one* command
More info: http://kompose.io/
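A sketch of that one command (assuming kompose is installed and a docker-compose.yml sits in the current directory):
[root@docker-host ~]# kompose convert -f docker-compose.yml    # emits Kubernetes manifests for each Compose service
[root@docker-host ~]# kompose convert --provider openshift     # emits OpenShift artifacts (DeploymentConfigs, Routes, ...)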
The Easiest Way to Manage Docker
Portainer is an open source, lightweight management UI which allows you to easily manage your Docker host or Swarm cluster.
Available on Linux, Windows & OSX
Installing Portainer.io
● Portainer runs as a lightweight Docker container (the Docker image weighs less than 4MB) on a Docker engine or Swarm cluster. Therefore, you are one command away from running Portainer on any machine using Docker.
● Use the following Docker commands to run Portainer:
[root@docker-host ~]# docker volume create portainer_data
[root@docker-host ~]# docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data docker-registry:5000/portainer/portainer:latest
● You can now access Portainer by pointing your web browser at http://DOCKER_HOST:9000. Ensure you replace DOCKER_HOST with the address of the Docker host where Portainer is running.
Manage a new endpoint
● After your first authentication, Portainer will ask you for information about the Docker endpoint you want to manage. You’ll have the following choices:
● Manage the local engine where Portainer is running (you’ll need to bind mount the Docker socket via -v /var/run/docker.sock:/var/run/docker.sock on the Docker CLI when running Portainer) - not available for Windows Containers (Windows Server 2016)
● Manage a remote Docker engine: you’ll just have to specify the URL to your Docker endpoint, give it a name, and provide TLS info if needed
Declare initial endpoint via CLI
● You can specify the initial endpoint you want Portainer to manage via the CLI; use the -H flag and the tcp:// protocol to connect to a remote Docker endpoint:
[root@docker-host ~]# docker run -d -p 9000:9000 portainer/portainer -H tcp://<REMOTE_HOST>:<REMOTE_PORT>
● Ensure you replace REMOTE_HOST and REMOTE_PORT with the address/port of the Docker engine you want to manage.
● You can also bind mount the Docker socket to manage a local Docker engine (not available for Windows Containers (Windows Server 2016)):
[root@docker-host ~]# docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
● Note: If your host is using SELinux, you’ll need to pass the --privileged flag to the docker run command.
Portainer Web Console - Initialization
Portainer.io Manage Locally
● Ensure that you started the Portainer container with the following Docker flag: -v "/var/run/docker.sock:/var/run/docker.sock"
[root@docker-host ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
afd62e28aee5 portainer/portainer "/portainer" 5 minutes ago Up 5 minutes 0.0.0.0:9000->9000/tcp adoring_brown
60fec4b9a9bf centos "bash" 21 minutes ago Up 21 minutes high_kare
[root@docker-host ~]# docker stop afd62e28aee5
afd62e28aee5
[root@docker-host ~]# docker run -v "/var/run/docker.sock:/var/run/docker.sock" -d -p 9000:9000 portainer/portainer
db232db974fa6c5a232f0c2ddfc0404dfac6bd34c087934c4c51b0208ececf0f
Portainer Documentation URL
https://portainer.readthedocs.io/
Docker Orchestration
● Docker Machine
● Docker Swarm
● Docker Compose
● Kubernetes & Openshift
Docker / Container Problems
We need more than just packing and isolation:
• Scheduling: Where should my containers run?
• Lifecycle and health: Keep my containers running despite failures
• Discovery: Where are my containers now?
• Monitoring: What’s happening with my containers?
• Auth{n,z}: Control who can do things to my containers
• Aggregates: Compose sets of containers into jobs
• Scaling: Making jobs bigger or smaller
Docker Machine
Abstraction for provisioning and using docker hosts
Docker Swarm
Manage a cluster of docker hosts
Swarm mode overview
● Feature highlights
● Cluster management integrated with Docker Engine: Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services. You don’t need additional orchestration software to create or manage a swarm.
● Decentralized design: Instead of handling differentiation between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine. This means you can build an entire swarm from a single disk image.
● Declarative service model: Docker Engine uses a declarative approach to let you define the desired state of the various services in your application stack. For example, you might describe an application comprised of a web front end service with message queueing services and a database backend.
Swarm mode overview
● Feature highlights
● Scaling: For each service, you can declare the number of tasks you want to run. When you scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
● Desired state reconciliation: The swarm manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state. For example, if you set up a service to run 10 replicas of a container, and a worker machine hosting two of those replicas crashes, the manager will create two new replicas to replace the replicas that crashed. The swarm manager assigns the new replicas to workers that are running and available.
● Multi-host networking: You can specify an overlay network for your services. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application.
Swarm mode overview
● Feature highlights
● Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balance running containers. You can query every container running in the swarm through a DNS server embedded in the swarm.
● Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
● Secure by default: Each node in the swarm enforces TLS mutual authentication and encryption to secure communications between itself and all other nodes. You have the option to use self-signed root certificates or certificates from a custom root CA.
● Rolling updates: At rollout time you can apply service updates to nodes incrementally. The swarm manager lets you control the delay between service deployments to different sets of nodes. If anything goes wrong, you can roll back a task to a previous version of the service. (A sketch follows below.)
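As a sketch of a rolling update (the flags are standard swarm-mode options; the service web is created later in this lab, and the image tag is an arbitrary example):
[root@docker-host ~]# docker service update --update-delay 10s --image nginx:1.13 web
[root@docker-host ~]# docker service update --rollback web   # return to the previous service spec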
Docker Swarm Lab - Init your swarm
● Create a Docker Swarm first:
[root@docker-host]# curl -fsSL https://get.docker.com/ | sh
[root@docker-host]# systemctl start docker
[root@docker-host]# systemctl status docker
[root@docker-host]# systemctl enable docker
[root@docker-host]# docker swarm init
Swarm initialized: current node (73yn8s77g2xz3277f137hye41) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0xg56f2v9tvy0lg9d4j7xbf7cf1mg8ylm7d19f39gqvc41d1yk-0trhxa6skixvif1o6pultvcp3 10.7.60.26:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
● Show the members of the swarm. From the first terminal, check the number of nodes in the swarm (running this command from the second, worker terminal will fail, as swarm-related commands need to be issued against a swarm manager):
[root@docker-host ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
73yn8s77g2xz3277f137hye41 * docker-host Ready Active Leader
Docker Swarm - Clone the voting-app
● Let’s retrieve the voting app code from Github and go into the application folder.
● Ensure you are in the first terminal and do the below:
[root@docker-host ~]# git clone https://github.com/docker/example-voting-app
Cloning into 'example-voting-app'...
remote: Counting objects: 374, done.
remote: Total 374 (delta 0), reused 0 (delta 0), pack-reused 374
Receiving objects: 100% (374/374), 204.32 KiB | 156.00 KiB/s, done.
Resolving deltas: 100% (131/131), done.
[root@docker-host ~]# cd example-voting-app
Docker Swarm - Deploy a stack
● A stack is a group of services that are deployed together. The docker-stack.yml in the current folder will be used to deploy the voting app as a stack.
● Ensure you are in the first terminal and do the below:
[root@docker-host]# docker stack deploy --compose-file=docker-stack.yml voting_stack
● Check the stack deployed from the first terminal:
[root@docker-host ~]# docker stack ls
NAME SERVICES
voting_stack 6
Docker Swarm - Deploy a stack
● Check the services within the stack:
[root@docker-host ~]# docker stack services voting_stack
ID NAME MODE REPLICAS IMAGE
10rt1wczotze voting_stack_visualizer replicated 1/1 dockersamples/visualizer:stable
8lqj31k3q5ek voting_stack_redis replicated 2/2 redis:alpine
nhb4igkkyg4y voting_stack_result replicated 2/2 dockersamples/examplevotingapp_result:before
nv8d2z2qhlx4 voting_stack_db replicated 1/1 postgres:9.4
ou47zdyf6cd0 voting_stack_vote replicated 2/2 dockersamples/examplevotingapp_vote:before
rpnxwmoipagq voting_stack_worker replicated 1/1 dockersamples/examplevotingapp_worker:latest
● List the tasks of the vote service:
[root@docker-host ~]# docker service ps voting_stack_vote
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
my7jqgze7pgg voting_stack_vote.1 dockersamples/examplevotingapp_vote:before node1 Running Running 56 seconds ago
3jzgk39dyr6d voting_stack_vote.2 dockersamples/examplevotingapp_vote:before node2 Running Running 58 seconds ago
Docker Swarm - Creating services
● The next step is to create a service and list the services. This creates a single service called web that runs the latest nginx. Type the below commands in the first terminal:
[root@docker-host ~]# docker service create -p 80:80 --name web nginx:latest
[root@docker-host example-voting-app]# docker service ls | grep nginx
24jakxhfl06l web replicated 1/1 nginx:latest *:80->80/tcp
● Scaling up the application:
[root@docker-host ~]# docker service inspect web
[root@docker-host ~]# docker service scale web=15
web scaled to 15
[root@docker-host ~]# docker service ls | grep nginx
24jakxhfl06l web replicated 15/15 nginx:latest *:80->80/tcp
Docker Swarm - Creating services
● Scaling down the application:
[root@docker-host ~]# docker service scale web=10
web scaled to 10
[root@docker-host ~]# docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
jrgkmkvm4idf web.2 nginx:latest docker-host Running Running about a minute ago
dmreadcm745k web.4 nginx:latest docker-host Running Running about a minute ago
5iik87rbsfc2 web.6 nginx:latest docker-host Running Running about a minute ago
7cuzpz79q2hp web.7 nginx:latest docker-host Running Running about a minute ago
ql7g7k3dlbqw web.8 nginx:latest docker-host Running Running about a minute ago
k0bzk7m51cln web.9 nginx:latest docker-host Running Running about a minute ago
0teod07eihns web.10 nginx:latest docker-host Running Running about a minute ago
sqxfaqlnkpab web.11 nginx:latest docker-host Running Running about a minute ago
mkrsmwgti606 web.12 nginx:latest docker-host Running Running about a minute ago
ucomtg454jlk web.15 nginx:latest docker-host Running Running about a minute ago
Kubernetes is a Solution
Kubernetes – Container Orchestration at Scale
Greek for “helmsman”; also the root of the words “governor” and “cybernetic”
• Container cluster manager
- Inspired by the technology that runs Google
• Runs anywhere
- Public cloud
- Private cloud
- Bare metal
• Strong ecosystem
- Partners: Red Hat, VMware, CoreOS...
- Community: clients, integrations
Kubernetes Resource Types
● Pods
● Represent a collection of containers that share resources, such as IP addresses and persistent storage volumes. The pod is the basic unit of work for Kubernetes.
● Services
● Define a single IP/port combination that provides access to a pool of pods. By default, services connect clients to pods in a round-robin fashion.
Kubernetes Resource Types
● Replication Controllers
● A framework for defining pods that are meant to be horizontally scaled. A replication controller includes a pod definition that is to be replicated, and the pods created from it can be scheduled to different nodes.
● Persistent Volumes (PV)
● Provision persistent networked storage to pods that can be mounted inside a container to store data.
● Persistent Volume Claims (PVC)
● Represent a request for storage by a pod to Kubernetes. (A minimal PVC sketch follows below.)
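As an illustration of the claim model, a minimal PVC sketch (the name and size are arbitrary examples):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# kubectl create -f mysql-data-pvc.yaml
Kubernetes binds the claim to a matching PV, and a pod then mounts the claim by name.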
OpenShift Resource Types
● Deployment Configurations (dc)
● Represent a set of pods created from the same container image, managing workflows such as rolling updates. A dc also provides a basic but extensible Continuous Delivery workflow.
● Build Configurations (bc)
● Used by the OpenShift Source-to-Image (S2I) feature to build a container image from application source code stored in a Git server. A bc works together with a dc to provide a basic but extensible Continuous Integration/Continuous Delivery workflow.
● Routes
● Represent a DNS host name recognized by the OpenShift router as an ingress point for applications and microservices. (See the oc sketch below.)
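A sketch of how these resources are typically created with the oc CLI (oc new-app and oc expose are standard commands; the Git URL and app name are placeholders):
# S2I: builds an image from source, creating a bc, dc, and service
[root@centos-16gb-sgp1-01 ~]# oc new-app https://github.com/example/my-app.git --name=my-app
# create a route so the OpenShift router can reach the service
[root@centos-16gb-sgp1-01 ~]# oc expose service my-app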
Kubernetes Solution Detail
Core Concepts: Pod • Labels & Selectors • ReplicationController • Service • Persistent Volumes
[Diagram: a Kubernetes cluster with a Master (etcd, SkyDNS, Replication Controller, API) and Nodes running pods with volumes; a Registry for images, a Router directing visitor traffic to services, plus policies and ELK-based logging. Dev/Ops interact through the API.]
Kubernetes: The Pods
POD Definition:
• Group of containers
• Related to each other
• Same namespace
• Ephemeral
Examples:
• WordPress
• MySQL
• WordPress + MySQL
• ELK
• Nginx + Logstash
• Auth-Proxy + PHP
• App + data-load
Kubernetes: Building a Pod
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "hello-openshift" },
  "spec": {
    "containers": [
      {
        "name": "hello-openshift",
        "image": "openshift/hello-openshift",
        "ports": [ { "containerPort": 8080 } ]
      }
    ]
  }
}
# kubectl create -f hello-openshift.yaml
# oc create -f hello-openshift.yaml
● OpenShift/Kubernetes runs containers inside Kubernetes pods, and to create a pod from a container image, Kubernetes needs a pod resource definition. This can be provided either as a JSON or YAML text file, or can be generated from defaults by oc new-app or the web console.
● This JSON object is a pod resource definition because its "kind" attribute has the value "Pod". It contains a single container named "hello-openshift" that references the image "openshift/hello-openshift". The container also declares a single port, listening on TCP port 8080.
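For reference, the same pod can be expressed in YAML, the form most of the later examples use; this is a direct translation of the JSON above, with nothing added:
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift
spec:
  containers:
  - name: hello-openshift
    image: openshift/hello-openshift
    ports:
    - containerPort: 8080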
Kubernetes: Listing Pods
[root@centos-16gb-sgp1-01 ~]# oc get pod
NAME READY STATUS RESTARTS AGE
bgdemo-1-build 0/1 Completed 0 16d
bgdemo-1-x0wlq 1/1 Running 0 16d
dc-gitlab-runner-service-3-wgn8q 1/1 Running 0 8d
dc-minio-service-1-n0614 1/1 Running 5 23d
frontend-1-build 0/1 Completed 0 24d
frontend-prod-1-gmcrw 1/1 Running 2 23d
gitlab-ce-7-kq0jp 1/1 Running 2 24d
hello-openshift 1/1 Running 2 24d
jenkins-3-8grrq 1/1 Running 12 21d
os-example-aspnet-2-build 0/1 Completed 0 22d
os-example-aspnet-3-6qncw 1/1 Running 0 21d
os-sample-java-web-1-build 0/1 Completed 0 22d
os-sample-java-web-2-build 0/1 Completed 0 22d
os-sample-java-web-3-build 0/1 Completed 0 22d
os-sample-java-web-3-sqf41 1/1 Running 0 22d
os-sample-python-1-build 0/1 Completed 0 22d
os-sample-python-1-p5b73 1/1 Running 0 22d
Kubernetes: Replication Controller
"nginx" RC Object:
• Pod scaling
• Pod monitoring
• Rolling updates
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:v2.2
        ports:
        - containerPort: 80
# kubectl create -f nginx-rc.yaml
(Diagram: the master's replication controller, etcd, and API keep two nginx pods scheduled across the cluster nodes.)
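The replica count can be changed without editing the definition file; a short sketch, assuming the nginx RC above has already been created:
# kubectl scale rc nginx --replicas=5   # the RC notices 2 < 5 and starts 3 more pods
# kubectl get pods -l app=nginx         # the RC finds its pods through the app=nginx label selector
# kubectl scale rc nginx --replicas=2   # scaling back down terminates the surplus pods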
Kubernetes: Service
Definition:
• Load-balanced virtual IP (layer 4)
• Abstraction layer for your app
• Enables service discovery
• DNS
• ENV
Examples:
• frontend
• database
• api
(Diagram: a PHP pod reaches two MySQL pods, 10.1.0.1:3306 and 10.2.0.1:3306, through the MySQL service at 172.16.0.1:3386, resolvable as db.project.cluster.local.)
<?php
mysql_connect(getenv("db_host"))
mysql_connect("db:3306")
?>
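A service like the one in the diagram can be generated from the CLI instead of written by hand; a minimal sketch, assuming a replication controller named mysql whose pod template exposes port 3306 (the names are illustrative):
# kubectl expose rc mysql --port=3306 --name=db   # creates a ClusterIP service in front of the RC's pods
# kubectl get svc db                              # shows the stable virtual IP that clients such as the PHP pod should use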
Kubernetes: Service, Continued
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: MySQL
      role: BE
      phase: DEV
    name: MySQL
  spec:
    ports:
    - name: mysql-data
      port: 3386
      protocol: TCP
      targetPort: 3306
    selector:
      app: MySQL
      role: BE
    sessionAffinity: None
    type: ClusterIP
(Diagram flow: 1. Dev/Ops create the service object and the MySQL pods register with the API; 2. SkyDNS and the kube-proxy on each node watch for changes; 3. the service is registered in DNS and kube-proxy updates iptables rules, so traffic sent by the PHP pod to the service IP is redirected to the MySQL pods at 10.1.0.1:3306 and 10.2.0.1:3306.)
Kubernetes: Labels & Selectors
Think SQL: 'select ... where ...' — the service selects pods whose labels match (e.g. role: BE, phase: DEV), ignoring pods labeled role: FE or phase: TST.
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: MyApp
      role: BE
      phase: DEV
    name: MyApp
  spec:
    ports:
    - name: 80-tcp
      port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      app: MyApp
      role: BE
    sessionAffinity: None
    type: ClusterIP
- apiVersion: v1
  kind: Pod
  metadata:
    labels:
      app: MyApp
      role: BE
      phase: DEV
    name: MyApp
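Selectors can also be exercised directly from the CLI; a sketch using the labels from the example above (mypod is a hypothetical pod name):
# kubectl get pods -l app=MyApp,role=BE          # AND of two equality selectors
# kubectl get pods -l 'phase in (DEV,TST)'       # set-based selector
# kubectl label pod mypod phase=TST --overwrite  # relabeling moves a pod in or out of a matching service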
Kubernetes: Ingress / Router
Router Definition:
• Layer 7 load balancer / reverse proxy
• SSL/TLS termination
• Name-based virtual hosting
• Context-path-based routing
• Customizable (image)
• HAProxy
• F5 BIG-IP
Examples:
• https://www.i-3.co.id/myapp1/
• http://www.i-3.co.id/myapp2/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite
spec:
  rules:
  - host: www.i-3.co.id
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
Kubernetes: Router Detail
● Check the router environment variables to find connection parameters for the HAProxy process running inside the pod:
[root@centos-16gb-sgp1-01 ~]# oc env pod router-1-b97bv --list
# pods router-1-b97bv, container router
DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
ROUTER_EXTERNAL_HOST_HOSTNAME=
ROUTER_EXTERNAL_HOST_HTTPS_VSERVER=
ROUTER_EXTERNAL_HOST_HTTP_VSERVER=
ROUTER_EXTERNAL_HOST_INSECURE=false
ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS=
ROUTER_EXTERNAL_HOST_PARTITION_PATH=
ROUTER_EXTERNAL_HOST_PASSWORD=
ROUTER_EXTERNAL_HOST_PRIVKEY=/etc/secret-volume/router.pem
ROUTER_EXTERNAL_HOST_USERNAME=
ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR=
ROUTER_SERVICE_HTTPS_PORT=443
ROUTER_SERVICE_HTTP_PORT=80
ROUTER_SERVICE_NAME=router
ROUTER_SERVICE_NAMESPACE=default
ROUTER_SUBDOMAIN=
STATS_PASSWORD=XXXXXX
STATS_PORT=1936
STATS_USERNAME=admin
Kubernetes: Persistent Storage
For Ops:
• Google
• AWS EBS
• OpenStack's Cinder
• Ceph
• GlusterFS
• NFS
• iSCSI
• FibreChannel
• EmptyDir
For Dev:
• "Claim"
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /tmp
    server: 172.17.0.2
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
Kubernetes: Persistent Storage
● Kubernetes provides a framework for managing external persistent storage for containers. Kubernetes recognizes a PersistentVolume resource, which defines local or network storage. A pod resource can reference a PersistentVolumeClaim resource in order to request a certain storage size from a PersistentVolume.
● Kubernetes also specifies whether a PersistentVolume resource can be shared between pods, or whether each pod needs its own PersistentVolume with exclusive access. When a pod moves to another node, it stays connected to the same PersistentVolumeClaim and PersistentVolume instances. So a pod's persistent storage data follows it, regardless of the node where it is scheduled to run.
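A pod consumes a claim by mounting it as a volume; a minimal sketch wiring the myclaim PVC from the previous slide into a container (the pod name and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # where the claimed storage appears inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim                 # binds to the 8Gi claim defined earlier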
Kubernetes: Persistent Volume Claim
(Diagram: Ops provision a farm of persistent volumes from one or more storage providers; Dev projects claim and mount them — e.g. project ABC mounts a 10G SSD and a 40G volume into its pods, project XYZ a 5G SSD and a 10G volume.)
Kubernetes: Networking
• Each host = 256 IPs
• Each pod = 1 IP
Programmable infra:
• GCE / GKE
• AWS
• OpenStack
• Nuage
Overlay networks:
• Flannel
• Weave
• OpenShift-SDN
• Open vSwitch
Kubernetes: Networking
● Docker networking is very simple. Docker creates a virtual kernel bridge and connects each container network interface to it. Docker itself does not provide a way for a pod on one host to connect to a pod on another host. Docker also does not provide a way to assign a public fixed IP address to an application so external users can access it.
● Kubernetes provides service and route resources to manage network visibility between pods and from the external world to them. A service load-balances received network requests among its pods, while providing a single internal IP address for all clients of the service (which usually are other pods). Containers and pods do not need to know where other pods are; they just connect to the service. A route provides an external IP to a service, making it externally visible.
Kubernetes: Hosting Platform
• Scheduling
• Lifecycle and health
• Discovery
• Monitoring
• Auth{n,z}
• Scaling
(Diagram: the full cluster — master with etcd, SkyDNS, replication controller, and API for Dev/Ops; nodes running service-backed pods with volumes and storage; registry, images, router, policies, and ELK logging for visitors.)
Kubernetes: High Availability
● High Availability (HA) on a Kubernetes/OpenShift Container Platform cluster has two distinct aspects: HA for the OCP infrastructure itself, that is, the masters, and HA for the applications running inside the OCP cluster.
● For applications, or "pods", OCP handles this by default. If a pod is lost, for any reason, Kubernetes schedules another copy and connects it to the service layer and to the persistent storage. If an entire node is lost, Kubernetes schedules replacements for all its pods, and eventually all applications will be available again. The applications inside the pods are responsible for their own state, so if they are stateful they need to be HA by themselves, employing proven techniques such as HTTP session replication or database replication.
Authentication Methods
● Authentication is based on OAuth, which provides a standard HTTP-based API for authenticating both interactive and non-interactive clients.
– HTTP Basic, to delegate to external Single Sign-On (SSO) systems
– GitHub and GitLab, to use GitHub and GitLab accounts
– OpenID Connect, to use OpenID-compatible SSO and Google Accounts
– OpenStack Keystone v3 server
– LDAP v3 server (see the sketch below)
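As one concrete example, an LDAP v3 server is wired in through the OpenShift master configuration; a hedged sketch of the identityProviders stanza for OpenShift 3.x (the provider name, host, and base DN are illustrative):
identityProviders:
- name: my_ldap_provider
  challenge: true
  login: true
  mappingMethod: claim
  provider:
    apiVersion: v1
    kind: LDAPPasswordIdentityProvider
    attributes:
      id: ["dn"]
      preferredUsername: ["uid"]
    insecure: false
    url: "ldap://ldap.example.com/ou=users,dc=example,dc=com?uid"   # base DN and the attribute to match the login against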
Kubernetes: Authorization Policies
● There are two levels of authorization policies:
– Cluster policy: Controls who has various access levels to Kubernetes / OpenShift Container Platform and all projects. Roles that exist in the cluster policy are considered cluster roles.
– Local policy: Controls which users have access to their projects. Roles that exist in a local policy are considered local roles.
● Authorization is managed using the following (see the CLI sketch after this list):
– Rules: Sets of permitted verbs on a set of resources; for example, whether someone can delete projects.
– Roles: Collections of rules. Users and groups can be bound to multiple roles at the same time.
– Bindings: Associations between users and/or groups and a role.
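Both policy levels are driven by the same CLI verbs; a short sketch (the user and project names are hypothetical):
# oc adm policy add-cluster-role-to-user cluster-admin alice   # cluster policy: alice administers the whole platform
# oc policy add-role-to-user edit bob -n test-project          # local policy: bob may edit objects in one project only
# oc get rolebindings -n test-project                          # inspect the resulting bindings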
OpenShift as a Development Platform
Project spaces
Build tools
Integration with your IDE
We Need More than Just Orchestration
Self Service - Templates - Web Console | Multi-Language | Automation - Deploy - Build | DevOps Collaboration | Secure - Namespaced - RBAC | Scalable - Integrated LB | Open Source | Enterprise - Authentication - Web Console - Central Logging
This past week at KubeCon 2016, Red Hat CTO Chris Wright (@kernelcdub) gave a keynote entitled OpenShift is Enterprise-Ready Kubernetes. There it was for the 1200 people in attendance: OpenShift is 100% Kubernetes, plus all the things that you'll need to run it in production environments. - https://blog.openshift.com/enterprise-ready-kubernetes/
OpenShift is the Red Hat Container Application Platform (PaaS)
Self Service - Templates - Web Console | Multi-Language | Automation - Deploy - Build | DevOps Collaboration | Secure - Namespaced - RBAC | Scalable - Integrated LB | Open Source | Enterprise - Authentication - Web Console - Central Logging
https://blog.openshift.com/red-hat-chose-kubernetes-openshift/
https://blog.openshift.com/chose-not-join-cloud-foundry-foundation-recommendations-2015/
OpenShift Technology
Basic container infrastructure, integrated and enhanced by Red Hat:
– The base OS is RHEL/CentOS/Fedora.
– Docker provides the basic container management API and the container image file format.
– Kubernetes is an open source project aimed at managing a cluster of hosts (physical or virtual) running containers. It works with templates that describe multicontainer applications composed of multiple resources, and how they interconnect. If Docker is the "core" of OCP, Kubernetes is the "heart" that keeps it moving.
– Etcd is a distributed key-value store, used by Kubernetes to store configuration and state information about the containers and other resources inside the OCP cluster.
Kubernetes Embedded
https://master:8443/api = Kubernetes API
/oapi = OpenShift API
/console = OpenShift Web Console
OpenShift:
• 1 binary for master
• 1 binary for node
• 1 binary for client
• Docker image
• Vagrant image
Kubernetes:
• ApiServer, Controller, Scheduler, Etcd
• KubeProxy, Kubelet
• Kubectl
Project Namespace
Project:
• Sandboxed environment
• Network VXLAN
• Authorization policies
• Resource quotas (see the sketch below)
• Ops in control, Dev freedom
oc new-project project-dev
oc policy add-role-to-user admin scientist1
oc new-app --source=https://gitlab/MyJavaApp --docker-image=jboss-eap
App:
• Images run in containers
• Grouped together as a service
• Defined as a template
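Ops stay in control of such a project through quotas; a minimal ResourceQuota sketch (the limits are arbitrary examples, not values from the original deck):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
spec:
  hard:
    pods: "10"            # at most 10 pods in the project
    requests.cpu: "2"     # total CPU requested across all pods
    requests.memory: 4Gi  # total memory requested across all pods
# oc create -f dev-quota.yaml -n project-dev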
CI/CD Flow
(Diagram: a developer commits to the SCM, which triggers Jenkins to pull the artifact from the artifact repository, build the image, and push it to the image registry; the app is deployed into the DEV project, then promoted through UAT to PROD, with QA, the release manager, and Ops pulling the same image from the registry at each stage.)
OpenShift Build & Deploy Architecture
kind: "BuildConfig"
metadata:
  name: "myApp-build"
spec:
  source:
    type: "Git"
    git:
      uri: "git://gitlab/project/hello.git"
    dockerfile: "jboss-eap-6"
  strategy:
    type: "Source"
    sourceStrategy:
      from:
        kind: "Image"
        name: "jboss-eap-6:latest"
  output:
    to:
      kind: "Image"
      name: "myApp:latest"
  triggers:
  - type: "GitHub"
    github:
      secret: "secret101"
  - type: "ImageChange"
# oc start-build myApp-build
(Diagram: the build and deploy subsystems sit next to the master's etcd, SkyDNS, replication controller, API, and policies, feeding images into the registry for the nodes to run, with the router and EFK logging in front for visitors.)
Building Images
● OpenShift/Kubernetes can build a pod from three different sources (see the command sketch after this list):
– A container image: The first source leverages the Docker container ecosystem. Many vendors package their applications as container images, and a pod can be created to run those application images inside OpenShift.
– A Dockerfile: The second source also leverages the Docker container ecosystem. A Dockerfile is the Docker community's standard way of specifying a script to build a container image from Linux OS distribution tools.
– Application source code (Source-to-Image or S2I): The third source, S2I, empowers a developer to build container images for an application without dealing with or knowing about Docker internals, image registries, and Dockerfiles.
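Each of the three sources maps to a different oc new-app invocation; a hedged sketch (the Git repository for the Dockerfile case is hypothetical, the other two references appear elsewhere in this deck):
# oc new-app openshift/hello-openshift                              # 1. from an existing container image
# oc new-app https://github.com/user/app.git --strategy=docker      # 2. from a Git repo containing a Dockerfile
# oc new-app php~https://github.com/OpenShiftDemos/os-sample-php    # 3. S2I: builder image + application source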
Build & Deploy an Image
Code → Build → Deploy
Can configure different deployment strategies like A/B, rolling upgrade, automated base updates, and more. Can configure triggers for automated deployments, builds, and more.
(Diagram: a developer pushes code to the SCM; the Source-to-Image builder combines it with a builder image to produce the container image.)
Builder images:
• JBoss EAP
• PHP
• Python
• Ruby
• Jenkins
• Custom
• C++ / Go
• S2I (bash) scripts
Triggers:
• Image change (tagging)
• Code change (webhook)
• Config change
OpenShift Build & Deploy Architecture
kind: "DeploymentConfig"
metadata:
  name: "myApp"
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
  triggers:
  - type: "ImageChange"
    from:
      kind: "Image"
      name: "nginx:latest"
# oc deploy myApp --latest
Pop Quiz
● What is a valid source for building a pod in OpenShift or Kubernetes (choose three)?
A) Java, Node.js, PHP, and Ruby source code
B) RPM packages
C) Container images in Docker format
D) XML files describing the pod metadata
E) Makefiles describing how to build an application
F) Dockerfiles
Answer the question and win merchandise!
Continuous Integration Pipeline Example
(Pipeline diagram: a commit webhook triggers Source → Build → Deploy :test; the :test image is tested and deployed to :test-fw; on success it is tagged :uat and an ImageChange trigger deploys it from the registry; after approval (ITIL) the container is tagged :prod and another ImageChange trigger deploys it to production.)
Monitoring & Inventory: CloudForms
OpenShift as a Tool for Developers
● Facilitates deployment and operation of web applications:
● Getting started with a web application/prototype
● Automate application deployment, roll back changes
● No need to maintain a VM and its OS
● Switch hosting platform (container portability)
● Good integration with code hosting (GitLab)
● CI/CD pipelines (GitLab/Jenkins)
● GitLab Review Apps
OpenShift: Jenkins CI example
Q & A
Any questions?
Let's continue to the OpenShift lab...
Installing OpenShift Origin
Preparing the OS
All-in-One OpenShift
Post-Installation
Installing OpenShift Origin
OpenShift: Make sure Docker is installed and configured. Skip this step if all requirements are already met.
● Commands to install Docker from the CentOS distribution (not the latest version):
[root@docker-host ~]# yum install docker
[root@docker-host ~]# sed -i '/OPTIONS=.*/cOPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' /etc/sysconfig/docker
[root@docker-host ~]# systemctl is-active docker
[root@docker-host ~]# systemctl enable docker
[root@docker-host ~]# systemctl start docker
● If you use the latest Docker version, configure it like this instead:
[root@docker-host ~]# vim /usr/lib/systemd/system/docker.service
Edit ExecStart=/usr/bin/dockerd to ExecStart=/usr/bin/dockerd --insecure-registry 172.30.0.0/16 --insecure-registry 192.168.1.0/24
[root@docker-host ~]# systemctl daemon-reload ; systemctl restart docker
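A quick way to confirm the insecure-registry option took effect after the restart; a sketch, noting that the exact output format varies by Docker version:
[root@docker-host ~]# docker info 2>/dev/null | grep -A1 -i 'insecure'
Insecure Registries:
 172.30.0.0/16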
Installing OpenShift Origin
● Set the hostname in the /etc/hosts file, for example: ip-address domain-name.tld
[root@docker-host ~]# cat /etc/hosts | grep docker
10.7.60.26 docker-host
● Enable the CentOS OpenShift Origin repo:
[root@docker-host ~]# yum install centos-release-openshift-origin
● Install OpenShift Origin and the Origin client:
[root@docker-host ~]# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion origin-clients origin
Skip all the steps above if all requirements are already met.
Installing OpenShift Origin
OpenShift: Setting Up
● Pick one, don't do all four:
– oc cluster up
– Running in a Docker container
– Running from an RPM
– Installer
Installation steps:
● Refer to github.com/isnuryusuf/openshift-install/
– File: openshift-origin-quickstart.md
Installing OpenShift – oc cluster up
[root@docker-host ~]# oc cluster up --public-hostname=192.168.1.178
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ...
Pulling image openshift/origin:v1.5.1
Pulled 0/3 layers, 3% complete
Pulled 0/3 layers, 19% complete
Pulled 0/3 layers, 35% complete
Pulled 0/3 layers, 52% complete
Pulled 1/3 layers, 60% complete
Pulled 1/3 layers, 75% complete
Pulled 1/3 layers, 90% complete
Pulled 2/3 layers, 97% complete
Pulled 3/3 layers, 100% complete
Extracting
Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... Using 10.7.60.26 as the server IP
-- Starting OpenShift container ...
Creating initial OpenShift configuration
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
Installing OpenShift – Firewall
● If you get this error when running oc cluster up:
-- Checking container networking ... FAIL
Error: containers cannot communicate with the OpenShift master
Details: The cluster was started. However, the container networking test failed.
Solution: Ensure that access to ports tcp/8443 and udp/53 is allowed on 10.7.60.26. You may need to open these ports on your machine's firewall.
Caused By:
Error: Docker run error rc=1
Details:
Image: openshift/origin:v1.5.1
Entrypoint: [/bin/bash]
Command: [-c echo 'Testing connectivity to master API' && curl -s -S -k https://10.7.60.26:8443 && echo 'Testing connectivity to master DNS server' && for i in {1..10}; do if curl -s -S -k https://kubernetes.default.svc.cluster.local; then exit 0; fi; sleep 1; done; exit 1]
Output: Testing connectivity to master API
Error Output: curl: (7) Failed connect to 10.7.60.26:8443; No route to host
● Run the following commands:
[root@docker-host ~]# oc cluster down
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 8443 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p udp --dport 53 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 53 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
[root@docker-host ~]# oc cluster up
Installing OpenShift Origin
● The installation succeeded; take note of the server URL:
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://10.7.60.26:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin
● Test logging in from the command line using the oc command:
[root@docker-host ~]# oc login -u system:admin
Logged into "https://10.7.60.26:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
default
kube-system
* myproject
openshift
openshift-infra
Using project "myproject".
Creating a Project
OpenShift: Test creating a new user and project
[root@docker-host ~]# oc login
Authentication required for https://10.7.60.26:8443 (openshift)
Username: test
Password: test
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
[root@docker-host ~]# oc new-project test-project
Now using project "test-project" on server "https://10.7.60.26:8443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
to build a new example application in Ruby.
Origin 1st App Deployment
OpenShift: Test creating a new app deployment
[root@docker-host ~]# oc new-app openshift/deployment-example
--> Found Docker image 1c839d8 (23 months old) from Docker Hub for "openshift/deployment-example"
* An image stream will be created as "deployment-example:latest" that will track this image
* This image will be deployed in deployment config "deployment-example"
* Port 8080/tcp will be load balanced by service "deployment-example"
* Other containers can access this service through the hostname "deployment-example"
* WARNING: Image "openshift/deployment-example" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "deployment-example" created
deploymentconfig "deployment-example" created
service "deployment-example" created
--> Success
Run 'oc status' to view your app.
OpenShift: Monitor your deployment
[root@docker-host ~]# oc status
In project test-project on server https://10.7.60.26:8443
svc/deployment-example - 172.30.96.17:8080
dc/deployment-example deploys istag/deployment-example:latest
deployment #1 deployed about a minute ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
Origin 1st App Deployment
OpenShift: List all resources from the 1st deployment
[root@docker-host ~]# oc get all
NAME DOCKER REPO TAGS UPDATED
is/deployment-example 172.30.1.1:5000/test-project/deployment-example latest 2 minutes ago
NAME REVISION DESIRED CURRENT TRIGGERED BY
dc/deployment-example 1 1 1 config,image(deployment-example:latest)
NAME DESIRED CURRENT READY AGE
rc/deployment-example-1 1 1 1 2m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/deployment-example 172.30.96.17 <none> 8080/TCP 2m
NAME READY STATUS RESTARTS AGE
po/deployment-example-1-jxctr 1/1 Running 0 2m
[root@docker-host ~]# oc get pod
NAME READY STATUS RESTARTS AGE
deployment-example-1-jxctr 1/1 Running 0 3m
[root@docker-host ~]# oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deployment-example 172.30.96.17 <none> 8080/TCP 3m
Origin 1st App Deployment
OpenShift: Accessing the 1st deployment from the web GUI
Log in to the OpenShift Origin web console using user/pass test/test.
Choose and click test-project.
Origin 1st App Deployment
OpenShift: The web console shows the 1st deployed app.
Origin 1st App Deployment
OpenShift: In the docker ps output, you can see the new Docker container running:
[root@docker-host ~]# docker ps | grep deployment-example
be02326a13be openshift/deployment-example@sha256:ea913--truncated--ecf421f99 "/deployment v1" 15 minutes ago Up 15 minutes k8s_deployment-example.92c6c479_deployment-example-1-jxctr_test-project_d2549bbf-4c6d-11e7-9946-080027b2e552_6eb2de05
9989834d0c74 openshift/origin-pod:v1.5.1 "/pod" 15 minutes ago Up 15 minutes k8s_POD.bc05fe90_deployment-example-1-jxctr_test-project_d2549bbf-4c6d-11e7-9946-080027b2e552_55ba483b
Origin 1st App Deployment
OpenShift: Test your 1st app using curl from inside the container host:
[root@docker-host ~]# oc status
In project test-project on server https://10.7.60.26:8443
svc/deployment-example - 172.30.96.17:8080
dc/deployment-example deploys istag/deployment-example:latest
deployment #1 deployed 23 minutes ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[root@docker-host ~]# curl 172.30.96.17:8080
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Deployment Demonstration</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
HTML{height:100%;}
BODY{font-family:Helvetica,Arial;display:flex;display:-webkit-flex;align-items:center;justify-content:center;-webkit-align-items:center;-webkit-box-align:center;-webkit-justify-content:center;height:100%;}
.box{background:#006e9c;color:white;text-align:center;border-radius:10px;display:inline-block;}
H1{font-size:10em;line-height:1.5em;margin:0 0.5em;}
H2{margin-top:0;}
</style>
</head>
<body>
<div class="box"><h1>v1</h1><h2></h2></div>
</body>
Origin 2nd App Deployment
Let's continue to the 2nd app deployment, but this time using the web console:
1) Click "Add to Project" in the web console
2) Choose "Import YAML / JSON"
3) Run the following commands:
[root@docker-host ~]# oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml
[root@docker-host ~]# cat myapp.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: ruby-hello-world
    name: ruby-22-centos7
-------TRUNCATED-------------
Origin 2nd App Deployment
1) Copy and paste all output from "cat myapp.yaml" into the web console and click the "Create" button.
Origin 2nd App Deployment
Check your deployment's progress from the web console.
Expose Your Apps
Expose your deployment using a domain name.
Expose Your Apps
Configure the static DNS file /etc/hosts on your laptop and allow ports 80 and 443 through the docker-host firewall:
root@nobody:/media/yusuf/OS/KSEI# cat /etc/hosts | grep xip.io
10.7.60.26 deployment-example-test-project.10.7.60.26.xip.io
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
[root@docker-host ~]# iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT
Access your application from your laptop's browser:
http://deployment-example-test-project.10.7.60.26.xip.io/
Expose Your Apps
Expose your second app using the command line:
[root@docker-host ~]# oc expose service ruby-hello-world
route "ruby-hello-world" exposed
[root@docker-host ~]# oc get routes | grep ruby
ruby-hello-world ruby-hello-world-test-project.10.7.60.26.xip.io ruby-hello-world 8080-tcp None
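The new route can be verified the same way as the first one; a sketch, assuming the xip.io name resolves (via real DNS or an /etc/hosts entry like the one on the previous slide):
[root@docker-host ~]# curl -s http://ruby-hello-world-test-project.10.7.60.26.xip.io/ | head -5   # should return the app's HTML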
Origin 3rd App Deployment
The next way to build OpenShift apps is using Source-to-Image (S2I); here are the steps:
1) Click "Add to project"
2) Choose the language, for example: PHP
3) Use the latest PHP version: 7.0-latest, and click Select
4) Input your app name, for example: my-php-app
5) In the Git Repo URL field, input: https://github.com/OpenShiftDemos/os-sample-php
6) Click Create to start the deployment
Origin 3rd App Deployment
Origin 3rd App Deployment
https://blog.shameerc.com/2016/08/my-docker-setup-ubuntu-php7-fpm-nginx-and-mariadb
OpenShift: Other Labs
- Image Stream Template
- Pipeline for CI/CD
Install the Default Image Streams
Enabling image streams using openshift-ansible:
[root@docker-host]# mkdir /SOURCE ; cd /SOURCE
[root@docker-host SOURCE]# git clone https://github.com/openshift/openshift-ansible.git
Cloning into 'openshift-ansible'...
remote: Counting objects: 53839, done.
remote: Compressing objects: 100% (47/47), done.
remote: Total 53839 (delta 26), reused 43 (delta 11), pack-reused 53775
Receiving objects: 100% (53839/53839), 14.12 MiB | 930.00 KiB/s, done.
Resolving deltas: 100% (32741/32741), done.
[root@docker-host SOURCE]# cd openshift-ansible/roles/openshift_examples/files/examples/latest/
[root@docker-host latest]# oc login -u system:admin -n default
[root@docker-host latest]# oadm policy add-cluster-role-to-user cluster-admin admin
[root@docker-host latest]# for f in image-streams/image-streams-centos7.json; do cat $f | oc create -n openshift -f -; done
[root@docker-host latest]# for f in db-templates/*.json; do cat $f | oc create -n openshift -f -; done
[root@docker-host latest]# for f in quickstart-templates/*.json; do cat $f | oc create -n openshift -f -; done
Install Jenkins Persistent
Install Jenkins using the image stream from openshift-ansible:
1) Click "Add to project"
2) In Browse Catalog, search for jenkins and select "Jenkins Persistent"
3) Leave everything at its default and click "Create"
Ref: https://github.com/openshift/origin/blob/master/examples/jenkins/README.md
CI/CD Demo – OpenShift Origin
This repository includes the infrastructure and pipeline definition for continuous delivery using Jenkins, Nexus and SonarQube on OpenShift. On every pipeline execution, the code goes through the following steps:
1) Code is cloned from Gogs, built, tested and analyzed for bugs and bad patterns
2) The WAR artifact is pushed to Nexus Repository Manager
3) A Docker image (tasks:latest) is built based on the Tasks application WAR artifact deployed on JBoss EAP 6
4) The Tasks Docker image is deployed in a fresh new container in the DEV project
5) If tests are successful, the DEV image is tagged with the application version (tasks:7.x) in the STAGE project
6) The staged image is deployed in a fresh new container in the STAGE project
CI/CD Demo – OpenShift Origin
The following diagram shows the steps included in the deployment pipeline:
CI/CD Demo – OpenShift Origin
Follow these instructions in order to create a local OpenShift cluster. Otherwise, using your current OpenShift cluster, create the following projects for the CI/CD components and the Dev and Stage environments:
1) # oc new-project dev --display-name="Tasks - Dev"
2) # oc new-project stage --display-name="Tasks - Stage"
3) # oc new-project cicd --display-name="CI/CD"
CI/CD Demo – OpenShift Origin
Jenkins needs to access the OpenShift API to discover slave images as well as to access container images. Grant the Jenkins service account enough privileges to invoke the OpenShift API for the created projects:
1) # oc policy add-role-to-user edit system:serviceaccount:cicd:jenkins -n dev
2) # oc policy add-role-to-user edit system:serviceaccount:cicd:jenkins -n stage
CI/CD Demo – OpenShift Origin
1) Create the CI/CD components based on the provided template:
oc process -f cicd-template.yaml | oc create -f -
2) To use custom project names, change cicd, dev and stage in the above commands to your own names and use the following to create the demo:
oc process -f cicd-template.yaml -v DEV_PROJECT=dev-project-name -v STAGE_PROJECT=stage-project-name | oc create -f - -n cicd-project-name
Note: you need ~8GB memory for running this demo.
Ref: https://github.com/isnuryusuf/openshift-cd-demo
CI/CD Demo – OpenShift Origin
Thank You for Coming
More about me:
- https://www.linkedin.com/in/yusuf-hadiwinata-sutandar-3017aa41/
- https://www.facebook.com/yusuf.hadiwinata
- https://github.com/isnuryusuf/
Join me on:
- The "Linux administrators" & "CentOS Indonesia Community" Facebook groups
- Docker UG Indonesia: https://t.me/dockerid
Other Useful Links
• https://ryaneschinger.com/blog/rolling-updates-kubernetes-replication-controllers-vs-deployments/
• https://kubernetes.io/docs/concepts/storage/persistent-volumes/
• http://blog.midokura.com/2016/08/kubernetes-ready-networking-done-midonet-way/
• https://blog.openshift.com/red-hat-chose-kubernetes-openshift/
• https://blog.openshift.com/chose-not-join-cloud-foundry-foundation-recommendations-2015/
• https://kubernetes.io/docs/concepts/workloads/pods/pod/
• https://blog.openshift.com/enterprise-ready-kubernetes/