Ondrej Sika (sika.io) | ondrej@sika.io | Docker Training 🚀💻
Write me an email to ondrej@sika.io
Freelance DevOps Engineer, Consultant & Lecturer
- Complete DevOps Pipeline
- Open Source / Linux Stack
- Cloud & On-Premise
- Technologies: Git, Gitlab, Gitlab CI, Docker, Kubernetes, Terraform, Prometheus, ELK / EFK, Rancher, Proxmox
Feel free to star this repository or fork it.
If you find a bug, create an issue or open a pull request.
Also feel free to propose improvements by creating issues.
For sharing links & "secrets".
- Slack - https://sikapublic.slack.com/
- Microsoft Teams
- https://sika.link/chat (tlk.io)
Docker is a platform for containerization — packaging applications and their dependencies into isolated, portable units called containers.
It provides a consistent environment from a developer's laptop all the way to production, eliminating the classic "works on my machine" problem.
A virtual machine (VM) is an abstraction of physical hardware. Each VM emulates a full hardware stack — from BIOS to network adapters, storage, and CPU — which allows you to run any operating system on your host. This flexibility comes at a cost: VMs are resource-heavy and slow to start.
Containers are an abstraction at the Linux kernel level. Instead of emulating hardware, they isolate processes using kernel namespaces (PID, network, mount, IPC, etc.) and limit resource usage with cgroups (control groups).
Because containers share the host kernel, they start in milliseconds and consume far less memory than VMs. The trade-off is that you cannot run a different OS or kernel version inside a container — all containers on a host share the same kernel.
| | Virtual Machines | Containers |
|---|---|---|
| Isolation | Full OS per VM | Kernel namespaces |
| Startup time | Minutes | Milliseconds |
| Size | GBs | MBs |
| Density | Tens per host | Hundreds per host |
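The namespace isolation described above is visible from any Linux shell: every process already belongs to a set of namespaces, and a container simply receives fresh ones. A quick sketch (Linux only):

```shell
# List the namespaces the current shell belongs to (Linux only).
# A containerized process would show different namespace IDs here.
ls /proc/self/ns

# cgroups live under /sys/fs/cgroup; each container gets its own subtree.
ls /sys/fs/cgroup 2>/dev/null | head || true
```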
Docker builds on two core concepts:
- Image — a read-only, layered snapshot of a filesystem and its metadata. An image is built once and can be shared via a registry. It is immutable: running it never changes it.
- Container — a running instance of an image. Multiple containers can run from the same image simultaneously, each isolated from the others with its own writable layer.
Think of an image as a class definition and a container as an object instantiated from it.
Docker is used throughout the entire software lifecycle:
- Development — every developer runs the same environment regardless of their OS; onboarding is `docker compose up` instead of a 20-step setup guide.
- Testing & CI — isolated containers make tests repeatable and parallel; each test run starts from a clean state.
- Production — the exact same image tested in CI is deployed to production, removing environment drift.
- Microservices — separate concerns into independently deployable services, each in its own container.
A single Docker host is not enough for production workloads that need high availability and horizontal scaling. Container orchestrators manage fleets of containers across multiple hosts:
- Kubernetes — the industry-standard orchestrator; handles scheduling, scaling, rolling updates, and self-healing.
- Docker Swarm — Docker's built-in orchestrator; largely superseded by Kubernetes.
The 12-Factor methodology defines 12 rules for building modern, cloud-native applications. Containers and Docker naturally enforce many of these factors — for example, storing config in environment variables (factor III), treating logs as event streams (factor XI), and keeping the dev/prod environment identical (factor X).
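Factor III (config in the environment) can be sketched in plain shell — the application reads its settings from environment variables rather than from files baked into the image (the variable name `DATABASE_URL` is just an illustration):

```shell
# The app reads its config from the environment, so the same image can run
# against dev, staging, or production just by changing variables.
DATABASE_URL="postgres://db:5432/app" sh -c 'echo "connecting to $DATABASE_URL"'
# → connecting to postgres://db:5432/app
```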
- Official installation - https://docs.docker.com/engine/installation/
- My install instructions (in Czech) - https://ondrej-sika.cz/docker/instalace/
- Bash Completion on Mac - https://blog.alexellis.io/docker-mac-bash-completion/
docker run hello-world
You can use a remote Docker daemon over SSH. Export DOCKER_HOST pointing at the remote host and your local Docker client will transparently execute commands there.
export DOCKER_HOST=ssh://root@docker.sikademo.com
docker version
docker info
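To switch back from the remote daemon to your local one, just unset the variable:

```shell
# DOCKER_HOST overrides the default local socket; unsetting it restores
# the local daemon for subsequent docker commands.
unset DOCKER_HOST
```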
https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker Official Docker plugin for VS Code
https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers Official Dev Containers plugin for VS Code
- `docker version` - print version
- `docker info` - system-wide information
- `docker system df` - Docker disk usage
- `docker system prune` - clean up unused data
- `docker volume prune --all` - clean up unused volumes (including named volumes)
- `docker pull <image>` - download an image
- `docker image ls` - list all images
- `docker image ls -q` - quiet output, just IDs
- `docker image ls <image>` - list image versions
- `docker image rm <image>` - remove image
- `docker image inspect <image>` - show image properties
Update all local images
docker image ls --format="{{.Repository}}:{{.Tag}}" | xargs -I {} docker pull {}
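The `--format`-plus-`xargs` pattern above can be previewed safely by substituting `echo` for `docker pull`, which prints the commands instead of running them:

```shell
# Dry run: show what would be pulled for a given list of image:tag pairs.
printf 'debian:12\nnginx:1.27\n' | xargs -I {} echo docker pull {}
# → docker pull debian:12
# → docker pull nginx:1.27
```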
A Docker image name also encodes the location of its source registry. These forms can be used:
- `debian` - official image on Docker Hub
- `ondrejsika/debian` - user (custom) image on Docker Hub
- `ghcr.io/ondrejsika/debian` - GitHub Container Registry
- `ttl.sh/debian` - image in my own registry
Official Images
Verified Publisher Images
ghcr.io
A Docker registry is built into GitLab and GitHub at no additional cost. You can find it in the Packages section.
For a self-hosted GitLab, you have to add registry_external_url to your GitLab config and reconfigure:
echo "registry_external_url 'registry.example.com'" >> /etc/gitlab/gitlab.rb
gitlab-ctl reconfigure
Harbor is an open-source container image registry under CNCF.
- https://github.com/google/go-containerregistry/tree/main/cmd/crane
- https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md
Mac
brew install crane
Linux (using slu)
slu install-bin crane
Copy an image from one registry to another without pulling it to the local machine.
crane copy <source> <destination>
example
crane copy ghcr.io/sikalabs/dev ttl.sh/dev
crane ls ghcr.io/sikalabs/slu
Add a new tag to an existing image in a registry without pulling it locally.
crane tag <image> <new tag>
example
crane tag ttl.sh/dev:latest ttl.sh/dev:2026-01-01
docker run [ARGS] <image> [<command>]
Basic Docker Run
docker run hello-world
With custom command
docker run debian cat /etc/os-release
docker run ubuntu cat /etc/os-release
With TTY & Standard Input
docker run -ti debian
- `docker container ls` - list containers
- `docker ps` - list containers
- `docker start <container>`
- `docker stop <container>`
- `docker restart <container>`
- `docker rm <container>` - remove container
- `--name <name>` - set container name (Wozniak easter egg)
- `-d` - run in detached mode
- `-ti` - map TTY and STDIN (e.g., for bash)
- `-e <variable>=<value>` - set ENV variable
By default, if a container process stops or fails, the container is stopped.
You can choose another behavior using the --restart <policy> argument.
- `--restart no` - never restart (default)
- `--restart on-failure` - restart only when the container exits with a non-zero exit code
- `--restart always` - always restart, even after a Docker daemon or host restart
- `--restart unless-stopped` - like `always`, but a manually stopped container stays stopped across daemon restarts
To limit the number of retries for the on-failure policy, use: --restart on-failure:<count>
- `docker container ls` - list running containers
- `docker container ls -a` - list all containers
- `docker container ls -a -q` - list IDs of all containers
or
- `docker ps` - list running containers
- `docker ps -a` - list all containers
- `docker ps -a -q` - list IDs of all containers
Example of -q
docker rm -f $(docker ps -a -q)
or my dra (docker remove all) alias
alias dra='docker ps -a -q | xargs -r docker rm -f'
dra
Or using slu:
slu s dra
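The `-r` (`--no-run-if-empty`) flag in the alias matters: without it, GNU `xargs` would still invoke `docker rm -f` once with no arguments when there are no containers. The behavior can be observed with `echo` standing in for `docker rm`:

```shell
# No input + -r: the command is not executed at all.
printf '' | xargs -r echo "would remove:"

# With input, the IDs are appended as arguments.
printf 'abc123\ndef456\n' | xargs -r echo "would remove:"
# → would remove: abc123 def456
```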
docker exec <container> <command>
Arguments
- `-d` - run in detached mode
- `-e <variable>=<value>` - set ENV variable
- `-ti` - map TTY and STDIN (e.g., for bash)
- `-u <user>` - run command as a specific user
Postgres 16 example
docker run --name pg16 -e POSTGRES_PASSWORD=pg -d postgres:16
docker exec -ti pg16 bash
docker exec -ti -u postgres pg16 psql
Postgres 17 example
docker run --name pg17 -e POSTGRES_PASSWORD=pg -d postgres:17
docker exec -ti pg17 bash
docker exec -ti -u postgres pg17 psql
All containers should log to STDOUT or STDERR.
Example of logging to STDOUT from a legacy application - examples/log_to_file
See also nginx logging to STDOUT here:
docker logs [-f] [-t] <container>
Args
- `-f` - follow output (similar to `tail -f ...`)
- `-t` - show time prefix
Examples
docker run --name loggen -d ghcr.io/sikalabs/slu:v0.95.2 slu loggen --json
docker logs loggen
docker logs -f loggen
docker logs -f loggen | jq .i
docker logs -f loggen | jq -r '"\(.i) \(.message)"'
docker logs -t loggen
docker logs -ft loggen
You can use native Docker logging or one of the alternative log drivers.
For example, if you want to log to syslog, use `--log-driver syslog`.
See the logging docs: https://docs.docker.com/config/containers/logging/configure/
Log Driver options:
- `max-size` - max size of a log file (default `-1`, unlimited); use for example `100k` for kB, `10m` for MB, or `1g` for GB
- `max-file` - number of rotated log files (default `1`)
- `compress` - compression for rotated logs (default `disabled`)
Example:
docker run --name log-rotation -d --log-opt max-size=5m --log-opt max-file=2 ghcr.io/sikalabs/slu:v0.95.2 slu loggen -s 5
You can set default log driver and options in /etc/docker/daemon.json
{
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}

See example: examples/filebeat
cd examples/filebeat
make
See the logs in the terminal; no Elastic output is configured.
See example: examples/promtail
Run Loki and Grafana first
cd examples/grafana_with_loki
make
Then run Promtail
cd examples/promtail
make
Go to Grafana: http://lab0.sikademo.com:3000, configure Loki data source and explore logs.
Get detailed information about a container in JSON.
docker inspect <container>
Using Go Template Language.
Examples:
docker inspect loggen --format "{{.NetworkSettings.Networks.bridge.IPAddress}}"
docker inspect loggen --format "{{.LogPath}}"
- Volumes are persistent data storage for containers.
- Volumes can be shared between containers, and data is written directly to the host.
CLI
- `docker volume` - all volume management commands
- `docker volume ls` - list all volumes
- `docker volume rm <volume>` - remove volume
- `docker volume prune` - remove all unused (not bound to a container) volumes
Examples
- `docker run -ti -v /data debian`
- `docker run -ti -v my-volume:/data debian`
- `docker run -ti -v $(pwd)/my-data:/data debian`
docker image inspect redis --format "{{.Config.Volumes|json}}"
docker image inspect postgres:11 --format "{{.Config.Volumes|json}}"
If you want to mount your volumes read-only, add `:ro` to the volume argument.
Examples
- `docker run -ti -v my-volume:/data:ro debian`
- `docker run -ti -v $(pwd)/my-data:/data:ro debian`
The first example doesn't make much sense as read-only.
docker ps -a --format '{{ .ID }}' | xargs -I {} docker inspect -f '{{ .Name }} ({{ .ID }}){{ printf "\n" }}{{ range .Mounts }}{{ printf "\n\t" }}{{ .Type }} {{ if eq .Type "bind" }}{{ .Source }}{{ end }}{{ .Name }} => {{ .Destination }}{{ end }}{{ printf "\n" }}' {}
docker ps -a --filter volume=<volume>
Example
docker ps -a --filter volume=my-volume
If you want to forward a socket into a container, you can also use a volume. Note that the read-only parameter does not apply to sockets.
docker run -v /var/run/docker.sock:/var/run/docker.sock docker docker ps
or
docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker
You can mount your host's rootfs into a container with root privileges. Everybody who has access to docker or the Docker socket effectively has root privileges on your host.
userns-remap can mitigate that.
docker run -v /:/rootfs -ti debian
Docker can remap the root user inside a container to a high-numbered user on the host.
More: https://docs.docker.com/engine/security/userns-remap/
dockerd argument
dockerd --userns-remap="default"
Config /etc/docker/daemon.json
{
"userns-remap": "default"
}

docker run -v /:/rootfs -ti debian cat /rootfs/etc/shadow
docker run -v /:/rootfs -ti --userns=host debian cat /rootfs/etc/shadow
Run Docker in Docker
docker run --name docker -d --privileged docker:dind
Try run any Docker command in this container:
docker exec docker docker info
docker exec docker docker image ls
docker exec docker docker run hello-world
docker exec -ti docker sh
Docker can forward a specific port from the container to the host:
docker run -p <host port>:<cont. port> <image>
You can specify an address on the host as well
docker run -p <host address>:<host port>:<cont. port> <image>
Examples
docker run -ti -p 8080:80 nginx
docker run -ti -p 127.0.0.1:8080:80 nginx
The latter makes connections possible only from localhost.
A Dockerfile is the preferred way to create images.
A Dockerfile defines each layer of the image with a command.
To build the image, use the `docker build` command.
- `FROM <image>` - define base image
- `RUN <command>` - run command and save the result as a layer
- `COPY <local path> <image path>` - copy a file or directory into an image layer
- `ADD <source> <image path>` - like COPY, but archives added by ADD are extracted
- `ENV <variable> <value>` - set ENV variable
- `USER <user>` - switch user
- `WORKDIR <path>` - change working directory
- `VOLUME <path>` - define volume
- `CMD <command>` - command to run on container start-up; the difference between `CMD` and `ENTRYPOINT` will be explained later
- `EXPOSE <port>` - document which port the container will listen on
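A minimal sketch tying several of these instructions together (the image, paths, and the app command are illustrative, not from a real project):

```dockerfile
# Base image
FROM debian:12-slim
# Set an ENV variable
ENV APP_ENV=production
# Change the working directory
WORKDIR /app
# Run a command and save the result as a layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*
# Copy the build context into the image
COPY . .
# Define a volume and document the listening port
VOLUME /app/data
EXPOSE 8000
# Command to run on container start-up
CMD ["python3", "app.py"]
```

Build it with `docker build -t my-app .` (the tag `my-app` is just an example).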
- Ignore files for the docker build process.
- Similar to `.gitignore`
Example of .dockerignore for Next.js (Node) project
Dockerfile
out
node_modules
.DS_Store
- `docker build <path> -t <image>` - build image
- `docker build <path> -f <dockerfile> -t <image>` - build with a specific Dockerfile
- `docker tag <source image> <target image>` - rename a Docker image
Use --platform for cross platform builds
Build AMD64
docker build --platform linux/amd64 .
Build ARM64 (for Apple Silicon)
docker build --platform linux/arm64 .
See Simple Image example
git clone https://github.com/ondrejsika/docker-training
cd docker-training/examples/simple-image
rm Dockerfile Dockerfile.debian
List images and see the difference in image sizes
docker image ls simple-image
Hadolint is a Dockerfile linter.
Github: https://github.com/hadolint/hadolint
Install on Mac
brew install hadolint
Install on Linux using slu
slu install-bin hadolint
Use hadolint
hadolint <dockerfile>

You can ignore checks & specify trusted registries
hadolint --ignore DL3003 --ignore DL3006 <dockerfile> # exclude specific rules
hadolint --trusted-registry registry.sikademo.com <dockerfile>

You can also use Hadolint from Docker
docker run --rm -i hadolint/hadolint < Dockerfile
docker run --rm -i hadolint/hadolint hadolint --ignore DL3006 - < Dockerfile
or (for PowerShell)
cat Dockerfile | docker run --rm -i hadolint/hadolint
cat Dockerfile | docker run --rm -i hadolint/hadolint hadolint --ignore DL3006 -
Example in Dockerfile:

ARG FROM_IMAGE=debian:9
FROM $FROM_IMAGE

FROM debian
ARG PYTHON_VERSION=3.7
RUN apt-get update && \
    apt-get install -y python=$PYTHON_VERSION

Build using:
docker build \
--build-arg FROM_IMAGE=python .
docker build .
docker build \
--build-arg PYTHON_VERSION=3.6 .
See Build Args example.
FROM java-jdk AS build
RUN gradle assembly
FROM java-jre
COPY --from=build /build/demo.jar .

# By default, the last stage is used
docker build -t <image> <path>
# Select output stage
docker build -t <image> --target <stage> <path>

Examples
docker build -t app .
docker build -t build --target build .
See Multistage Image example
cd ../multistage-image
rm Dockerfile
docker image ls multistage-image
Source: https://github.com/dotnet/dotnet-docker/tree/master/samples/aspnetapp
git clone https://github.com/dotnet/dotnet-docker.git
cd dotnet-docker/samples/aspnetapp
docker build -t dotnet-example .
docker run -ti -p 8000:80 dotnet-example
The idea of an entrypoint is to run something before the main command.
This is done by an "entrypoint" script which performs some initialization and then runs the main command.
The ENTRYPOINT and CMD instructions in a Dockerfile work together to define what command gets executed when running a container.
Dockerfile
ENTRYPOINT ["/entrypoint.sh"]
CMD ["myapp"]

entrypoint.sh
#!/bin/sh
echo "Running entrypoint script..."
exec "$@"

When the Docker container starts, it will execute /entrypoint.sh myapp. The script runs the initialization and then uses exec "$@" to replace itself with the myapp command.
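The init-then-exec pattern can be tried outside Docker; the sketch below (the file path and init message are illustrative) behaves exactly like the entrypoint above:

```shell
# Create a stand-in entrypoint script: initialize, then replace the shell
# process with whatever command was passed in (the CMD).
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/sh
echo "init: migrations, config templating, ..."
exec "$@"
EOF
chmod +x /tmp/entrypoint-demo.sh

# Equivalent of ENTRYPOINT ["/entrypoint.sh"] + CMD ["myapp"]:
/tmp/entrypoint-demo.sh echo "myapp is running"
# → init: migrations, config templating, ...
# → myapp is running
```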
Docker supports these network drivers:
- bridge (default)
- host
- none
- custom (bridge)
docker run debian:10 ip a
or using slu ipl
docker run ghcr.io/sikalabs/dev slu ipl
docker run --net host debian:10 ip a
or using slu ipl
docker run --net host ghcr.io/sikalabs/dev slu ipl
docker run --net none debian:10 ip a
or using slu ipl
docker run --net none ghcr.io/sikalabs/dev slu ipl
- `docker network ls`
- `docker network create <network>`
- `docker network rm <network>`
Example:
docker network create -d bridge my_bridge
Run & Add Containers:
# Run on network
docker run -d --net=my_bridge --name nginx nginx
docker run -d --net=my_bridge --name apache httpd
# Connect to network
docker run -d --name nginx2 nginx
docker network connect my_bridge nginx2

Test the network
docker run -ti --net my_bridge ondrejsika/host nginx
docker run -ti --net my_bridge ondrejsika/host apache
docker run -ti --net my_bridge ondrejsika/curl nginx
docker run -ti --net my_bridge ondrejsika/curl apache
If you need to assign IP addresses from your local network directly to containers, you have to use Macvlan.
https://docs.docker.com/network/macvlan/
docker network create -d macvlan \
--subnet=192.168.101.0/24 \
--ip-range=192.168.101.128/25 \
  --gateway=192.168.101.1 \
-o parent=eth0 macvlan
- Create the macvlan network
docker network create -d macvlan \
--subnet=192.168.54.0/24 \
--ip-range=192.168.54.128/25 \
--gateway=192.168.54.1 \
-o parent=eno1 \
  macvlan54

- (optional) Fix host <-> container communication
ip link add macvlan-host link eno1 type macvlan mode bridge
ip addr add 192.168.54.129/32 dev macvlan-host
ip link set macvlan-host up
ip route add 192.168.54.128/25 dev macvlan-host

- Run containers
docker run -d \
--network macvlan54 \
--ip 192.168.54.200 \
--name my-200 \
-e TEXT="192.168.54.200" \
  ghcr.io/sikalabs/hello-world-server

docker run -d \
--network macvlan54 \
--ip 192.168.54.201 \
--name my-201 \
-e TEXT="192.168.54.201" \
  ghcr.io/sikalabs/hello-world-server

- Test
curl http://192.168.54.200:8000
curl http://192.168.54.201:8000

Note: The ip link / ip route commands (step 2) are lost on reboot. To make them persistent, add them to /etc/network/interfaces or a systemd service.
ctop is a top-like interface for container metrics.
Mac
brew install ctop
Using slu
slu install-bin ctop
Official installation instructions: https://github.com/bcicen/ctop#install
ctop
or with Docker
docker run --rm -ti \
--name=ctop \
--volume /var/run/docker.sock:/var/run/docker.sock \
quay.io/vektorlab/ctop:latest
Portainer is a web UI for Docker.
Homepage: portainer.io
docker run -d --name portainer -p 8000:8000 -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
docker run \
-d \
--name portainer \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
--label=traefik.enable=true \
--label=traefik.frontend.rule=Host:portainer.lab0.sikademo.com \
--label=traefik.port=9000 \
--net traefik \
portainer/portainer
See: https://portainer.lab0.sikademo.com
https://github.com/ondrejsika/docker-compose-examples/tree/master/portainer
Nixery.dev provides ad-hoc container images that contain packages from the Nix package manager. Images with arbitrary packages can be requested via the image name.
More at https://nixery.dev/
docker run nixery.dev/hello hello
docker run -ti nixery.dev/htop htop
docker run -ti nixery.dev/shell/git/curl/mc bash
google/cadvisor (homepage)
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.
Install:
# use the latest release version from https://github.com/google/cadvisor/releases
VERSION=v0.49.2
docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 \
--detach=true \
--name=cadvisor \
--privileged \
--device=/dev/kmsg \
gcr.io/cadvisor/cadvisor:$VERSION
Check out:
- Web UI - http://127.0.0.1:8080/
- /metrics (prometheus) - http://127.0.0.1:8080/metrics
Run behind Traefik v1
VERSION=v0.46.0
docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--detach=true \
--name=cadvisor \
--privileged \
--device=/dev/kmsg \
--label=traefik.enable=true \
--label=traefik.frontend.rule=Host:cadvisor.lab0.sikademo.com \
--label=traefik.port=8080 \
--net=traefik \
gcr.io/cadvisor/cadvisor:$VERSION
See: https://cadvisor.lab0.sikademo.com
https://github.com/ondrejsika/docker-compose-examples/tree/master/cadvisor
- https://github.com/GoogleContainerTools/distroless
- examples/distroless_python
- examples/distroless_flask (multistage)
- distroless/examples/python3
- distroless/examples/python3-requirements
That's it. Do you have any questions? Let's go for a beer!
Docker Compose is a tool for defining and running multi-container Docker applications.
With Docker Compose, you use a Compose file to configure your application's services.
docker-compose is the old command; docker compose is the new one. Both behave the same.
docker-compose.yml is the old file name; compose.yml is the new one. Both are accepted.
docker compose version
services:
app:
build: .
ports:
- 8000:80
redis:
    image: redis

Here is a compose file reference: https://docs.docker.com/reference/compose-file/
Here is a nice tutorial for YAML: https://learnxinyminutes.com/docs/yaml/
Service is a container running and managed by Docker Compose.
docker compose up [ARGS] [<service>, ...]
Example
docker compose up
Just build, don't run
docker compose build
Build without cache
docker compose build --no-cache
Build with args
docker compose build --build-arg BUILD_NO=53
Just pull & run image
services:
app:
    image: redis

Simple, just the build path
services:
app:
    build: .

Extended form with every build configuration
services:
app:
build:
context: ./app
dockerfile: ./app/docker/Dockerfile
args:
BUILD_NO: 1
    image: ttl.sh/app

Inline Dockerfile
services:
example:
build:
dockerfile_inline: |
FROM debian:12-slim
        CMD ["echo", "Hello from inline Dockerfile"]

Platform-specific build
services:
example:
build: .
    platform: linux/amd64

services:
app:
ports:
- 8000:80
      - 127.0.0.1:80:80

Volumes are very similar, but there is a small difference
services:
app:
volumes:
- /data1
- data:/data2
- ./data:/data3
volumes:
  data:

services:
app:
    command: ["python", "app.py"]

services:
app:
environment:
RACK_ENV: development
SHOW: "true"
      SESSION_SECRET:

ENV Files
services:
app:
env_file:
- default.env
      - prod.env

Docker Compose uses standard shell variable substitution
services:
app:
image: ${IMAGE:-ondrejsika/go-hello-world:3}
services:
app:
image: ${IMAGE?Environment variable IMAGE is required}
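The same substitution rules can be tried directly in a POSIX shell, since Compose follows shell parameter-expansion syntax:

```shell
# ${VAR:-default} falls back when VAR is unset or empty:
unset IMAGE
echo "${IMAGE:-ondrejsika/go-hello-world:3}"
# → ondrejsika/go-hello-world:3

# Once IMAGE is set, its value wins:
IMAGE=redis:7
echo "${IMAGE:-ondrejsika/go-hello-world:3}"
# → redis:7

# ${VAR?message} aborts with the message when VAR is unset:
( unset IMAGE; echo "${IMAGE?Environment variable IMAGE is required}" ) || true
```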
x-base: &base
image: debian
command: ["env"]
services:
en:
<<: *base
environment:
HELLO: Hello
cs:
<<: *base
environment:
      HELLO: Ahoj

services:
app:
deploy:
      replicas: 4

name: "compose-name-example"

git clone https://github.com/ondrejsika/docker-training.git example--simple-compose
cd example--simple-compose/examples/simple-compose
rm Dockerfile docker-compose.yml

Try it without Docker Compose. Run the example:
docker build -t counter .
docker network create counter
docker run --name redis -d --net counter -v redis-data:/data redis
docker run --name counter -d --net counter -p 80:80 -e REDIS=redis counter
Stop & Remove
docker stop counter redis
docker rm counter redis
docker network rm counter
Now, we can create the Dockerfile and the Compose file manually.
Create Dockerfile:
FROM python:3.7-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EXPOSE 80

Try without Docker Compose
docker build -t counter .
docker network create counter
docker run --name redis -d --net counter redis
docker run --name counter -d --net counter -p 8000:80 counter
docker stop counter redis
docker rm counter redis
docker network rm counter
Create docker-compose.yml:
services:
counter:
build: .
image: ttl.sh/examples/simple-compose/counter
ports:
- ${PORT:-80}:80
depends_on:
- redis
redis:
image: redis
- `docker compose config` - validate & see the final Docker Compose YAML
- `docker compose ps` - see all of the composition's containers
- `docker compose exec <service> <command>` - run something in a container
- `docker compose version` - see the version of the Docker Compose binary
- `docker compose logs [-f] [<service>]` - see logs
- `-d` - run in detached mode
- `--force-recreate` - always create new containers
- `--build` - build on every run
- `--no-build` - don't build, even if images don't exist
- `--remove-orphans` - remove containers for services no longer defined in the Compose file
- `docker compose start [<service>]`
- `docker compose stop [<service>]`
- `docker compose restart [<service>]`
- `docker compose kill [<service>]`
docker compose up
- run all services (or multiple selected services)
- you can't specify the command, volumes, or environment from CLI arguments
docker compose run
- run only one service
- run dependencies in the background
- you can specify the command, volumes, and environment from CLI arguments
docker compose down
docker compose up --scale <service>=<n>
- https://traefik.io/traefik/
- https://github.com/traefik/traefik
- https://github.com/ondrejsika/ondrejsika-docker-traefik
- https://doc.traefik.io/traefik/reference/dynamic-configuration/docker/
If you want to override your compose.yaml (formerly docker-compose.yaml), you can use the -f parameter with multiple compose files. You can also create compose.override.yaml (formerly docker-compose.override.yaml), which is used automatically.
See compose_override example.
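A minimal sketch of an override file (the service name `app` and the values are illustrative): settings here are merged over the base compose.yaml, so a developer can remap a port or add a debug variable without touching the shared file.

```yaml
# compose.override.yaml — merged automatically over compose.yaml
services:
  app:
    ports:
      - 8001:80
    environment:
      DEBUG: "1"
```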
apt-get install podman
useradd -m -s /bin/bash podman
su - podman
podman run hello-world
podman run -p 8000:8000 ghcr.io/sikalabs/hello-world-server
The Shortnames project collects registry aliases that map short names to fully specified container image names.
https://github.com/containers/shortnames/blob/main/shortnames.conf
See your shortnames
cat /etc/containers/registries.conf.d/shortnames.conf
That's it. Do you have any questions? Let's go for a beer!
- email: ondrej@sika.io
- web: https://sika.io
- twitter: @ondrejsika
- linkedin: /in/ondrejsika/
- Newsletter, Slack, Facebook & Linkedin Groups: https://join.sika.io
Do you like the course? Write me a recommendation on Twitter (with handle @ondrejsika) and LinkedIn (add me /in/ondrejsika and I'll send you a request for a recommendation). Thanks.
Want to go for a beer or do some work together? Just book me :)
- https://github.com/sika-training-examples/2025-12-02_counter_project_docker_example - Docker Compose project with Gitlab CI
- https://github.com/sika-training-examples/2025-12-02_docker_caddy_cloudflare_example - Caddy with Cloudflare DNS challenge example (for infra behind VPN)
- https://github.com/ondrejsika/docker-training-example-platform (created on the course)
- https://github.com/sika-training-examples/2025-10-21_docker_compose_with_gitlab_ci
- https://github.com/sika-training-examples/2025-10-21_docker_compose_example
If you see an error like that, it may be caused by DNS server trouble.
You can check your DNS server using:
docker run debian cat /etc/resolv.conf
Or check if it works:
docker run ondrejsika/host google.com
You can fix it by setting Google or Cloudflare DNS to /etc/docker/daemon.json:
{ "dns": ["1.1.1.1", "8.8.8.8"] }