
Wednesday, November 4, 2015

Run a RHEL7 docker registry in a container

I've been doing some testing and research surrounding the RHEL7 docker-registry service, a private Docker registry server, and decided that it would be a good experiment to run it as a container.

To accomplish this, I had to determine how the docker-registry package and service run, and how they are configured.

So on my RHEL7 host I started by installing the docker-registry package from the extras repo:
yum install -y docker-registry

I took a look inside the systemd service unit for docker-registry, located at /usr/lib/systemd/system/docker-registry.service, since it holds everything required to start the process.

Contents of /usr/lib/systemd/system/docker-registry.service:
[Unit]
Description=Registry server for Docker

[Service]
Type=simple
Environment=DOCKER_REGISTRY_CONFIG=/etc/docker-registry.yml
EnvironmentFile=-/etc/sysconfig/docker-registry
WorkingDirectory=/usr/lib/python2.7/site-packages/docker-registry
ExecStart=/usr/bin/gunicorn --access-logfile - --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b ${REGISTRY_ADDRESS}:${REGISTRY_PORT} -w $GUNICORN_WORKERS docker_registry.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target

As you can see, there's a yaml config file and an environment file (overriding some settings from the yaml.)
There's a working directory, and a startup command.

So once I realized the /etc/sysconfig/docker-registry environment file was overriding settings in the /etc/docker-registry.yml, it was easy to move this into a Dockerfile to build out a container.

Contents of /etc/sysconfig/docker-registry:
# The Docker registry configuration file
# DOCKER_REGISTRY_CONFIG=/etc/docker-registry.yml

# The configuration to use from DOCKER_REGISTRY_CONFIG file
SETTINGS_FLAVOR=local

# Address to bind the registry to
REGISTRY_ADDRESS=0.0.0.0

# Port to bind the registry to
REGISTRY_PORT=5000

# Number of workers to handle the connections
GUNICORN_WORKERS=8

As you'll see, I moved the environment file settings into Docker ENV lines.  I am also overriding where docker-registry stores its data, and making the registry searchable.  When I run the container, I'll bind mount a path from my docker host into the docker-registry container, so the data survives when the container is shut off and is available again when I restart it.

I'm not running yum update -y because there have been failing dependencies on systemd-libs packages in the latest published images.

Contents of mydocker-registry Dockerfile:
FROM rhel7:latest
RUN yum --enablerepo=rhel-7-server-extras-rpms install docker-registry -y && \
    yum clean all;
EXPOSE 5000
ENV DOCKER_REGISTRY_CONFIG /etc/docker-registry.yml
ENV SETTINGS_FLAVOR local
ENV REGISTRY_ADDRESS 0.0.0.0
ENV REGISTRY_PORT 5000
ENV GUNICORN_WORKERS 8
ENV SEARCH_BACKEND sqlalchemy
ENV STORAGE_PATH /mnt/registry
WORKDIR /usr/lib/python2.7/site-packages/docker-registry
CMD /usr/bin/gunicorn --access-logfile - --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b ${REGISTRY_ADDRESS}:${REGISTRY_PORT} -w $GUNICORN_WORKERS docker_registry.wsgi:application

Build it:
docker build -t mydocker-registry .

Run it:
docker run --name docker-registry --rm -v /mypath/docker-registry-storage:/mnt/registry:Z -p 5000:5000 mydocker-registry
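
Before tagging and pushing, you can confirm the registry is answering.  The v1 Python registry used here should respond on its _ping endpoint (the path may differ on other registry versions):
curl http://localhost:5000/v1/_ping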

Now you can tag and push to it.

Tag one of your other images:
Run 'docker images' and note one of the image IDs.

docker tag <image id> localhost:5000/mynewtag

Push it:
docker push localhost:5000/mynewtag

Search for it:
docker search localhost:5000/

This should return the image you just pushed into the registry.


This is an insecure docker-registry setup, and authentication is not configured.  To pull from or push to this registry from another docker host, you must make a change on that system.

To push/pull/search from another docker host:
Edit /etc/sysconfig/docker
Uncomment the line:
INSECURE_REGISTRY='--insecure-registry'

Modify the line to contain:
INSECURE_REGISTRY='--insecure-registry <the ip/hostname where your docker-registry container runs>:5000'

Save it and run: 'systemctl restart docker'

Now you can do a 'docker pull <ip/hostname where your docker-registry container runs>:5000/mynewtag'

You can also take the container one step further and configure it as a systemd service unit.
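
Here's a minimal sketch of what that unit could look like, following the pattern covered later in this series (the unit file name is hypothetical; the run command and paths are the examples used above):

Contents of /etc/systemd/system/docker.docker-registry.service:
[Unit]
Description=Docker Registry Container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop docker-registry
ExecStartPre=-/usr/bin/docker rm docker-registry
ExecStart=/usr/bin/docker run --name docker-registry --rm -v /mypath/docker-registry-storage:/mnt/registry:Z -p 5000:5000 mydocker-registry

[Install]
WantedBy=multi-user.target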

Monday, November 2, 2015

Real-world Docker Series: Conclusion

I hope you've enjoyed all the shared knowledge on getting real-world docker hosts configured and understanding some of the considerations with systems administration and docker.  This series was intended to be a quick reference for some of the deeper concerns with bringing docker containers to a production environment, and a collection of knowledge that was at times hard to put together.

What I've laid out in this docker series would work in production for a small set of docker hosts and the containers residing on each host.  Think of each docker host as VMware ESX without vCenter.

If you need something more robust with orchestration, clustering, load-balancing, and scaling features, then you will need to start looking at Kubernetes.  Kubernetes is a real-world answer to all of these problems.  Red Hat takes it even further with products like OpenShift v3 and the upcoming Atomic Enterprise Platform.

Real-world Docker Series: Using selinux with docker

Docker can safely and easily run in conjunction with selinux.  To ensure you're setup for selinux support, check the following:
'getenforce' - Are you enforcing/permissive/disabled?  You should be enforcing.  If not, run 'setenforce 1'.
'yum list installed docker-selinux*' - If nothing returns, then you're missing the selinux components and need to install them.  Run 'yum install -y docker-selinux' to resolve.
'cat /etc/sysconfig/docker | grep OPTIONS' - You should see 'OPTIONS='--selinux-enabled''.  If not, make the change and restart the docker daemon: 'systemctl restart docker'.

Docker works auto-magically with selinux to enhance your system's security.  The only thing you need to do to work properly with the tool is understand the switches involved in bind mounting storage.  Visit the Bind Mounting Storage & Ports and Working with NFS mounts articles if you haven't already to understand the caveats with selinux and docker storage.

Next: Conclusion

Real-world Docker Series: Run Dedicated Containers as systemd services

Since running docker run commands to start up many containers when a docker host comes online would become impractical and extremely tedious, we can move these commands into systemd service unit files.

To create your own systemd service unit files, you must store the files in /etc/systemd/system.
It's a good practice to name them by prefixing the service name with docker to easily identify what they are later.
Ex: /etc/systemd/system/docker.<containername>.service

Here is an example of a systemd service for a container:
[Unit]
Description=Nginx Container 1
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop Cont1
ExecStartPre=-/usr/bin/docker rm Cont1
ExecStart=/usr/bin/docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z -p 80:80 nginx

[Install]
WantedBy=multi-user.target

If you're unfamiliar with systemd services, there are a few things to notice here.
After: This makes sure the service is started after the docker service.
Requires: This service will not run if there is no docker service.
TimeoutStartSec: 0 means we will not time out when trying to start this service.
Restart: Start the service's process (ExecStart) if it ends.
ExecStartPre: You can have many “pre” commands. Notice the '-' at the beginning of the command. This tells the service that if this command fails, move on and do not fail at starting the service. This is important mainly to the restart aspect of this service. The first “pre” command ensures that the container named Cont1 is stopped. The second ensures any orphaned container named Cont1 is removed.
ExecStart: This is where your docker run command goes, and is the main process that systemd will be watching.
[Install] and WantedBy are replacements for the old chkconfig runlevels. multi-user.target is the equivalent of run level 3. When we run 'systemctl enable docker.cont1.service', it will be installed (enabled) at run level 3.
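
After saving the unit file, a minimal activation sequence would look like this (assuming the docker.cont1.service file name from the convention above):
systemctl daemon-reload
systemctl enable docker.cont1.service
systemctl start docker.cont1.service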

Caveats:
docker stop <container name> will stop the container temporarily, but systemd will automatically restart the container based on the service we created.  Test by running 'docker stop <container name>', then run 'docker ps' and notice how long the container has been up.

To truly stop the running container, do so by stopping the associated service you created for it.

 systemctl stop docker.<containername>.service


This is where you may want to introduce some of the power of cgroups (control groups.)  Cgroups allow the administrator to control how much of the host's resources are consumed by child processes.  You would add any restrictions for the container in its service unit.  This is outside the scope of this series, but you should certainly check out Red Hat's documentation on cgroups as they pertain to RHEL7, since the implementation changed significantly with systemd.
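
As a small, hedged example, systemd on RHEL7 exposes cgroup controls directly as unit directives, so resource restrictions for the container's service could look like this in its [Service] section (the values are illustrative only; consult the Red Hat documentation before tuning):

[Service]
# Relative CPU weight (the systemd default is 1024)
CPUShares=512
# Hard memory ceiling for the service's processes
MemoryLimit=1G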

Next: How to use selinux with docker

Real-world Docker Series: NFS Tree/Hierarchy considerations

When building out the directory structure/tree design, it is important to do this in a sensible way.
Something like:

NFS1
|
|_HOST1
| |_Container1
| | |_webapp
| | |_logs
| |_Container2
| | |_webapp
| | |_logs
|_HOST2
| |_Container3
| | |_webapp
| | |_logs
| |_Container4
| | |_webapp
| | |_logs
|_HOST3
| |_Container5
| | |_webapp
| | |_logs
| |_Container6
| | |_webapp
| | |_logs
|_HOST4
| |_Container7
| | |_webapp
| | |_logs
| |_Container8
| | |_webapp
| | |_logs
|_SharedContent



In this example tree design, we're assuming each container runs a web application with a different code base.  If they were all running the same application, you could, for example, omit the per-container webapp directory and instead mount content placed in the SharedContent directory off the root of the NFS mount into each container's web root.

Think this through when building out your environment.  Systems that perform orchestration are much better suited to what we're showing here, and you should definitely read about Kubernetes.

Real-world Docker Series: Working with NFS Mounts

After seeing how to bind mount storage, you're probably wondering, “How can I store data from a container on a NFS mounted device?”

There are 2 ways to accomplish this properly with selinux:
1.) There is a selinux boolean: virt_sandbox_use_nfs
To check the status of this boolean, you can run:
getsebool virt_sandbox_use_nfs
If the status of the boolean is off, then you can turn it on by running:
setsebool -PV virt_sandbox_use_nfs on #Persistent and Verbose on Errors
Now run getsebool virt_sandbox_use_nfs again to verify it's now on.

When bind mounting storage on the NFS mount, you will now need to drop the :z and :Z options.

This now allows the containers to be able to access any of the docker host's mounted NFS volumes when directed to.
2.) Setting the appropriate selinux file context as a mount option. This is accomplished by adding the selinux context required for docker container data to the /etc/fstab NFS mount options.
vi /etc/fstab and find the appropriate NFS mount. Append context="system_u:object_r:svirt_sandbox_file_t" to the entry's options and save the fstab.
Unless you are running an NFS v4.2 server and an NFS v4.2 client, you will need to drop the :z and :Z options from your docker run command; NFS v4.2 is the first version that properly supports and stores file contexts.
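
For example, the finished entry might look something like this (the server name and paths are placeholders; note that a full context usually includes the :s0 sensitivity level):

nfsserver:/export/docker  /mnt/nfs-docker  nfs  defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"  0 0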

Method 2 is considered more secure, since you are allowing possible access to only a specified NFS volume rather than all of them as seen in method 1.



Real-world Docker Series: Bind Mounting Persistent Storage & Ports

Bind Mounting Storage:
Docker containers do not retain data once they've stopped running and been removed. If you want to keep any data generated by a container, you must bind mount storage from the docker host into the container.

To accomplish this, all you need is a valid path on the docker host. When running the container, you will specify where to bind this path.  If you are already familiar with virtualization technology, think of this as assigning a new virtual disk to a virtual machine, and then mounting it on a particular path.  With bind mounting, we're simply mounting a path on the host to a path on the container.

When using selinux, you must specify the :z option for a path shared by many containers (ex: a web application's www root.) If the data is specific to a single container (think log files in /var), you will use the :Z option. You can have multiple volume bind mounts.

Run:
docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z nginx
Command Breakdown:
--name Cont1 is the unique run-time name we've assigned to the container (in this example, one based on the nginx image).
--rm Remove the container after its work is done (i.e., the process it performs ends).
-v Volume: /root/cont1 (path on the docker host) is mounted at /usr/share/nginx/html (path in the container).

From this example you see that we bind mounted /root/cont1 into the Cont1 container, which runs the nginx image.

There is also --volumes-from=<container id> (obtain the container id from 'docker ps'), which bind mounts all the mounted volumes from a running container into another container.  You can specify :ro (read-only) or :rw (read-write) to override the source container's mount settings.  The default is to inherit the settings of the volumes from the container you're bind mounting from.
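
As a quick, hypothetical example using the names from this series (and assuming a local rhel7 image with tar available), a throwaway container can reach Cont1's volumes to archive them:

docker run --rm --volumes-from Cont1 -v /root/backup:/backup:Z rhel7 tar cvf /backup/cont1-html.tar /usr/share/nginx/html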

Bind Mounting Ports:
Docker container ports are not automatically exposed.  Containers run on a private docker network bridge, so based on the configuration of the container's process, you will need to map a docker host port to the container port where the process is running.  Docker will automatically create the necessary iptables rules for you, based upon your port binding command.

Port binding is completed with the -p switch.  See your container's documentation to configure a particular service port.  In this example we're using the nginx container from hub.docker.com, and the default port on the container is 80.

We'll add on to what we did in the storage bind mounting section.
Run: docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z -p 80:80 nginx
Command Breakdown:
Notice the added -p switch:
-p Port bind mounting. The first 80 is the docker host port (where the service will be accessible); the second 80 is the container's port.  We're mapping port 80 on the host to port 80 in the container to expose the service outside the docker host.

Now to check that you can access what the container is serving, visit your docker host's IP on port 80.
ex: http://192.168.122.5 in a web browser will work just fine.  If following this example, make sure you have some content (index.html) in /root/cont1 on your docker host.
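
For a quick end-to-end check from the docker host itself (using the example address above):
echo 'Hello from Cont1' > /root/cont1/index.html
curl http://192.168.122.5/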

Thursday, October 29, 2015

Real-world Docker Series: Installing Docker

Since this is intended to be a real-world Docker series, we're focusing on Red Hat Enterprise Linux 7.  CentOS7 will be very similar, but make sure you understand the differences between the OSes.

The first thing to do is to make sure you have the extras repository enabled.
yum repolist enabled|grep server-extras
!rhel-7-server-extras-rpms/x86_64                    Red Hat Enterprise L   112

If you don't see it listed, run:
subscription-manager repos --enable=rhel-7-server-extras-rpms

Run the first command again, to ensure the repository is now enabled.


Installing Docker:
sudo yum install docker -y

This will install docker and its dependencies, which should include the following:
docker.x86_64                    1.7.1-115.el7           @rhel-7-server-extras-rpms
docker-selinux.x86_64            1.7.1-115.el7           @rhel-7-server-extras-rpms
docker-logrotate.x86_64          1.7.1-115.el7           rhel-7-server-extras-rpms
docker-python.x86_64             1.4.0-115.el7           rhel-7-server-extras-rpms
docker-registry.noarch           0.6.8-8.el7             rhel-7-server-extras-rpms
docker-registry.x86_64           0.9.1-7.el7             rhel-7-server-extras-rpms
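
You can sanity-check the install from the command line before starting anything (the daemon does not need to be running for these; hold off on starting it until storage is configured in the next post):
rpm -q docker docker-selinux
docker --version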

Next: Configuring Docker Storage

Real-world Docker Series: Tagging Images

Once a container image has been loaded (or pulled), you can tag it to make it more useful to your project. A tag is very similar to a repository tag when working with a version control system. You'll want a useful tag name to be able to manage your images easily later.

Run 'docker images'
Notice the image id listed. We'll need that to tag the image.

Run 'docker tag <image id> <your tag name>'
Verify your tag by running 'docker images' again.  It will now appear with the tag you provided.
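
A concrete run-through might look like this (the image id and tag name are made up; substitute your own from 'docker images'):
docker images
docker tag 49f7960eb7e4 mywebapp:v1
docker images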

A single docker container image can be used by many running containers. This is done by making use of the --name <desired running name> flag during a 'docker run'.  We'll go over this in detail later.

Next: Bind Mounting Persistent Storage & Ports

Real-world Docker Series: Loading Pre-Built Container Images

When provided with a pre-built container image (probably from a developer) outside of a docker registry, you can use the docker load command to import it.

Run:

 docker load -i <container package>.tar

Verify the container image was loaded:
docker images

You will see an image listed with an id only.

You can also check out hub.docker.com to 'docker pull' pre-built images.
After pulling a docker image, you will also see them listed with 'docker images'.
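
For example, to pull the nginx image used later in this series and confirm it's available:
docker pull nginx
docker images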

Next: Tagging Container Images

Real-world Docker Series: Intro

If you found your way to my little blog, then you have probably already heard of Docker, the container engine.  I've been doing a great deal of work surrounding Docker in my profession, and this is all so new, it's hard to get your finger on the pulse of where to get started, and where to go next.  This is an attempt to take you from start to finish on setting up a real-world Docker environment.

With this series of posts, I plan to focus on the administration side of things, mainly focusing on configuration and best-practices surrounding Docker.  We'll cover more than the basics found at the Docker Getting Started Page, and provide real-world examples of using containers.  For now we're focusing on just Docker, and as time passes I'll put together some posts on using Kubernetes to control and scale clustered container environments.

Next: Installing Docker

Real-world Docker Series: Configuring Docker Storage

Once the docker package and dependencies have been installed, you will want to configure storage for your containers.

Docker storage is intended to store your container images.  When you run 'docker pull <image name>', docker will store the data in this space.  Container images are typically small, usually about 300-600MB.

Docker, by default, utilizes loopback storage devices. These create a virtual device at /var/lib/docker/devicemapper and use local storage. Due to performance degradation with loopback devices, the recommended method of container storage is to utilize the docker thin pool. Take a look at the contents of /etc/sysconfig/docker-storage.  This will change after configuring the docker-pool.

The docker thin pool can make use of any block device to create a thin logical volume, and can be configured to automatically grow when new space is added to its volume group (default.)  If you have worked with VMware ESX, you'll get the idea of how a thin volume works.

To configure the docker thin pool, we will use the docker-storage-setup file. First ensure you have a new block device (disk) added to the docker host. When added, get the device name by doing a 'fdisk -l' and identify the appropriate device.

DO NOT START THE DOCKER DAEMON YET

Replacing /dev/sdX with your block device found from 'fdisk -l':
sudo vi /etc/sysconfig/docker-storage-setup
Add the following:
DEVS="/dev/sdX" #can be a comma separated list (replace /dev/sdX with the device identified with fdisk -l)
VG=vg_docker #volume group name that docker will generate for the docker-pool
Save the file.

The docker-storage-setup script will automatically generate the appropriate thin logical volume and volume group when the docker daemon starts, or when you run docker-storage-setup manually. ****NOTE: There's currently a bug with docker-storage-setup, and the block device must be 8GB or greater.
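
If you'd rather not wait for the daemon, the manual invocation is just the script name:
sudo docker-storage-setup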

Start the docker daemon:
sudo systemctl start docker

If you'd like, you can now enable the docker service to start at boot:
sudo systemctl enable docker

Because the docker-pool is an LVM thin volume, you will not see it when running 'df -h'.  To verify the volume has been configured:

Run 'lvs'
Verify there is a new Logical Volume with the name docker-pool.

Run 'vgdisplay'
Verify there is a new Volume Group with the name you specified in docker-storage-setup (vg_docker from above.)

Run 'docker info'
This will display detailed info on how the block device was broken up between metadata and data volumes, and show you the available data storage.  To learn more about thin Logical Volumes, run 'man lvmthin'.

Once you have confirmed that everything looks good, you should remove the /etc/sysconfig/docker-storage-setup file.  If you don't, I've seen the docker service fail to start in some instances; the logs will show it's due to existing partitions on the block device specified in /etc/sysconfig/docker-storage-setup.


Take a look at /etc/sysconfig/docker-storage.  You will see that this was automatically configured to utilize the new docker-pool.

Next: Loading Pre-built Docker Container Images