Wednesday, November 4, 2015

Run a RHEL7 docker registry in a container

I've been doing some testing and research surrounding the RHEL7 docker-registry service, a private Docker registry server, and decided that it would be a good experiment to run it as a container.

To accomplish this, I had to determine how the docker-registry package and service run, and how they are configured.

So on my RHEL7 host I started by installing the docker-registry package from the extras repo:
yum install -y docker-registry

I took a look inside the systemd service unit for docker-registry, located at /usr/lib/systemd/system/docker-registry.service, since all the requirements for starting the process are held there.

Contents of /usr/lib/systemd/system/docker-registry.service:
[Unit]
Description=Registry server for Docker

[Service]
Type=simple
Environment=DOCKER_REGISTRY_CONFIG=/etc/docker-registry.yml
EnvironmentFile=-/etc/sysconfig/docker-registry
WorkingDirectory=/usr/lib/python2.7/site-packages/docker-registry
ExecStart=/usr/bin/gunicorn --access-logfile - --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b ${REGISTRY_ADDRESS}:${REGISTRY_PORT} -w $GUNICORN_WORKERS docker_registry.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target

As you can see, there's a yaml config file and an environment file (overriding some settings from the yaml.)
There's a working directory, and a startup command.

So once I realized the /etc/sysconfig/docker-registry environment file was overriding settings in /etc/docker-registry.yml, it was easy to move this into a Dockerfile to build out a container.

Contents of /etc/sysconfig/docker-registry:
# The Docker registry configuration file
# DOCKER_REGISTRY_CONFIG=/etc/docker-registry.yml

# The configuration to use from DOCKER_REGISTRY_CONFIG file
SETTINGS_FLAVOR=local

# Address to bind the registry to
REGISTRY_ADDRESS=0.0.0.0

# Port to bind the registry to
REGISTRY_PORT=5000

# Number of workers to handle the connections
GUNICORN_WORKERS=8

As you'll see, I moved the environment file settings into Dockerfile ENV lines. I'm also overriding where docker-registry stores its data and making the registry searchable. When I run the container, I'll bind mount a path from my docker host into the docker-registry container. When the container is shut off, my data will still be there, and it will come back when I restart my docker-registry container.
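
If the host path doesn't exist yet, create it first (docker of this era will create a missing host path for you, but creating it yourself keeps ownership explicit; /mypath/docker-registry-storage is just the example path used in the run command below):

mkdir -p /mypath/docker-registry-storage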

I'm not running yum update -y, because there have been some failing dependencies on systemd-libs packages in the latest published containers.

Contents of mydocker-registry Dockerfile:
FROM rhel7:latest
RUN yum --enablerepo=rhel-7-server-extras-rpms install docker-registry -y && \
    yum clean all;
EXPOSE 5000
ENV DOCKER_REGISTRY_CONFIG /etc/docker-registry.yml
ENV SETTINGS_FLAVOR local
ENV REGISTRY_ADDRESS 0.0.0.0
ENV REGISTRY_PORT 5000
ENV GUNICORN_WORKERS 8
ENV SEARCH_BACKEND sqlalchemy
ENV STORAGE_PATH /mnt/registry
WORKDIR /usr/lib/python2.7/site-packages/docker-registry
CMD /usr/bin/gunicorn --access-logfile - --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b ${REGISTRY_ADDRESS}:${REGISTRY_PORT} -w $GUNICORN_WORKERS docker_registry.wsgi:application

Build it:
docker build -t mydocker-registry .

Run it:
docker run --name docker-registry --rm -v /mypath/docker-registry-storage:/mnt/registry:Z -p 5000:5000 mydocker-registry

Now you can tag and push to it.

Tag one of your other images:
Run: 'docker images'
Note one of the image IDs.

docker tag <image id> localhost:5000/mynewtag

Push it:
docker push localhost:5000/mynewtag

Search for it:
docker search localhost:5000/

This should return the image you just pushed into the registry.
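
If you'd rather verify over HTTP, the v1 registry exposes the same search endpoint that 'docker search' uses (a quick sanity check; the exact JSON returned depends on your registry version):

curl 'http://localhost:5000/v1/search?q=mynewtag'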


This is an insecure docker-registry setup, and authentication is not configured.  To pull from or push to this registry from another docker host, you must make a change on that system.

To push/pull/search from another docker host:
Edit /etc/sysconfig/docker
Uncomment the line:
INSECURE_REGISTRY='--insecure-registry'

Modify the line to contain:
INSECURE_REGISTRY='--insecure-registry <the ip/hostname where your docker-registry container runs>:5000'

Save it and run: 'systemctl restart docker'

Now you can do a 'docker pull <ip/hostname where your docker-registry container runs>:5000/mynewtag'

You can also take the container one step further and configure it as a systemd service unit.
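
As a quick sketch following the same pattern covered in the 'Run Dedicated Containers as systemd services' article (the unit name and bind mount path here are just my examples), something like this saved as /etc/systemd/system/docker.registry.service would do it:

[Unit]
Description=Docker Registry Container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop docker-registry
ExecStartPre=-/usr/bin/docker rm docker-registry
ExecStart=/usr/bin/docker run --name docker-registry --rm -v /mypath/docker-registry-storage:/mnt/registry:Z -p 5000:5000 mydocker-registry

[Install]
WantedBy=multi-user.target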

Monday, November 2, 2015

Real-world Docker Series: Conclusion

I hope you've enjoyed this shared knowledge on configuring real-world docker hosts and understanding some of the considerations of systems administration with docker.  This series was intended as a quick reference for some of the deeper concerns with bringing docker containers to a production environment, and as a collection of knowledge that was at times hard to put together.

What I've laid out in this docker series would work in production for a small set of docker hosts and the containers residing on each host.  Think of each docker host as VMware ESX without vCenter.

If you need something more robust with orchestration, clustering, load-balancing, and scaling features, then you will need to start looking at Kubernetes.  Kubernetes is a real-world answer to all of these problems.  Red Hat takes it even further with products like OpenShift v3 and the upcoming Atomic Enterprise Platform.

Real-world Docker Series: Using selinux with docker

Docker can safely and easily run in conjunction with selinux.  To ensure you're set up for selinux support, check the following:
'getenforce' - Are you enforcing/permissive/disabled?  You should be enforcing.  If not, run 'setenforce 1'.
'yum list installed docker-selinux*' - If nothing returns, you're missing the selinux components and need to install them.  Run 'yum install -y docker-selinux' to resolve.
'cat /etc/sysconfig/docker | grep OPTIONS' - You should see OPTIONS='--selinux-enabled'.  If not, make the change and restart the docker daemon: 'systemctl restart docker'.

Docker works auto-magically with selinux to enhance your system's security.  The only thing you need to do to work properly with the tool is understand the switches involved with bind mounting storage.  Visit the Bind Mounting Storage & Ports and Working with NFS Mounts articles if you haven't already to understand the caveats with selinux and docker storage.

Next: Conclusion

Real-world Docker Series: Run Dedicated Containers as systemd services

Since manually running docker run commands to start many containers each time a docker host comes online would be impractical and extremely tedious, we can move these commands into systemd service unit files.

To create your own systemd service unit files, you must store the files in /etc/systemd/system.
It's a good practice to name them by prefixing the service name with docker to easily identify what they are later.
Ex: /etc/systemd/system/docker.<containername>.service

Here is an example of a systemd service for a container:
[Unit]
Description=Nginx Container 1
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop Cont1
ExecStartPre=-/usr/bin/docker rm Cont1
ExecStart=/usr/bin/docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z -p 80:80 nginx

[Install]
WantedBy=multi-user.target

If you're unfamiliar with systemd services, there are a few things to notice here.
After: This makes sure the service is started after the docker service.
Requires: This will not run if there is no docker service.
TimeoutStartSec: 0 means we will not time out when trying to start this service.
Restart: Start the service's process (ExecStart) again if it ends.
ExecStartPre: You can have many "pre" commands. Notice the '-' at the beginning of the command. This tells the service that if this command fails, move on and do not fail at starting the service. This is important mainly to the restart aspect of this service. The first "pre" command ensures that the container named Cont1 is stopped. The second ensures any possible orphaned container named Cont1 is removed.
ExecStart: This is where your docker run command goes, and is the main process that systemd will be watching.
[Install] and WantedBy are replacements for the old chkconfig runlevels. multi-user.target is the equivalent of run level 3. When we run 'systemctl enable docker.cont1.service', the service is installed (enabled) for that target.
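
After creating or editing the unit file (assuming you saved it as /etc/systemd/system/docker.cont1.service), reload systemd, then enable and start the service:

systemctl daemon-reload
systemctl enable docker.cont1.service
systemctl start docker.cont1.service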

Caveats:
docker stop <container name> will stop the container only temporarily; systemd will automatically restart the container based on the service we created. Test by running docker stop <container name>, then run docker ps and notice the time the container has been up.

To truly stop the running container, do so by stopping the associated service you created for it.

systemctl stop docker.<containername>.service


This is where you may want to introduce some of the power of cgroups (control groups).  Cgroups allow the administrator to control how much of the host's resources are taken up by child processes.  You would want to add any resource restrictions for the container in its service unit.  This is outside the scope of this series, but you should certainly check out Red Hat's documentation on cgroups as they pertain to RHEL7, since the implementation has changed significantly with systemd.
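
As a rough sketch only (verify the exact directives available for your systemd and docker versions), limits can go right in the unit's [Service] section, or on the docker run command itself:

[Service]
MemoryLimit=512M
CPUShares=512

The roughly equivalent docker run switches would be --memory=512m and --cpu-shares=512.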

Next: How to use selinux with docker

Real-world Docker Series: NFS Tree/Hierarchy considerations

When building out the directory structure/tree design, it is important to do this in a sensible way.
Something like:

NFS1
|
|_HOST1
| |_Container1
| | |_webapp
| | |_logs
| |_Container2
| | |_webapp
| | |_logs
|_HOST2
| |_Container3
| | |_webapp
| | |_logs
| |_Container4
| | |_webapp
| | |_logs
|_HOST3
| |_Container5
| | |_webapp
| | |_logs
| |_Container6
| | |_webapp
| | |_logs
|_HOST4
| |_Container7
| | |_webapp
| | |_logs
| |_Container8
| | |_webapp
| | |_logs
|_SharedContent



In the tree design of this example, we're assuming each container runs a web application with its own code base.  If they were all running the same application, you could, for example, omit the webapp directory for each container and instead mount content placed in the SharedContent directory off the root of the NFS mount into each container's web root.
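
To illustrate (assuming a hypothetical NFS mount point of /mnt/nfs1 on each docker host, and remembering from the Working with NFS Mounts article that :z/:Z are dropped on NFS paths), every web container could then mount the same shared web root:

docker run --name Cont1 --rm -v /mnt/nfs1/SharedContent:/usr/share/nginx/html -p 80:80 nginx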

Think this through when building out your environment.  Systems that perform orchestration are much better suited for what we're showing here, and you should definitely read about Kubernetes.

Real-world Docker Series: Working with NFS Mounts

After seeing how to bind mount storage, you're probably wondering, “How can I store data from a container on an NFS-mounted device?”

There are 2 ways to accomplish this properly with selinux:
1.) There is a selinux boolean: virt_sandbox_use_nfs
To check the status of this boolean, you can run:
getsebool virt_sandbox_use_nfs
If the status of the boolean is off, then you can turn it on by running:
setsebool -PV virt_sandbox_use_nfs on #Persistent and Verbose on Errors
Now run getsebool virt_sandbox_use_nfs again to verify it's now on.

When bind mounting storage on the NFS mount, you will now need to drop the :z and :Z options (see the example after this list).

This allows containers to access any of the docker host's mounted NFS volumes when directed to.
2.) Setting the appropriate selinux file context as a mount option. This is accomplished by adding the selinux context required for docker container data to the /etc/fstab NFS mount options.
vi /etc/fstab and find the appropriate NFS mount. Append context="system_u:object_r:svirt_sandbox_file_t:s0" to the entry's options and save the fstab (see the example entry after this list).
Unless you are running an NFS v4.2 server and an NFS v4.2 client, you will still need to drop the :z and :Z options from your docker run command. NFS v4.2 supports labeled NFS and can store the file contexts properly.
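
To make both methods concrete (nfsserver:/export and /mnt/nfs1 are hypothetical names): with method 1's boolean turned on, the nginx run command from the bind mounting article works against an NFS path once the :Z is dropped, and for method 2 the fstab entry would look something like this:

docker run --name Cont1 --rm -v /mnt/nfs1/host1/container1/webapp:/usr/share/nginx/html -p 80:80 nginx

nfsserver:/export  /mnt/nfs1  nfs4  defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"  0 0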

Method 2 is considered more secure, since you are allowing possible access to only a specified NFS volume rather than all of them as seen in method 1.



Real-world Docker Series: Bind Mounting Persistent Storage & Ports

Bind Mounting Storage:
Docker containers do not retain data once they've stopped running and been removed (ours use --rm). If you want to keep any data generated by a container, you must bind mount storage from the docker host into the container.

To accomplish this, all you need is a valid path on the docker host. When running the container, you will specify where to bind this path.  If you are already familiar with virtualization technology, think of this as assigning a new virtual disk to a virtual machine, and then mounting it on a particular path.  With bind mounting, we're simply mounting a path on the host to a path on the container.

When using selinux, you must specify the :z option for a path shared by many containers (ex: a web application's www root). If the data is specific to a single container (think log files in /var), use the :Z option. You can have multiple volume bind mounts.

Run:
docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z nginx
Command Breakdown:
--name Cont1 is the unique run-time name we've assigned to (in this example) the nginx container
--rm Remove the container after its work is done (i.e., the process it performs ends)
-v Volume: /root/cont1 (path on the docker host) is mounted at /usr/share/nginx/html (path in the container)

From this example you see that we bind mounted /root/cont1 from the host into the Cont1 container, which runs the nginx image.

The --volumes-from=<container id> switch (obtain the container id from 'docker ps') will bind mount all the mounted volumes from a running container into another container.  You can specify :ro (read-only) or :rw (read-write) to override the source container's mount settings.  The default is to inherit the settings of the volumes from the container you're bind mounting from.
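
For example, this would mount Cont1's volumes read-only into a second container (Cont2 is just an example name; the container name works in place of the id):

docker run --name Cont2 --rm --volumes-from=Cont1:ro nginx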

Bind Mounting Ports:
Docker container ports are not automatically exposed.  Containers run on a private docker network bridge, so depending on the configuration of the container's process, you will need to map a docker host port to the container's port where the process is running.  Docker will automatically create iptables rules for you based on your port binding command.

Port binding is completed with the -p switch.  See your container's documentation to configure a particular service port.  In this example we're using the nginx container from hub.docker.com, and the default port on the container is 80.

We'll add on to what we did in the storage bind mounting section.
Run: docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z -p 80:80 nginx
Command Breakdown:
Notice the added -p switch:
-p Port bind mounting. The first 80 is the docker host port (where the service will be accessible); the second 80 is the container's port.  We're mapping port 80 in the container to port 80 on the host to open the service up from outside the docker host.

Now check that you can access what the container is serving by visiting your docker host's IP on port 80.
ex: 192.168.122.5 in a web browser will work just fine.  If you're following this example, make sure you have some content (index.html) in /root/cont1 on your docker host.
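
For example, a quick end-to-end check from the docker host (the index.html content and the 192.168.122.5 address are just examples):

echo 'Hello from Cont1' > /root/cont1/index.html
curl http://192.168.122.5/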