No Internet access from your Docker container? Check this out!

After a recent update I started having issues with my containers that hosted apps which periodically access the outside world. At first I couldn’t understand where the problem was. I checked a lot of GitHub issues without success and finally I remembered that I had made an update to my host which also updated the kernel.

What I needed now was a way to boot with an older kernel selected by default. I started looking through Stack Overflow pages and blogs and found a way to set any kernel that I already have installed as the default one.

Here are the steps I took to fix my problem (my host is running Ubuntu Linux):

  • check the /boot/grub/grub.cfg file for all available options you have. The name of every installed kernel is there. Pick one that is older than the latest installed. Look for entries like: “menuentry ‘Ubuntu, with Linux 3.13.0-113-generic'”. You will use the “Ubuntu, with Linux 3.13.0-113-generic” part (there is a grep sketch after this list)
  • next open the /etc/default/grub file and look for the GRUB_DEFAULT entry
  • combine your kernel config name from point one above with “Advanced options for Ubuntu” or, for older Ubuntu versions (<14.04), “Previous Linux versions”. You should now have a string similar to this one: “Previous Linux versions>Ubuntu, with Linux 3.13.0-92-generic”. Set GRUB_DEFAULT to this string, quoted because the file is read by the shell: GRUB_DEFAULT="Previous Linux versions>Ubuntu, with Linux 3.13.0-92-generic"
  • you can use numeric values for GRUB_DEFAULT but it’s not recommended, as the number may point to a different kernel entry after the next update
  • save, run sudo update-grub and restart
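
If you prefer not to read the whole grub.cfg, something like this should list the menu entries to pick from (a minimal sketch; the exact entry names depend on your Ubuntu and GRUB versions):

grep -E "menuentry '|submenu '" /boot/grub/grub.cfg | cut -d"'" -f2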

Now you should have Internet access from inside the Docker containers again. This is just a temporary solution so don’t leave it like this, especially running with an older kernel. Better check your configuration, or even reinstall everything on your host, use a newer Linux distro, etc.


Docker 1.9.0 and the new network configuration

Docker 1.9 is here and it introduces a new way to handle the networking between containers.
Because Docker containers should live a short life before being replaced with their new versions, one should ask: do we really need the static IP that existed until now, assigned to Docker’s bridge? The usual IP you would see was 172.17.42.1.

After upgrading, this IP will be gone. Instead, other IPs will be created dynamically. Of course there will still be an IP like 172.17.0.1 assigned to the docker0 bridge. You can use it if you are brave enough, but better not.
However, if you need a quick fix before going to bed, you can use Docker’s --bip parameter to set the bridge IP back to 172.17.42.1.
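
For example, a minimal sketch assuming your daemon options live in /etc/default/docker (the location and format differ per distro and Docker version; systemd setups use a drop-in unit instead):

# /etc/default/docker
DOCKER_OPTS="--bip=172.17.42.1/16"

# then restart the daemon
sudo service docker restart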

Another way is to go back to version 1.7.1 using your OS’s package manager or a direct install/compile.
Later, when you decide to start using Docker’s networking the right way, you can start from here.

Put your GitLab on HTTPS

For this article I will use the following configuration:

GitLab Docker image by sameersbn – since the rise of Docker/LXC I prefer to encapsulate all of my production apps with it.

Docker is really easy to install. Just follow the instructions from here: https://docs.docker.com/installation or directly for Ubuntu users here: https://docs.docker.com/installation/ubuntulinux/.

Then follow the installation instructions of the Docker image here: https://github.com/sameersbn/docker-gitlab#installation.

Of course you can just install GitLab for your OS using their instructions here: https://about.gitlab.com/installation/.

My specific configuration is to put the Docker GitLab container behind a main nginx which becomes a reverse proxy for the container. Only the GitLab SSH console’s port is mapped directly to the host.

You may decide to map the GitLab ports directly to the host’s external IP and use it that way. The instructions on the Docker GitLab image are given with this in mind.
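
If you go the reverse proxy way like me, the port mapping idea looks roughly like this (only a sketch of the ports, not a complete command: the sameersbn/gitlab image also needs environment variables, data volumes and a database, see its README; the host ports are just examples):

docker run -d --name gitlab \
    -p 10022:22 \
    -p 127.0.0.1:10080:80 \
    sameersbn/gitlab

Here port 22 of the container (GitLab SSH) is mapped to the host’s port 10022 on all interfaces, while HTTP is published only on 127.0.0.1 so that only the local nginx can reach it.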

Now let’s say you have a running GitLab server, behind an nginx reverse proxy or not. It’s working with ssh:// and http:// access for git clone/pull/push. You are getting used to it and now you want to do it the professional way: using HTTPS!

The first thing to consider is where you will get an SSL certificate from.

There are many companies out there selling good certificates for web sites, mail servers, etc. Some of them are expensive, others are very cheap. It depends on whether you need the certificate for multiple subdomains, whether it will be recognized by all browsers and other software, and so on. The third type are free. They are limited, but they are recognized by the browsers and, in our case, by git/curl/the OS too. Since they are free we don’t need to create self-signed certificates and force everybody using our site to install certificates locally or to accept the warnings.

My personal choice is StartSSL. They give you a free certificate, it’s CA authorized and it works with a subdomain. I haven’t yet checked StartSSL’s paid alternatives or other providers of free certificates, but for our case we only need the certificate for one thing and one place. In my case: https://gitlab.iliyan-trifonov.com and git clone https://gitlab.iliyan-trifonov.com/..project-name.git.

Now go to the free certificate page on StartSSL and sign up. Create a backup of the certificate you will install in your browser, because without it you will have to pay to recover your account access. When you’re ready and you’re in the Control Panel of the site, authorize your email and domain. As a subdomain I picked gitlab, as in gitlab.iliyan-trifonov.com.

From StartSSL you will get a .key file and a .crt file. Back them up.

There is a little more work to make the certificate compatible with git. Even before that you can already use the key and crt files in your web server and serve your site over https. But this is not enough for us, so let’s continue.

First decrypt the key file using the following command and the password you provided while generating it on StartSSL:

openssl rsa -in ssl.key -out ssl-decrypted.key

This will prevent your web server from asking you for the password every time it is started. Imagine the downtime if the server were restarted automatically and sat waiting for a human to continue its work.

We need 2 more files freely available from StartSSL to make our certificate combined with their Root CA and Intermediate CA:

wget http://www.startssl.com/certs/ca-sha2.pem
wget http://www.startssl.com/certs/sub.class1.server.sha2.ca.pem

Combine the 3 certificate files you have:

cat ssl.crt sub.class1.server.sha2.ca.pem ca-sha2.pem > ssl-unified.crt

There is a possibility that the concatenated crt file will have some BEGIN/END lines on the same line. If your web server is not happy with that and says bad end of file in its logs, do this: open ssl-unified.crt with a text editor like nano and search for a line like this:

-----END CERTIFICATE----------BEGIN CERTIFICATE-----

or

-----BEGIN CERTIFICATE----------END CERTIFICATE-----

Make sure BEGIN and END are on separate lines and also that the B of BEGIN and the E of END are in the same column, like this:

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

Not like this or similar:

-----BEGIN CERTIFICATE----
------END CERTIFICATE-----

Dashes on the left/right should be equal.
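
To avoid the fused BEGIN/END lines in the first place, you can concatenate the files with an explicit newline after each one (a minimal sketch; blank lines between the PEM blocks are tolerated by OpenSSL/nginx):

for f in ssl.crt sub.class1.server.sha2.ca.pem ca-sha2.pem; do cat "$f"; echo; done > ssl-unified.crt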

Now the only 2 files you need to use in your web server are ssl-unified.crt and ssl-decrypted.key.
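
Before touching the web server configuration you can also check that the key really matches the certificate by comparing the two moduli (the hashes printed by these two commands should be identical):

openssl x509 -noout -modulus -in ssl-unified.crt | openssl md5
openssl rsa -noout -modulus -in ssl-decrypted.key | openssl md5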

It’s time to go to Nginx and make the configuration:

server {
    server_name gitlab.yourdomain.com;
    listen 80;
    
    return 301 https://$server_name$request_uri;

    access_log /var/log/gitlab.access.log;
    error_log /var/log/gitlab.error.log;
}

server {
    server_name gitlab.yourdomain.com;
    listen 443 ssl;

    ssl_certificate /.../certs/ssl-unified.crt;
    ssl_certificate_key /.../certs/ssl-decrypted.key;

    access_log /var/log/gitlab.ssl.access.log;
    error_log /var/log/gitlab.ssl.error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_read_timeout 300;
        proxy_pass http://INTERNAL_IP:INTERNAL_PORT; #the docker gitlab container's address
    }
}

Remember that this example Nginx configuration is for Nginx running as a reverse proxy in front of the container. You may have decided not to use a reverse proxy, in which case you should use the really easy configuration of the Dockerized GitLab from the link above, or change GitLab’s own server configuration in a similar way. Still, preparing the proper .crt and .key files and the server configuration above will give you a good start.

For better security read Optimizing HTTPS on Nginx, Configuring HTTPS servers and Enabling Perfect Forward Secrecy.

At the end you will have your GitLab pages loaded over HTTPS and everybody can git clone https://yourgitlabserver.com/..project.git like a pro!

How to temporarily fix the message “TERM environment variable not set” while working with a Docker container

Sometimes, at least these days with the current version of Docker, you may stumble upon the message “TERM environment variable not set”.

This happened to me while being inside a container with “docker exec -ti”.

clear, top, etc. were not working, complaining about the unset variable.

If you execute this while inside the container:

export TERM=xterm

or

export TERM=${TERM:-dumb}

the warnings will stop and you can have the real thing.
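
If you build your own images, you can also bake the variable in so that every shell you exec into already has it set. A minimal Dockerfile sketch (the base image here is just an example):

FROM ubuntu:14.04
ENV TERM xterm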

Set up your time on the host and inside the Docker containers

If you want to set the server’s timezone and keep it synchronized, and you are using Ubuntu like me, you can do the following:

On the host you can change the timezone and configure a cron job to synchronize the time:

dpkg-reconfigure tzdata

Pick your continent and capital and save.

Then setup a cron to update the time daily:

echo "ntpdate ntp.ubuntu.com" > /etc/cron.daily/ntpdate && chmod 755 /etc/cron.daily/ntpdate

That’s it!

Now if you want to change the timezone inside a Docker container:

I usually have sshd running inside the containers, which allows me to ssh into them and make changes, as well as reload the server process running inside to activate the changes:

docker inspect mysql_container

Now you see the IP of the container, for example 172.17.0.2, and you use it here:

ssh 172.17.0.2

Update: now you can use docker exec to enter a container’s environment. Forget about running an sshd server in Docker containers:

docker exec -ti mycontainername bash
export TERM=xterm
#do your stuff here

Inside the container:

dpkg-reconfigure tzdata

You are not allowed to use ntpdate inside the container, but the container shares the host’s clock, so you only need the timezone set.

Another (preferred) way to make changes inside the container and keep them is to run another container from the same image that the one you are going to change is running from.

docker run -ti --name mysql_container_tmp iliyan/mysql bash

Then make your changes, exit and commit the new container over the same image. Stop and remove the original container as well as the _tmp one and run a new container from the updated image. Now your changes are there!
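
As a rough sketch, reusing the container and image names from above (the final run command will of course need whatever ports, volumes and other options your original container had):

docker commit mysql_container_tmp iliyan/mysql
docker stop mysql_container
docker rm mysql_container mysql_container_tmp
docker run -d --name mysql_container iliyan/mysql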

Check this Dockerfile that automates the process (the part at the bottom that works with timezones) and also this bash script.

How to work with Docker containers: Manually

I usually start with Ubuntu 13.10 and Bash (my repos):

(I know, I’ll have to change 13.10 to 14.04 soon and should use an LTS instead, but I like to use the latest and it’s also so easy to rebuild the containers: as easy as changing one line at the top of the Dockerfile.)

docker run -i -t -name mynewcont ubuntu:13.10 bash

If you want to use the container immediately, use some additional params like -p and -v too:

docker run -i -t -name mynewcont -v /var/www:/var/www -p 127.0.0.1:8080:80 ubuntu:13.10 bash

There may be a short wait while the image is downloaded from the Docker index.

And now you are inside Ubuntu 13.10 no matter what is running on your host machine.

The first thing I like to do is to help df -h and similar tools work:

cat /proc/mounts > /etc/mtab

The next important thing is setting up the mirror system for apt, so that no matter where your container is started it will fetch the needed packages from the fastest/nearest mirror:

echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy main restricted universe multiverse" > /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-updates main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-backports main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-security main restricted universe multiverse" >> /etc/apt/sources.list

Finally we can update the sources and start installing stuff:

apt-get update && apt-get install -qq nginx-full php5-fpm mysql-server .... etc.

I really like installing and configuring LEMP stacks just for the practice of installing them and seeing the final result 🙂 This will probably turn into all kinds of Ruby app installations in the near future.

After you have installed and configured your server(s) you can just leave the container with them running inside by using the key combo Ctrl-p + Ctrl-q – just check that the container is still running after that with: docker ps.

You’ve just built a container like a new virtual machine box: started it clean, installed and ran some software inside, redirected a port, used a shared folder and you’re ready 🙂

What will happen if you restart Docker or the host Docker is running on?

The host will restart, Docker will usually be started automatically, and the previously running containers will be started again with the command used to start them in the first place. In our situation this will start our container with bash and that’s it. You’ll have to use some commands to run your server(s) inside the container again:

docker attach mynewcont
/etc/init.d/nginx start (just an example)
( + Ctrl-p + Ctrl-q to detach )

There’s a really nice way to deal with that autostart problem and it’s similar to how things work here: using Cmd. First, let’s create a new file in the root of the container’s file system (nginx is just an example):

docker attach mynewcont

Because this example uses nginx, and Docker needs the last command to be a server started and running in the foreground, we need to tell nginx not to daemonize:

NGINXCONFFILE=/etc/nginx/nginx.conf && echo "daemon off;" | cat - $NGINXCONFFILE > $NGINXCONFFILE.tmp && mv $NGINXCONFFILE.tmp $NGINXCONFFILE

Or just put: “daemon off;” at the top of the nginx.conf file with nano or your favorite editor.

Install nano and create a new shell script file:

apt-get install -qq nano
nano /nginx.sh

Enter the following inside nginx.sh:

#!/bin/bash
#here you may put other things you want to do before the server is started
/etc/init.d/nginx start

Ctrl-p + Ctrl-q to detach from the container.

Outside of the container, let’s commit it into a new image:

docker commit -run='{"Cmd":["sh","/nginx.sh"]}' mynewcont iliyan/mynewimg

You can also set the executed command in Cmd to be directly starting the server inside:

docker commit -run='{"Cmd":["/etc/init.d/nginx","start"]}' mynewcont iliyan/mynewimg

I prefer to always have a way to add additional commands, so I prefer the sh shellfile.sh way:

"Cmd":["sh","/nginx.sh"]

After the commit all changes inside the container are put into the image and will be visible when you run a new container from that image.

Now you have the image which you will use to run a new container from. But before that, stop/kill and rm the currently running container:

docker stop mynewcont
#or
docker kill mynewcont

And then:

docker rm mynewcont

I just want to create a new container with the same name “mynewcont” and I also don’t need the first container, which we’re not going to use anymore.

Now use the image to create and run the container:

docker run -name mynewcont -v /var/www:/var/www -p 127.0.0.1:8080:80 iliyan/mynewimg

You’ll now have nginx working on localhost:8080 and serving files from the local /var/www path.
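
A quick way to check that everything is up (curl should return the nginx response headers; the URL matches the port mapping above):

curl -I http://127.0.0.1:8080/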

Now let’s say you have to change something inside the container and keep the changes. Let’s do it this way:

Keep the currently running container intact.

Create a new container from the same image the running container was created from. Just don’t use the same volumes and ports; actually, don’t use any extra params:

docker run -name mynewcont_tmp iliyan/mynewimg bash

You can also use:

docker run -rm ....

to destroy the container after it’s stopped, so you will not need to “docker rm …” it later.

Now you may want to add new vhosts in /etc/nginx/sites-available or change the default configuration. Use echo or nano to create new files/change existing ones, for example as sketched below.
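
A minimal sketch of adding a vhost this way (the domain and root path are just examples):

cat > /etc/nginx/sites-available/mydomain.conf <<'EOF'
server {
    listen 80;
    server_name mydomain.example;
    root /var/www/mydomain;
}
EOF
ln -s /etc/nginx/sites-available/mydomain.conf /etc/nginx/sites-enabled/mydomain.conf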

After the changes are done, detach from the container (Ctrl-p + Ctrl-q) and commit the tmp container over our image with a new tag (tags help you start containers from different configurations made on top of that image):

docker commit -run='{"Cmd":["sh","/nginx.sh"]}' mynewcont_tmp iliyan/mynewimg:addedmydomain

The tag “addedmydomain” is just an example, you can write anything there to describe the latest changes. You can later commit with the same tag or different tags using the same image and will have them all listed with:

docker images

You can also see that the -run='' param is always used to tell the image what to do. I’ve tested some inheritance without success, but after Docker updates or my skills improve we may not need to repeat the param.

Now perform the stop/kill and rm procedure with the currently running container:

docker stop mynewcont mynewcont_tmp
docker rm mynewcont mynewcont_tmp

Run the container with the change we made just now:

docker run -name mynewcont -v /var/www:/var/www -p 127.0.0.1:8080:80 iliyan/mynewimg:addedmydomain

That’s it!

To conclude:

1. We made a container from ubuntu:13.10, started Bash inside, installed new packages, ran servers, etc.

2. We attached and detached to/from the container often to make some configuration changes until we were satisfied.

3. Then we committed the container into an image and used that image to run the container again, but with the latest changes inside. Before running the same image/container we made sure the currently running one was stopped and removed (remove it only if you don’t need to go back to the container for some reason and you want to use the same name to run a new one in its place).