Strange hang-ups with Docker

Today I had to change some buffer settings for MySQL in my.cnf.

So I put in the new values and tried `/etc/init.d/mysql reload`.

The reload completed successfully, but when I checked the running MySQL server, the settings hadn't changed.
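(In hindsight, the init script's reload typically just tells mysqld to reload its grant tables; it doesn't re-read my.cnf, so this behaviour makes sense.) A quick way to check whether a variable actually took effect (the variable name here is just an example, not necessarily the one I changed):

mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"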

So I decided to run `/etc/init.d/mysql restart`.

Big mistake: the mysqld process was what was keeping the container alive. I suspect the restart stopped mysqld and the container stopped with it; the only other process running was the sshd I used to connect to the container.
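A handy command for seeing what is actually keeping a container alive (my addition, not something I ran at the time):

docker top mysql_container

A container generally stops when the process it was started with exits, no matter what else is running inside.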

The DB is in good condition, but what happened after the mysql container closed is bothering me. That's why they say don't use it in production! 😉

I executed `docker start mysql_container`.

It started, atop showed huge HDD usage for a second, and the port was open, but I couldn't connect with telnet or the mysql client. I made a `docker commit -run='' mysql_container iliyan/mysql` and then killed/removed the current container. Checked the mysql port: it was closed, nothing was hanging there. Then I ran iliyan/mysql with bash and started the mysql server from inside; it started successfully and connecting to it locally with the mysql client worked, so I realized it was something controlled by Docker that hung.

So I decided to restart the docker process, which restarts all of the containers (`service docker restart`). It restarted, but started only 3 of the 5 containers, and I just had to `docker start cont4 cont5` to finish the job.

Now everything is working well again, but this has to be monitored. Who knows if this is going to happen again, or even every time a long-running container is restarted.

Update: Some time passed and things were mostly good. However, since yesterday I was wondering why the hell the page visits to one of my sites had dropped, and decided to check the firewall:

iptables -L

What I saw at first glance is that I have too many repeated container IPs. I also remember that the last time I restarted the mysql percona container (I keep a big DB inside, no volumes, so this will change soon), the percona and git/gitweb containers were started but not reachable. I couldn't start them again; restart worked, but no ports were set, no IP was shown in `docker ps`, etc. Then I did `service docker restart` again and the containers were working. But I'd better check the firewall and the system as a whole when changing anything in the containers, as well as `docker ps -a`, `docker top cont_name` and `docker logs cont_name`, and reboot the whole host server if it can't be fixed any other way. I still don't know a lot of things about system administration, but I think this problem comes from 1-2 containers that have an exotic way of installation/configuration inside. I will definitely check them one by one, and probably by the time Docker 1.0 is here I will have fixed my problem as well! 🙂
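For reference, Docker's port redirections live in the nat table, so when ports act up this view is often more telling (my addition):

iptables -t nat -L -n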

Update 2:

After I changed the process holding the container, mostly to a `tail -f filename.log` command, the logs produced for `docker logs containername` are coming more often, and after a few days I still have good results from `docker top containername` on the troubled containers, which are more than fine now. Probably if there is no log output for a long time, the container will show you nothing when executing `docker top` on it. We'll see.
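For the curious, the pattern I mean looks roughly like this, using the old -run syntax (the service and log path are just examples):

docker commit -run='{"Cmd":["sh","-c","/etc/init.d/mysql start && tail -f /var/log/mysql/error.log"]}' cont_name iliyan/imagename

The tail process stays in the foreground, holds the container and keeps feeding output to `docker logs`.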

Let’s be patient and continue testing this great tool!

Linux Foundation to Build Massive Open Online Course Program with edX, Increase Access to Linux Training for All | The Linux Foundation


I just registered at edX and I am trying the demo course while waiting for the Linux one. Let’s see what it is all about 🙂

How to increase the timeouts when using Nginx, PHP-FPM and phpMyAdmin

Today I needed to change some tables’ structures using phpMyAdmin.

To my surprise, not long after I clicked Go to execute the queries, many errors started coming from nginx and PHP.

So I started looking around for any timeout settings I could use in nginx and php5-fpm, and the final working result is this (I chose a timeout of 600 seconds):

/etc/php5/fpm/php.ini

max_execution_time = 600

/etc/php5/fpm/pool.d/www.conf

request_terminate_timeout = 600

In the nginx conf of my site, inside the *.php location settings:

fastcgi_read_timeout 600;
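For context, here is roughly where that directive sits; a minimal sketch assuming a typical php5-fpm socket setup (the paths are examples, adjust to your config):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_index index.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 600;
}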

Reload/restart your PHP and nginx servers for the changes to take effect and you are ready. Test it by executing some queries that you know will take more time than usual.
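One easy test (my suggestion): from the phpMyAdmin SQL tab, run a query that sleeps longer than the old timeout but within the new one, e.g.:

SELECT SLEEP(120);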

Here's a good source with more settings and information.

If you're going to change the nginx and PHP settings of servers running inside Docker containers, you'd better use this syntax:

servername reload

instead of:

servername restart

Otherwise your container may be automatically stopped after the restart executes, as the restart command is just short for stop->start.
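For example, inside an nginx container where nginx is the process holding the container:

/etc/init.d/nginx reload   # safe: the running master process just re-reads its config
/etc/init.d/nginx restart  # risky: the stop step can kill the process holding the container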

Cloudflare:

If you're using phpMyAdmin behind Cloudflare, check their thoughts about that: https://support.cloudflare.com/hc/en-us/articles/200171926-Error-524

How to work with Docker containers: Manually

I usually start with Ubuntu 13.10 and Bash (my repos):

(I know, I'll have to change 13.10 to 14.04 soon and should use an LTS instead, but I like to use the latest, and it's also so easy to rebuild the containers: as easy as changing one line at the top of the Dockerfile.)

docker run -i -t -name mynewcont ubuntu:13.10 bash

If you want to use the container immediately, add some additional params like -p and -v too:

docker run -i -t -name mynewcont -v /var/www:/var/www -p 127.0.0.1:8080:80 ubuntu:13.10 bash

There may be a short wait while the image is downloaded from Docker's index.

And now you are inside Ubuntu 13.10 no matter what is running on your host machine.

The first thing I like to do is help `df -h` and similar tools work:

cat /proc/mounts > /etc/mtab

The next important thing is setting up the mirror system for apt, so that no matter where your container is started, it will fetch the needed packages from the fastest/nearest mirror:

echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy main restricted universe multiverse" > /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-updates main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-backports main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-security main restricted universe multiverse" >> /etc/apt/sources.list

Finally we can update the sources and start installing stuff:

apt-get update && apt-get install -qq nginx-full php5-fpm mysql-server .... etc.

I really like installing and configuring LEMP stacks just for the practice of installing them and seeing the final result 🙂 This will probably turn into installing all kinds of Ruby apps in the near future.

After you have installed and configured your server(s), you can leave the container with them running inside by using the key combo Ctrl-p + Ctrl-q. Just check that the container is still running after that with `docker ps`.

You've just built a container like a new virtual machine box: started it clean, installed and ran some software inside, redirected a port, used a shared folder, and you're ready 🙂

What will happen if you restart Docker or the host Docker is running on?

The host will restart, Docker will usually be started automatically, and the previously running containers will be started again with the command used to start them in the first place. In our situation this will start our container with bash and that's it. You'll have to run some commands to get your server(s) inside the container going again:

docker attach mynewcont
/etc/init.d/nginx start (just an example)
( + Ctrl-p + Ctrl-q to detach )

There's a really nice way to deal with that autostart problem, and it's how things work here: using Cmd. First, let's create a new file in the root of the container's file system (nginx is just an example):

docker attach mynewcont

Because this example uses nginx, and Docker needs the last command to be a server started and running in the foreground, we need to tell nginx not to daemonize:

NGINXCONFFILE=/etc/nginx/nginx.conf && echo "daemon off;" | cat - $NGINXCONFFILE > $NGINXCONFFILE.tmp && mv $NGINXCONFFILE.tmp $NGINXCONFFILE

Or just put "daemon off;" at the top of the nginx.conf file with nano or your favorite editor.

Install nano and create a new shell script file:

apt-get install -qq nano
nano /nginx.sh

Enter the following inside nginx.sh:

#!/bin/bash
# here you may put other things you want to do before the server is started
/etc/init.d/nginx start

Ctrl-p + Ctrl-q to detach from the container.

Outside of the container, let’s commit it into a new image:

docker commit -run='{"Cmd":["sh","/nginx.sh"]}' mynewcont iliyan/mynewimg

You can also set the command in Cmd to start the server inside directly:

docker commit -run='{"Cmd":["/etc/init.d/nginx"]}' mynewcont iliyan/mynewimg

I prefer to always have a place to put additional commands, so I prefer the `sh shellfile.sh` way:

"Cmd":["sh","/nginx.sh"]

After the commit all changes inside the container are put into the image and will be visible when you run a new container from that image.

Now you have the image you will use to run a new container. But before that, stop/kill and rm the currently running container:

docker stop mynewcont
#or
docker kill mynewcont

And then:

docker rm mynewcont

I just want to create a new container with the same name "mynewcont", and I also don't want the first container, which we're not going to use anymore.

Now use the image to create and run the container:

docker run -name mynewcont -v /var/www:/var/www -p 127.0.0.1:8080:80 iliyan/mynewimg

You’ll now have nginx working on localhost:8080 and serving files from the local /var/www path.
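A quick smoke test from the host (my addition):

curl -I http://127.0.0.1:8080/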

Now suppose you have to change something inside the container and keep the changes. Let's do it this way:

Keep the currently running container intact.

Create a new container from the same image the running container was created from. Just don't use the same volumes and ports; actually, don't use any extra params:

docker run -i -t -name mynewcont_tmp iliyan/mynewimg bash

You can also use:

docker run -rm ....

to destroy the container after it's stopped, so you won't need to `docker rm ...` it later.

Now you may want to add new vhosts in /etc/nginx/sites-available or change the default configuration. Use echo or nano to create new files/change existing ones.
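For example, a minimal vhost could be dropped in like this (mydomain.com and the paths are placeholders):

echo 'server {
    listen 80;
    server_name mydomain.com;
    root /var/www/mydomain.com;
}' > /etc/nginx/sites-available/mydomain.com \
&& ln -s /etc/nginx/sites-available/mydomain.com /etc/nginx/sites-enabled/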

After the changes are done, detach from the container (Ctrl-p + Ctrl-q) and commit the tmp container over our image with a new tag (tags help you start a container from different configurations made on that image):

docker commit -run='{"Cmd":["sh","/nginx.sh"]}' mynewcont_tmp iliyan/mynewimg:addedmydomain

The tag "addedmydomain" is just an example; you can write anything there to describe the latest changes. You can later commit with the same tag or different tags using the same image, and have them all listed with:

docker images

You can also see that the `-run` param is always used to tell the image what to do. I've tested some inheritance without success, but after Docker updates, or as my skills improve, we may not need to repeat the param.

Now do the stop/kill and rm procedure for the currently running containers:

docker stop mynewcont mynewcont_tmp
docker rm mynewcont mynewcont_tmp

Run the container with the change we made just now:

docker run -name mynewcont -v /var/www:/var/www -p 127.0.0.1:8080:80 iliyan/mynewimg:addedmydomain

That’s it!

To conclude:

1. We made a container from ubuntu:13.10, started Bash inside, installed new packages, ran servers, etc.

2. We attached to and detached from the container often, making configuration changes until we were satisfied.

3. Then we committed the container into an image and used that image to run the container again, with the latest changes inside. Before running the same image/container, we made sure the currently running one was stopped and removed (remove it only if you don't need to go back to that container for some reason and you want to use the same name to run a new one in its place).

Docker, Redmine, Attack in Isolation

I've found a very good article about what Docker is, how to configure it and create a container, how to use it to install Redmine and change its configuration, and then how to attack the application while it runs isolated inside a container on your host.

The article can be read here: http://resources.infosecinstitute.com/securing-cloud-based-applications-docker/

That reminds me I have to cut my Redmine app from the host and put it inside a container. I will be using thin with one worker and will expect a webserver like nginx to use it as an upstream. But more about this in a future article.
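As a teaser, the nginx side would look roughly like this; just a sketch, the port and names are placeholders until that article is written:

upstream redmine {
    server 127.0.0.1:3000;   # the thin worker port published by the container
}
server {
    listen 80;
    server_name redmine.example.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://redmine;
    }
}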