Put your GitLab on HTTPS

For this article I will use the following configuration:

GitLab Docker image by sameersbn – Since the rise of Docker I prefer to encapsulate all of my production apps in containers.

Docker is really easy to install. Just follow the instructions from here: https://docs.docker.com/installation or directly for Ubuntu users here: https://docs.docker.com/installation/ubuntulinux/.

Then follow the installation instructions of the Docker image here: https://github.com/sameersbn/docker-gitlab#installation.

Of course you can also just install GitLab directly for your OS using their instructions here: https://about.gitlab.com/installation/.

My specific configuration is to put the Docker GitLab container behind a main nginx which becomes a reverse proxy for the container. Only the GitLab SSH console’s port is mapped directly to the host.

You may decide to map the GitLab ports directly to the host’s external IP and use it that way. The instructions on the Docker GitLab image are given with this in mind.

Now let’s say you have a running GitLab server, behind an nginx reverse proxy or not. It’s working with ssh:// and http:// access for git clone/pull/push. You are getting used to it and now you want to do it the professional way: using HTTPS!

The first thing to consider is where you will get an SSL certificate from.

There are many companies out there selling good certificates for web sites, mail servers, etc. Some of them are expensive, others are very cheap; the price depends on whether you need the certificate for multiple subdomains, whether it will be recognized by all browsers and other software, and so on. The third type is free. Free certificates are limited, but they are recognized by browsers and, in our case, by git/curl/the OS too. Since they are free, we don’t need to create self-signed certificates and force everybody using our site to install certificates locally or accept the warnings.

My personal choice is StartSSL. They give you a free certificate, it’s CA-authorized and it works with a subdomain. I haven’t yet checked StartSSL’s paid alternatives or other providers of free certificates, but for our case we only need the certificate for one thing and one place. In my case: https://gitlab.iliyan-trifonov.com and git clone https://gitlab.iliyan-trifonov.com/..project-name.git.

Now go to the free certificate page on StartSSL and sign up. Create a backup of the certificate you will install in your browser, because without it you will have to pay to recover your account access. When you’re ready and you’re in the site’s Control Panel, validate your email and domain. For the subdomain I picked gitlab, as in gitlab.iliyan-trifonov.com.

From StartSSL you will get a .key file and a .crt file. Back them up.

There is a little more work to make the certificate compatible with git. Even before that, you could already use the key and crt files in your web server and serve your site over HTTPS, but that is not enough for us, so let’s continue.

First decrypt the key file using the following command and the password you provided while generating it on StartSSL:

openssl rsa -in ssl.key -out ssl-decrypted.key

This will prevent your web server from asking you for the password every time it is started. Imagine the downtime if the server was restarted automatically and sat waiting for a human to continue its work.
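As a quick sanity check, the decrypted key and the certificate must report the same RSA modulus. This is a sketch assuming openssl is on your PATH; it generates a throwaway self-signed pair (demo.key/demo.crt are made-up names) just for the demonstration — in practice, point the last two commands at your real ssl-decrypted.key and ssl.crt:

```shell
# Generate a throwaway self-signed pair for the demonstration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -keyout demo.key -out demo.crt 2>/dev/null

# The key and the certificate must print the same modulus hash;
# if they differ, the key does not belong to that certificate:
openssl rsa  -noout -modulus -in demo.key | openssl md5
openssl x509 -noout -modulus -in demo.crt | openssl md5
```

If the two hashes differ, you have mixed up key and certificate files somewhere along the way.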

We need 2 more files, freely available from StartSSL, to combine our certificate with their Root CA and Intermediate CA:

wget http://www.startssl.com/certs/ca-sha2.pem
wget http://www.startssl.com/certs/sub.class1.server.sha2.ca.pem

Combine the 3 certificate files you have:

cat ssl.crt sub.class1.server.sha2.ca.pem ca-sha2.pem > ssl-unified.crt

There is a possibility that the concatenated crt file will have some BEGIN/END lines fused on the same line. If your web server is not happy with that and says bad end of file in its logs, do this: open ssl-unified.crt with a text editor like nano and search for a line like this:

-----END CERTIFICATE----------BEGIN CERTIFICATE-----

Make sure BEGIN and END are on separate lines and that the B of BEGIN and the E of END are in the same column, like this:

-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----

Not like this or similar:

-----END CERTIFICATE----- -----BEGIN CERTIFICATE-----

The dashes on the left/right should be equal: five on each side of each marker.
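If you find such a fused line, you can also fix it non-interactively. Here is a sketch using GNU sed (broken.crt is a demo file with dummy contents, not a real certificate):

```shell
# Simulate the problem: END and BEGIN markers fused on one line
printf 'AAAA\n-----END CERTIFICATE----------BEGIN CERTIFICATE-----\nBBBB\n' > broken.crt

# Split the fused markers onto separate lines (GNU sed supports \n in replacements)
sed -i 's/-----END CERTIFICATE----------BEGIN CERTIFICATE-----/-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----/' broken.crt

cat broken.crt
```

After the sed call, the file has the END and BEGIN markers on their own lines, which is what the web server expects.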

Now the only 2 files you need to use in your web server are ssl-unified.crt and ssl-decrypted.key.

It’s time to go to Nginx and make the configuration:

server {
    server_name gitlab.yourdomain.com;
    listen 80;

    access_log /var/log/gitlab.access.log;
    error_log /var/log/gitlab.error.log;

    return 301 https://$server_name$request_uri;
}

server {
    server_name gitlab.yourdomain.com;
    listen 443 ssl;

    ssl_certificate /.../certs/ssl-unified.crt;
    ssl_certificate_key /.../certs/ssl-decrypted.key;

    access_log /var/log/gitlab.ssl.access.log;
    error_log /var/log/gitlab.ssl.error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_read_timeout 300;
        proxy_pass http://INTERNAL_IP:INTERNAL_PORT; # the docker gitlab container's address
    }
}

Remember that this example Nginx configuration is for Nginx running as a reverse proxy in front of the container. You may have decided not to use a reverse proxy, in which case use the really easy configuration of the Dockerized GitLab from the link above, or change GitLab’s own server configuration in a similar way. Either way, preparing the good .crt and .key files and the server configuration will give you a good start.

For better security read Optimizing HTTPS on Nginx, Configuring HTTPS servers and Enabling Perfect Forward Secrecy.

At the end you will have your GitLab pages loaded over HTTPS and everybody can git clone https://yourgitlabserver.com/..project.git like a pro!

Nginx and PHP-FPM status pages from subdomains

Here are two nginx configurations I am using to get the basic status of the two servers by loading different subdomains:

PHP-FPM status page from subdomain.yoursite.com:

server {
  listen 80;
  server_name subdomain.yoursite.com;

  location / {
    rewrite .* /fpm-status;
  }

  location /fpm-status {
    auth_basic "Restricted";
    auth_basic_user_file /path/to/.htpasswd;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock; # adjust to your PHP-FPM socket/port
  }
}

Nginx status page from subdomain2.yoursite.com:

server {
  listen 80;
  server_name subdomain2.yoursite.com;

  location / {
    auth_basic "Restricted";
    auth_basic_user_file /path/to/.htpasswd;
    stub_status on;
  }
}

How to increase the timeouts when using Nginx, PHP-FPM and phpMyAdmin

Today I needed to change some tables’ structures using phpMyAdmin.

To my surprise, not long after I clicked Go to execute the queries, many errors started coming from nginx and php.

So I started looking around for any timeout settings I could use in nginx and php5-fpm, and the final working result is this (I chose a timeout of 600 seconds):

In php.ini:

max_execution_time = 600

In the PHP-FPM pool configuration:

request_terminate_timeout = 600

In the nginx conf of my site, inside the *.php location settings:

fastcgi_read_timeout 600;

Reload/restart your php and nginx servers for the new changes to take effect and you are ready. Test it by executing some queries that you know will take more time than usual.
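As a rough illustration of what request_terminate_timeout does, here is a sketch using coreutils' timeout (not PHP-FPM itself): a process that runs past its limit gets killed:

```shell
# timeout kills the command if it runs past the limit, much like PHP-FPM
# terminating a request that exceeds request_terminate_timeout
timeout 1 sleep 3 || echo "exit code: $?"   # prints: exit code: 124
```

Exit code 124 is timeout's way of saying the command was killed for running too long; in PHP-FPM's case the worker is terminated and nginx reports a gateway error instead.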

Here’s a good source with more settings and information.

If you’re going to change the nginx and php settings of servers running inside Docker containers, you had better use this syntax:

servername reload

instead of:

servername restart

Otherwise your container may be stopped automatically after the restart executes, as the restart command is just short for: stop->start.


If you’re using phpMyAdmin behind Cloudflare, check their thoughts about that: https://support.cloudflare.com/hc/en-us/articles/200171926-Error-524

How to work with Docker containers: Manually

I usually start with Ubuntu 13.10 and Bash (my repos):

(I know, I’ll have to change 13.10 to 14.04 soon and should use an LTS instead, but I like to use the latest and it’s also so easy to rebuild the containers: as easy as changing one line at the top of the Dockerfile.)

docker run -i -t -name mynewcont ubuntu:13.10 bash

If you want to use the container immediately, use some additional params like -p and -v too:

docker run -i -t -name mynewcont -v /var/www:/var/www -p 8080:80 ubuntu:13.10 bash

You may have to wait a short time for the image to be downloaded from Docker’s index.

And now you are inside Ubuntu 13.10 no matter what is running on your host machine.

The first thing I like to do is to help df -h and others work:

cat /proc/mounts > /etc/mtab

The next important thing is setting up the mirror system for apt, so that no matter where your container is started it will fetch the needed packages from the fastest/nearest mirror:

echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy main restricted universe multiverse" > /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-updates main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-backports main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-security main restricted universe multiverse" >> /etc/apt/sources.list
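The same four lines can be written in one go with a heredoc. Shown here writing to a temporary path so you can try it outside a container; inside the container the target would of course be /etc/apt/sources.list:

```shell
# Write all four mirror lines at once with a quoted heredoc
cat > /tmp/sources.list <<'EOF'
deb mirror://mirrors.ubuntu.com/mirrors.txt saucy main restricted universe multiverse
deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-updates main restricted universe multiverse
deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-backports main restricted universe multiverse
deb mirror://mirrors.ubuntu.com/mirrors.txt saucy-security main restricted universe multiverse
EOF

wc -l < /tmp/sources.list   # 4
```

The quoted 'EOF' delimiter keeps the shell from expanding anything inside the heredoc, so the lines land in the file verbatim.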

Finally we can update the sources and start installing stuff:

apt-get update && apt-get install -qq nginx-full php5-fpm mysql-server .... etc.

I really like installing and configuring LEMP stacks just for the practice of installing them and seeing the final result 🙂 This will probably change to all kinds of Ruby app installations in the near future.

After you have installed and configured your server(s) you can just leave the container with them running inside by using the key combo Ctrl-p + Ctrl-q – just check that the container is still running after that with: docker ps.

You’ve just built a container like a new virtual machine box: started it clean, installed and ran some software inside, redirected a port, used a shared folder and you’re ready 🙂

What will happen if you restart Docker or the host Docker is running on?

The host will restart, Docker will usually be started automatically, and the previously running containers will be started again with the command used to start them in the first place. In our situation this will start our container with bash, and that’s it. You’ll have to run some commands to start your server(s) inside the container again:

docker attach mynewcont
/etc/init.d/nginx start (just an example)
( + Ctrl-p + Ctrl-q to detach )

There’s a really nice way to deal with that autostart problem: using Cmd. First, let’s create a new file in the root of the container’s file system (nginx is just an example):

docker attach mynewcont

Because this example uses nginx, and Docker needs the last command to be a server started and running in the foreground, we need to tell nginx not to daemonize:

NGINXCONFFILE=/etc/nginx/nginx.conf && echo "daemon off;" | cat - $NGINXCONFFILE > $NGINXCONFFILE.tmp && mv $NGINXCONFFILE.tmp $NGINXCONFFILE

Or just put: “daemon off;” at the top of the nginx.conf file with nano or your favorite editor.
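The same prepend one-liner, demonstrated on a dummy stand-in file so you can verify it puts the line on top (only the path differs from the real command above):

```shell
# Create a stand-in for nginx.conf with one existing directive
printf 'worker_processes 1;\n' > /tmp/nginx.conf

# Prepend "daemon off;" by piping it into cat together with the file
CONF=/tmp/nginx.conf
echo "daemon off;" | cat - "$CONF" > "$CONF.tmp" && mv "$CONF.tmp" "$CONF"

head -1 /tmp/nginx.conf   # daemon off;
```

cat reads stdin first (the -), then the file, so the new directive always ends up on the first line.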

Install nano and create a new shell script file:

apt-get install -qq nano
nano /nginx.sh

Enter the following inside nginx.sh (the script has to end by starting nginx, which stays in the foreground thanks to daemon off):

#!/bin/sh
# here you may put other things you want to do before the server is started
/usr/sbin/nginx

Ctrl-p + Ctrl-q to detach from the container.

Outside of the container, let’s commit it into a new image:

docker commit -run='{"Cmd":["sh","/nginx.sh"]}' mynewcont iliyan/mynewimg

You can also set the executed command in Cmd to be directly starting the server inside:

docker commit -run='{"Cmd":["/etc/init.d/nginx"]}' mynewcont iliyan/mynewimg

I prefer to always have a way to add additional commands, so I prefer the sh shellfile.sh way.


After the commit all changes inside the container are put into the image and will be visible when you run a new container from that image.

Now you have the image which you will use to run a new container. But before that, stop/kill and rm the currently running container:

docker stop mynewcont
docker kill mynewcont

And then:

docker rm mynewcont

I just want to create a new container with the same name “mynewcont”, and I don’t need the first container anymore.

Now use the image to create and run the container:

docker run -d -name mynewcont -v /var/www:/var/www -p 8080:80 iliyan/mynewimg

You’ll now have nginx working on localhost:8080 and serving files from the local /var/www path.

Now suppose you need to change something inside the container and keep the changes. Let’s do it this way:

Keep the currently running container intact.

Create a new container from the same image the running container was created from. Just don’t use the same volumes and ports; actually, don’t use any extra params:

docker run -i -t -name mynewcont_tmp iliyan/mynewimg bash

You can also use:

docker run -rm ....

so that the container is destroyed after it’s stopped and you will not need to “docker rm …” it later.

Now you may want to add new vhosts in /etc/nginx/sites-available or change the default configuration. Use echo or nano to create new files/change existing ones.

After the changes are done, detach from the container (Ctrl-p + Ctrl-q) and commit the tmp container over our image with a new tag (tags help you start containers from different configurations made on the same image):

docker commit -run='{"Cmd":["sh","/nginx.sh"]}' mynewcont_tmp iliyan/mynewimg:addedmydomain

The tag “addedmydomain” is just an example; you can write anything there to describe the latest changes. You can later commit with the same or different tags on the same image and have them all listed with:

docker images

You can also see the -run='…' param is always used to tell the image what to do. I’ve tested some inheritance without success, but after Docker updates or as my skills improve we may not need to repeat the param.

Now do the stop and rm procedure on the currently running containers:

docker stop mynewcont mynewcont_tmp
docker rm mynewcont mynewcont_tmp

Run the container with the change we made just now:

docker run -name mynewcont -v /var/www:/var/www -p iliyan/mynewimg:addedmydomain

That’s it!

To conclude:

1. We’ve made a container from Ubuntu:13.10 and started Bash inside, installed new packages, ran servers, etc.

2. We attached and detached to/from the container often to make some configuration changes until we were satisfied.

3. Then we committed the container into an image and used that image to run the container again, but with the latest changes inside. Before running the same image/container we made sure the currently running one was stopped and removed (remove only if you don’t need to go back to the container for some reason and you want to use the same name to run a new one in its place).