No Internet access from your Docker container? Check this out!

After a recent update I started having issues with my containers hosting apps that periodically accessed the outside world. At first I couldn’t understand where the problem was. I went through a lot of GitHub issues without success, and finally I remembered that I had recently updated my host, which also updated the kernel.

What I needed now was a way to boot with an older kernel selected by default. After digging through Stack Overflow pages and blogs, I found a way to set any already installed kernel as the default one.

Here are the steps I took to fix my problem (my host is running Ubuntu Linux):

  • check the /boot/grub/grub.cfg file for all the options available to you. The name of every installed kernel is there; pick one older than the latest. Look for entries like “menuentry ‘Ubuntu, with Linux 3.13.0-113-generic’”. You will use the “Ubuntu, with Linux 3.13.0-113-generic” part
  • next open the /etc/default/grub file and look for the GRUB_DEFAULT entry
  • combine the kernel entry name from the first step with “Advanced options for Ubuntu” (or “Previous Linux versions” on older Ubuntu releases, <14.04). You should end up with a string like “Previous Linux versions>Ubuntu, with Linux 3.13.0-92-generic”. Set GRUB_DEFAULT to it: GRUB_DEFAULT="Previous Linux versions>Ubuntu, with Linux 3.13.0-92-generic"
  • you can use numeric values for GRUB_DEFAULT, but it’s not recommended: after the next kernel update the number may point to a different entry
  • save, run sudo update-grub and restart (see the consolidated sketch below)
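
A minimal sketch of the whole change, using the example menu entries from above (substitute a kernel version you actually have installed):

# /etc/default/grub
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 3.13.0-92-generic"

sudo update-grub
sudo reboot
uname -r   # confirm which kernel is running after the reboot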

Now you should have internet access from inside the Docker containers. This is just a temporary solution, so don’t leave it like this, especially not running an older kernel for long. Better to check your configuration, or even reinstall everything on your host, move to a newer Linux distro, etc.


Docker 1.9.0 and the new network configuration

Docker 1.9 is here, and it introduces a new way to handle networking between containers.
Because Docker containers are meant to live a short life before being replaced by newer versions, one should ask: do we really need the static IP that used to be assigned to Docker’s bridge? The usual IP you would see was 172.17.42.1.

After upgrading, this IP will be gone and other IPs will be assigned dynamically. Of course, there will still be an IP like 172.17.0.1 assigned to the docker0 bridge. You can use it if you are brave enough, but better not.
However, if you need a quick fix before going to bed, you can use the Docker daemon’s --bip parameter to set the bridge IP back to 172.17.42.1.
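
A minimal sketch, assuming an Ubuntu host where the Docker daemon reads its options from /etc/default/docker (adjust for your init system):

# /etc/default/docker
DOCKER_OPTS="--bip=172.17.42.1/16"

sudo service docker restart
ip addr show docker0   # the bridge should be back on 172.17.42.1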

Another way is to go back to version 1.7.1 using your OS’s package manager or a direct install/compile.
Later, when you decide to start using Docker’s networking the right way, you can start from here.

Node.js v4.0.0 is here!

So we lived to see it. Node.js version 4 is here, which means we have the latest V8, ES6 support and the latest security patches for our favorite tool! And even that doesn’t describe how many good things just happened. From the creation of Node.js, through the io.js fork, to the merger and this latest release, the community now has a say in how Node.js will be shaped from here on.

I am personally very excited about it, and I am about to test and update one of my Docker apps that installs and uses Node.js through NVM.
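
For reference, a minimal sketch of what that upgrade looks like with NVM:

nvm install 4.0.0         # install the new major version
nvm alias default 4.0.0   # make it the default for new shells
node --version            # should print v4.0.0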

I am also expecting a lot of work on this major version, with patches and new features coming sooner rather than later, which makes these very interesting days to be using Node.js!

Enjoy!

How can you tell if a programmer knows Docker in 5 questions? | The Snap.hr Blog

Source: How can you tell if a programmer knows Docker in 5 questions? | The Snap.hr Blog

I struggled only with the last question, about the difference between AUFS and DeviceMapper, but these kinds of questions always help me find out what I still need to learn. And I love learning new things!
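
As an aside, if you want to check which storage driver your own Docker host uses, docker info reports it:

docker info | grep -i 'storage driver'   # e.g. "Storage Driver: aufs" or "devicemapper"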

Do 404s hurt my site? | Official Google Webmaster Central Blog

I’ve been through this: testing and redirecting wrong URLs and missing pages. I have a site with a couple of million pages, where soft 404s and not-found pages can grow to hundreds of thousands after a big automatic update. As the single developer of my own sites, I managed to handle all of these pages by refactoring and testing the code to make it more predictable, so that when I change multiple pages I know what to expect. On top of that comes the very good feedback from Google’s Webmaster Tools, which helps me discover bad things in time and even improve an already well-positioned website.
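
A quick way to spot-check status codes is with curl (the URL below is just a placeholder):

curl -s -o /dev/null -w '%{http_code}\n' https://example.com/some-missing-page
# a soft 404 is when this prints 200 for a page whose content says "not found"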

Of course, among the big things to watch out for are 404 Not Found and 500 server errors. So let’s read about the former here:

Source: Official Google Webmaster Central Blog: Do 404s hurt my site?

Put your GitLab on HTTPS

For this article I will use the following configuration:

GitLab Docker image by sameersbn – since the rise of Docker (originally built on LXC), I prefer to encapsulate all of my production apps with it.

Docker is really easy to install. Just follow the instructions from here: https://docs.docker.com/installation or directly for Ubuntu users here: https://docs.docker.com/installation/ubuntulinux/.
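
If you prefer the quick route, there is also the official convenience script (always review a script before piping it to a shell):

curl -sSL https://get.docker.com/ | sh
docker --version   # verify the installation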

Then follow the installation instructions of the Docker image here: https://github.com/sameersbn/docker-gitlab#installation.

Of course you can just install GitLab for your OS using their instructions here: https://about.gitlab.com/installation/.

My specific configuration puts the Dockerized GitLab container behind my main Nginx, which acts as a reverse proxy for it. Only GitLab’s SSH port is mapped directly to the host.

You may decide to map the GitLab ports directly to the host’s external IP and use it that way. The instructions on the Docker GitLab image are given with this in mind.
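
To illustrate just the port-mapping part of my setup, a hedged sketch with hypothetical ports (omitting the database/Redis containers and environment variables that the image’s README requires):

# HTTP published only on localhost for the reverse proxy,
# SSH mapped directly to the host:
docker run -d --name gitlab \
    -p 127.0.0.1:10080:80 \
    -p 10022:22 \
    sameersbn/gitlab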

Now let’s say you have a running GitLab server, with or without an Nginx reverse proxy in front. It works with ssh:// and http:// access for git clone/pull/push. You are getting used to it, and now you want to do it the professional way: using HTTPS!

The first thing to consider is where you will get an SSL certificate from.

There are many companies out there selling good certificates for web sites, mail servers, etc. Some are expensive, others very cheap; the price depends on things like whether you need the certificate for multiple subdomains and whether it is recognized by all browsers and other software. A third kind is free. Free certificates are limited, but they are recognized by browsers and, in our case, by git/curl/the OS too. With them we don’t need to create self-signed certificates and force everybody using our site to install certificates locally or accept warnings.

My personal choice is StartSSL. They give you a free certificate, it’s signed by a recognized CA and it works with a subdomain. I haven’t yet checked StartSSL’s paid alternatives or other providers of free certificates, but for our case we only need the certificate for one thing in one place. In my case: https://gitlab.iliyan-trifonov.com and git clone https://gitlab.iliyan-trifonov.com/..project-name.git.

Now go to the free certificate page on StartSSL and sign up. Create a backup of the certificate you will install in your browser, because without it you will have to pay to recover access to your account. When you’re ready and inside the site’s Control Panel, authorize your email and domain. As a subdomain I picked gitlab, as in gitlab.iliyan-trifonov.com.

From StartSSL you will get a .key file and a .crt file. Back them up.

There is a little more work needed to make the certificate compatible with git. Even before that you can already use the .key and .crt files in your web server and serve your site over HTTPS, but that is not enough for us, so let’s continue.

First decrypt the key file using the following command and the password you provided while generating it on StartSSL:

openssl rsa -in ssl.key -out ssl-decrypted.key

This prevents your web server from asking you for the password every time it starts. Imagine the downtime if the server restarted automatically and then sat waiting for a human to continue its work.
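
It’s also worth a quick sanity check that the certificate and the decrypted key belong together: their moduli must be identical, or the web server will refuse the pair:

openssl x509 -noout -modulus -in ssl.crt | openssl md5
openssl rsa -noout -modulus -in ssl-decrypted.key | openssl md5
# the two checksums must match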

We need two more files, freely available from StartSSL, to combine our certificate with their Root CA and Intermediate CA certificates:

wget http://www.startssl.com/certs/ca-sha2.pem
wget http://www.startssl.com/certs/sub.class1.server.sha2.ca.pem

Combine the 3 certificate files you have:

cat ssl.crt sub.class1.server.sha2.ca.pem ca-sha2.pem > ssl-unified.crt

It’s possible that the concatenated .crt file has some BEGIN/END markers glued together on the same line (this happens when a file lacks a trailing newline). If your web server is not happy with that and reports a bad end of file in its logs, do this: open ssl-unified.crt with a text editor like nano and search for a line like this:

-----END CERTIFICATE----------BEGIN CERTIFICATE-----

or

-----BEGIN CERTIFICATE----------END CERTIFICATE-----

Make sure BEGIN and END are on separate lines and that each marker keeps its full five dashes on both sides, like this:

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

Not like this or similar:

-----BEGIN CERTIFICATE----
------END CERTIFICATE-----

The dashes on the left and right must be equal in number: five on each side.
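
Alternatively, you can rebuild the bundle in a way that guarantees a trailing newline after every file, so the markers never stick together:

awk 1 ssl.crt sub.class1.server.sha2.ca.pem ca-sha2.pem > ssl-unified.crt
# "awk 1" prints every input line and adds the final newline a file may be missing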

Now the only 2 files you need to use in your web server are ssl-unified.crt and ssl-decrypted.key.

It’s time to move on to Nginx and create the configuration:

server {
    server_name gitlab.yourdomain.com;
    listen 80;

    access_log /var/log/gitlab.access.log;
    error_log /var/log/gitlab.error.log;

    return 301 https://$server_name$request_uri;
}

server {
    server_name gitlab.yourdomain.com;
    listen 443 ssl;

    ssl_certificate /.../certs/ssl-unified.crt;
    ssl_certificate_key /.../certs/ssl-decrypted.key;

    access_log /var/log/gitlab.ssl.access.log;
    error_log /var/log/gitlab.ssl.error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_read_timeout 300;
        proxy_pass http://INTERNAL_IP:INTERNAL_PORT; # the Docker GitLab container's address
    }
}

Remember that this example Nginx configuration is for Nginx running as a reverse proxy in front of the container. You may have decided not to use a reverse proxy; in that case, use the really easy configuration of the Dockerized GitLab from the link above, or change GitLab’s own server configuration in a similar way. Either way, the preparation of proper .crt and .key files and the server configuration shown here will give you a good start.
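
After editing the configuration, test it and reload Nginx; the syntax check will also catch the “bad end of file” problem from earlier:

sudo nginx -t        # validate the configuration
sudo nginx -s reload # apply it without downtime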

For better security read Optimizing HTTPS on Nginx, Configuring HTTPS servers and Enabling Perfect Forward Secrecy.
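
Finally, you can verify from the outside that the full certificate chain is served correctly (replace the hostname with yours):

echo | openssl s_client -connect gitlab.yourdomain.com:443 -servername gitlab.yourdomain.com 2>/dev/null | grep -E 'subject=|issuer=|Verify return code'
# "Verify return code: 0 (ok)" means the chain validates against your system's CA store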

In the end you will have your GitLab pages loading over HTTPS, and everybody can git clone https://yourgitlabserver.com/..project.git like a pro!