Most requested technologies by number of available jobs on Stack Overflow

Lately I’ve been thinking about which technologies are “trending”. Are employers looking for more Java developers, data scientists, machine learning specialists, or JavaScript experts? And if you truly want to be a full-stack developer, what should you be devoting your time to learning?

My curiosity led me to write a quick script to crawl Stack Overflow and extract data from their current job postings. Unfortunately, due to Stack Overflow’s four-week limit on active job posts, the data may not be as revealing as, say, six months’ to a year’s worth of sampling.

Running the script this afternoon returned the following results. The top 20 technologies tagged most frequently in Stack Overflow’s current job board postings are as follows.

Technology Jobs
java 163
javascript 153
python 90
c# 86
reactjs 80
node.js 62
amazon-web-services 59
linux 56
angularjs 53
sql 51
php 43
c++ 43
cloud 42
.net 41
css 40
html5 33
html 32
mysql 30
android 29
postgresql 28

How did I get these numbers?

require 'nokogiri'
require 'open-uri'
require 'terminal-table'

pages = *(1..27)
data = {}

pages.each do |page|
  puts "Scraping page #{page}"
  # the listing URL was stripped from this post; Stack Overflow's job board
  # paginated with a pg query parameter
  doc = Nokogiri::HTML(open("{page}"))

  # job tags were rendered as links with the post-tag class
  doc.css('a.post-tag').each do |link|
    text = link.children.text
    if data.key?(text)
      data[text] += 1
    else
      data[text] = 1
    end
  end
end

rows = data.sort_by { |a| a[1] }.reverse
table = rows)

puts table

As you can see, that’s Ruby… So, yeah. Not exactly encouraging results.

In case you’re curious, the full dataset is available below.

Technology Jobs
java 163
javascript 153
python 90
c# 86
reactjs 80
node.js 62
amazon-web-services 59
linux 56
angularjs 53
sql 51
php 43
c++ 43
cloud 42
.net 41
css 40
html5 33
html 32
mysql 30
android 29
postgresql 28
scala 27
sysadmin 26
spring 25
ios 25
ruby 25
agile 23
machine-learning 23
docker 23
rest 21
user-interface 21
web-services 21
sql-server 20
automation 20
azure 19
qa 19
mongodb 18
ruby-on-rails 18
angular 17
testing 17
design 17
project-management 17
css3 16
mobile 16
hadoop 16
go 16
java-ee 15
apache-spark 15
swift 13
objective-c 13
devops 13
git 12
tdd 12
unix 12
r 11
nosql 11
typescript 11
c 11
user-experience 10
embedded 10
xml 9
django 9
jenkins 9
saas 9
hibernate 9
selenium 9
redis 9
oracle 8
elasticsearch 8
scrum 8
jquery 8
continuous-integration 8
bigdata 7
puppet 7
react-native 7
spring-boot 7
api 7
microservices 7
clojure 7
google-cloud-platform 6
windows 6
oop 6
vue.js 6
kotlin 6
chef 6
redux 6
algorithm 6
http 6
sharepoint 5
kubernetes 5
frontend 5
automated-tests 5
openstack 5
tensorflow 5
data-science 5
symfony2 4
visual-studio 4
powershell 4
maven 4
backend 4
tsql 4
react 4
wpf 4
osx 4
artificial-intelligence 4
rabbitmq 4
spring-mvc 4
sass 4
groovy 4
salesforce 4
json 4
analytics 3
eclipse 3
xcode 3
deep-learning 3
sap 3
coffeescript 3
storage 3
linux-kernel 3
hybris 3
perl 3
photoshop 3
ecmascript-6 3
qt 3
release-management 3
configuration 3
open-source 3
augmented-reality 3
react-redux 3
jira 3
heroku 3
d3.js 3
soap 3
rx-java 3
laravel 3
less 3
data 3
amazon-ec2 3
soa 3
terraform 3
api-design 3
android-studio 3
iot 3
knockout.js 3
etl 3
elixir 2
ux 2
scripting 2
google-analytics 2
database 2
phpunit 2
.net-core 2
iis 2
vhdl 2
agile-project-management 2
memcached 2
entity-framework 2
responsive-design 2
security 2
lambda 2
amazon-s3 2
active-directory 2
apex-code 2
search 2
ansible 2
product 2
cython 2
mule 2
lamp 2
adobe 2
kernel 2
operating-system 2
apache 2
wordpress 2
akka 2
citrix 2
symfony 2
drupal 2
release 2
performance-testing 2
wireframe 2
angular2 2
distributed-system 2
amazon-iam 2
rdbms 2
directx 2
opengl 2
data-visualization 2
python-3.x 2
sdlc 2
usability 2
cocoa 2
junit 2
mvc 2
continuous-deployment 2
nginx 2
waterfall 2
event-sourcing 2
lucene 2
solr 2
3d 2
network 2
front-end 2
system 2
unit-testing 2
zend-framework 2
arm 2
rtos 2
server 2
scalability 2
sdk 2
ember.js 2
caffe 2
large-scale 2
fpga 2
aws 2
data-warehouse 2
business-intelligence 2
apache-kafka 2
ui 1
specflow 1
cyberark 1
iam 1
pandas 1
database-design 1
arkit 1
tableau 1
robotframework 1
agile-processes 1
redhat 1
glusterfs 1
polyglot 1
nodes 1
functional-programming 1
iaas 1
ms-access 1
azure-sql-database 1
browserstack 1
selenium-webdriver 1
scikit-learn 1
messaging 1
e-commerce 1
unity3d 1
ibm-watson-cognitive 1
bpm 1
data-structures 1
bdd 1
spock 1
java8 1
xilinx 1
lattice 1
altera 1
jpa 1
spring-cloud-netflix 1
ionic-framework 1
office365 1
wsdl 1
uml 1
webmethods 1
tomcat 1
desktop 1
sccm 1
ember 1
shopify 1
liquid 1
caml 1
distributed 1
shell 1
ssis 1
powerview 1
azure-service-fabric 1
actionscript-3 1
mesos 1
es6 1
software-design 1
mapreduce 1
integrated 1
globalization 1
orchestration 1
relational-database 1
oauth 1
saml 1
magento2 1
magento 1
storyboard 1
uikit 1
scrummaster 1
dynamics-crm 1
stl 1
vert.x 1
plsql 1
kpi 1
projects 1
key 1
crm 1
jee 1
pair-programming 1
code-review 1
production-support 1
itil 1
cmake 1
boost 1
disaster-recovery 1
iis-8 1
python-2.7 1
google-chrome-devtools 1
gecko 1
blink 1
webkit 1
recommendation-engine 1
personalization 1
bonita 1
coq 1
haskell 1
mvvm 1
playframework 1
android-layout 1
performance 1
ajax 1
virtualization 1
vpn 1
nas 1
vmware 1
caching 1
high-availability 1
adobe-illustrator 1
nhibernate 1
research 1
data-science-experience 1
visualforce 1
soql 1
solution 1
erlang 1
sharepoint-2013 1
jvm 1
emr 1
navision 1
windows-server-2012 1
jetty 1
salesforce-lightning 1
apex 1
bower 1
amazon-cloudformation 1
mongo 1
spinnaker 1
security-testing 1
fortify 1
uft-api 1
hardware 1
share-point 1
sitecore 1
data-modeling 1
collaborative-filtering 1
firmware 1
php-5.5 1
tao 1
uft14 1
juniper-network-connect 1
juniper 1
voip 1
angular-fullstack 1
realm 1
avfoundation 1
swift3 1
atlassian 1
github 1
php-7 1
sketch-3 1
desktop-application 1
scala.js 1
dns 1
exchange-server 1
ejb 1
jsp 1
jsf 1
embedded-linux 1
linux-device-driver 1
version-control 1
cognos-tm1 1
twincat 1
ros 1
vault 1
bitbucket 1
cuda 1
tcp-ip 1
mqtt 1
xmpp 1
m2m 1
ssas 1
apache-samza 1
infrastructure 1
model-view-controller 1
full-text-search 1
mapbox-gl-js 1
gis 1
rx-java2 1
backbone 1
geopandas 1
integration 1
c++11 1
c#-4.0 1
native 1
client 1
webpack 1
rails 1
nlp 1
react-router 1
express 1
mangodb 1
extjs 1
cassandra 1
internationalization 1
viper 1
espresso 1
intershop 1
continuous-delivery 1
backbone.js 1
ubuntu 1
debian 1
mysql-workbench 1
laravel-5 1
gradle 1
visualization 1
enterprise 1
stream 1
verification 1
validation 1
systems 1
vaadin 1
phonegap-plugins 1
cordova 1
gwt 1
bash 1
graphics 1
center 1
dataservice 1
workspace 1
gcp 1
aws-opsworks 1
low-latency 1
zend 1
social 1
android-gradle 1
twitter-bootstrap 1
jquery-ui 1
f# 1
computer-architecture 1
fog 1
containers 1
eda 1
mdx 1

Special thanks to my editor wife, Becky Quintal whose demanding pursuit of engaging text led her to co-author this post.

Authorizing Rails (micro)services with JWT


So, let’s say that, like me, you don’t want to implement OAuth for your Rails API calls, but you’re still looking for something safer than just rolling your own sloppy token scheme. Well, a pretty good alternative is JWT (JSON Web Tokens). I’m not going to explain the concept in a very technical way because I’m all about implementation, but basically it works like this:

  • In the secured API you have a database with user credentials (e.g. username and password)
  • The application that makes requests generates a POST request to some login endpoint in the API (e.g. POST /user_token)
  • The API generates a token using a secure algorithm that contains all the necessary information about the user that’s making the request
  • The application uses this token for all the following requests by sending it in their headers
  • The API decodes the token and authorizes the user to receive the responses.

That’s JWT at its most basic. Of course there are a lot more technicalities behind it, and there are a lot of good resources out there to learn from.


We’re going to use the knock gem. This gem wraps a big part of the complexity and integrates very nicely with native Rails authentication.

First, let’s generate an API. I’m using Ruby 2.3.1 and Rails 5.0.0.rc1:

  $ rails new my_api --api

Now let’s create our users table with a secure password:

  $ rails g model user email:string password_digest:string
  $ rake db:migrate

Install the knock gem (following the repo instructions):

# Gemfile
gem 'knock'

$ bundle install
$ rails generate knock:install
$ rails generate knock:token_controller user

Those commands will generate an initializer with some customization options and the route and controller for retrieving the token.
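For reference, what those generators produce looks roughly like this (taken from the knock README; your initializer will have more commented-out options):

```ruby
# config/routes.rb -- route added by knock:token_controller
post 'user_token' => 'user_token#create'

# app/controllers/user_token_controller.rb -- generated controller
class UserTokenController < Knock::AuthTokenController
end
```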

Now let’s add the secure password method to our model so knock can have an authentication method:

class User < ApplicationRecord
  has_secure_password
end

Now open a rails console and create a user:

  $ rails c
  > User.create(email: '', password: 'securepassword', password_confirmation: 'securepassword')

Cool, we’re almost there. Open your ApplicationController and add the Knock module to it:

class ApplicationController < ActionController::API
  include Knock::Authenticable
end

And that’s it for the setup. Now we can start creating resources and adding a filter. Let’s add a new resource so we can test it:

$ rails g resource articles title:string body:text
$ rake db:migrate

And create some entries:

  $ rails c
  > Article.create(title: 'first article', body: 'first article body')
  > Article.create(title: 'second article', body: 'second article body')

Now, open the controller and add this filter and action:

class ArticlesController < ApplicationController
  before_action :authenticate_user

  def index
    render json: Article.all
  end
end

That index action is secured. First let’s try to hit that endpoint without authentication via cURL:

$ rails s --port 3000
$ curl -I localhost:3000/articles

HTTP/1.1 401 Unauthorized
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Type: text/html
Cache-Control: no-cache
X-Request-Id: fec8f4c4-b8f4-40f6-9971-7b6f0438f8cd
X-Runtime: 0.141929

Nice! We have a 401 response from the server. That means the filter is working. Now let’s hit the route that gives us a token by passing the credentials:

$ curl -H "Content-Type: application/json" -X POST -d '{"auth":{"email":"","password":"securepassword"}}' http://localhost:3000/user_token


If we get a JWT back, it means the login was successful. Now we can make requests by sending that token in the header, like so:

$ curl -i http://localhost:3000/articles -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE0NjUwOTYxMzMsInN1YiI6MX0.e9yeOf_Ik8UBE2dKlNpMu2s6AzxvzcGxw2mVj9vUjYI"

HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Type: application/json; charset=utf-8
ETag: W/"56960b8def640a1b6091df1cd3b0976e"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: d5ee7045-546d-478e-9748-a11d99a6a00f
X-Runtime: 0.014365
Transfer-Encoding: chunked

[{"id":1,"title":"first article","body":"first article body","created_at":"2016-06-04T03:04:25.997Z","updated_at":"2016-06-04T03:04:25.997Z"},{"id":2,"title":"second article","body":"second article body","created_at":"2016-06-04T03:04:39.995Z","updated_at":"2016-06-04T03:04:39.995Z"}]%

We got our articles back, so it’s working. Now let’s see how to consume this endpoint from another Rails service. Let’s create a new application:

  $ rails new consumer --api

Let’s add a route:

Rails.application.routes.draw do
  get '/articles', to: 'articles#index'

And a controller with an action that’s going to make a call to the articles endpoint:

class ArticlesController < ApplicationController
  def index


Now, first we have to make a post request in order to get the token back:

    uri = URI.parse('http://localhost:3000/user_token')
    req =, 'Content-Type' => 'application/json')
    req.body = { auth: { email: '', password: 'securepassword' } }.to_json
    res = Net::HTTP.start(uri.hostname, uri.port) do |http|
      http.request(req)
    end

    jwt_token = JSON.parse(res.body)['jwt']

That’s a pretty simple use of the Net::HTTP Ruby library; basically the same thing we did with cURL.

Now that we have that token, we can send it along the request:

    uri = URI.parse('http://localhost:3000/articles')
    Net::HTTP.start(uri.hostname, uri.port) do |http|
      request =
      request.add_field('Authorization', "Bearer #{jwt_token}")
      response = http.request(request)
      render json: JSON.parse(response.body)
    end

As you can see, that’s a regular HTTP GET request, but with a header that contains the Authorization field.

Let’s run this application and see if we can get the response:

$ rails s --port 4000
$ curl http://localhost:4000/articles

[{"id":1,"title":"first article","body":"first article body","created_at":"2016-06-04T03:04:25.997Z","updated_at":"2016-06-04T03:04:25.997Z"},{"id":2,"title":"second article","body":"second article body","created_at":"2016-06-04T03:04:39.995Z","updated_at":"2016-06-04T03:04:39.995Z"}]%

Great! If you hit that URL in your browser you should also see the JSON response. If you don’t, make sure you’re passing the correct credentials when requesting the token.

Your final controller should look like this:

class ArticlesController < ApplicationController
  def index
    uri = URI.parse('http://localhost:3000/user_token')
    req =, 'Content-Type' => 'application/json')
    req.body = { auth: { email: '', password: 'securepassword' } }.to_json
    res = Net::HTTP.start(uri.hostname, uri.port) do |http|
      http.request(req)
    end

    jwt_token = JSON.parse(res.body)['jwt']

    uri = URI.parse('http://localhost:3000/articles')
    Net::HTTP.start(uri.hostname, uri.port) do |http|
      request =
      request.add_field('Authorization', "Bearer #{jwt_token}")
      response = http.request(request)
      render json: JSON.parse(response.body)
    end
  end
end

In case you’re wondering: YES, you should refactor this code into a service object or whatever pattern you use for reusing code and avoiding big methods.

And that’s it! A very light and easy-to-implement mechanism for authorizing communication between your Rails API services (or microservices).

Thanks for reading.

Automatic Log into ECS Container Instances

The title of this post is kind of ambiguous; I didn’t know how else to put it, so I’ll describe the problem we had instead.

We have an ECS cluster with several container instances running different services. If you’ve played with Amazon ECS before, you’ll know it’s pretty difficult to ssh into your applications quickly. A lot of people say it’s bad practice to do this, but in the end, when you’re running applications in production, eventually you’re going to need to access them. Maybe to inspect a log, to run a task or a command, etc.

Amazon ECS uses the concepts of services and tasks to run applications. The container is launched via a task, which is managed and scheduled by a service. Since ECS wraps the Docker container with the elements of its own architecture, it can be difficult to find your tasks and log into your container. The good thing is that you have access to the AWS API (which is a fantastic API, IMO), and with a short script it’s possible to find the instance that’s running a given task by passing the service name.

In this case I’m using the ruby version of the API:

#!/usr/bin/env ruby

require 'rubygems'
require 'bundler/setup'
require 'aws-sdk'

# pass the service name as the only argument
service_name = ARGV[0]

# we'll need to use both the ecs and ec2 apis
ecs =
  region: 'YOURREGION',
  credentials:'YOURKEY', 'YOURSECRET')
)

ec2 =
  region: 'YOURREGION',
  credentials:'YOURKEY', 'YOURSECRET')
)
# first we get the ARN of the task managed by the service
task_arn = ecs.list_tasks({cluster: 'MYCLUSTER', desired_status: 'RUNNING', service_name: service_name}).task_arns[0]

# using the ARN of the task, we can get the ARN of the container instance where it's being deployed
container_instance_arn = ecs.describe_tasks({cluster: 'MYCLUSTER', tasks: [task_arn]}).tasks[0].container_instance_arn

# with the instance ARN, let's grab the instance id
ec2_instance_id = ecs.describe_container_instances({cluster: 'MYCLUSTER', container_instances: [container_instance_arn]}).container_instances[0].ec2_instance_id

# we need to describe the instance with this id using the ec2 api
instance = ec2.describe_instances({instance_ids: [ec2_instance_id]}).reservations[0].instances[0]

# finally we can get the name of the instance we need to log into
name ={ |t| t.key == 'Name' }[0].value

# ssh into the machine
exec "ssh #{name}"

Now you can run ./myscript service-name and you’ll be automatically logged into the container instance that’s running your task. Then run docker ps to get the container id, and finally docker exec -it CONTAINER_ID bash to log into the container. Much faster than going to the ECS web console, or running docker ps on every cluster instance until you find the one with the task you’re looking for.
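You could even fold the manual docker ps / docker exec step into the script. This helper is purely hypothetical (it’s not in the script above), and the `--filter name=` match is an assumption: ECS-generated container names usually embed the task family, but verify against your own docker ps output before relying on it:

```ruby
# Hypothetical: build a one-shot ssh command that drops you straight into the
# container. The remote shell resolves $(docker ps ...) on the instance itself.
def container_login_command(host, service_name)
  remote = "docker exec -it $(docker ps -q --filter name=#{service_name} | head -n 1) bash"
  "ssh -t #{host} '#{remote}'"
end

# the script's last line would become something like:
#   exec container_login_command(name, service_name)
```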

I’m not sure if there’s a better way of doing this, but it works for my use case. For the automatic login, you’ll need an alias for each instance in your ssh config file:

Host name-1
HostName XX.XX.XX.XX
User ec2-user
IdentityFile /path/to/my/pem/file

Host name-2
HostName XX.XX.XX.XX
User ec2-user
IdentityFile /path/to/my/pem/file

This way, if the script finds the host named name-1, it can run ssh name-1 and you can then log into your container.

If you’re interested in learning more about Rails and Amazon ECS, I’m writing a book that covers all the essential parts of the deployment process.

That’s it, thanks for reading!

Caching API requests in Rails

I’m currently working on a project that makes a lot of calls to external APIs such as Google Analytics, Mailchimp, Google’s DoubleClick for Publishers, etc., plus a couple of internal API calls. All of this to generate reports for our commercial team.

Generating these reports can take up to 2-3 minutes, so it would be nice to have some kind of caching mechanism. That way, if somebody wants to revisit a report later, it won’t take that long to display.

The library I’m using for making the requests is called Typhoeus. It’s pretty cool because it wraps all of the dirty, hard-to-remember methods from the lower-level HTTP libraries in a very clean DSL. And a very pleasant thing I discovered is that it includes built-in support for caching.

Suppose we have this method that calls some external API:

def my_method
  # a hypothetical external call; any Typhoeus request works here
  Typhoeus.get('')
end

Every time you call this method, you’re going to be hitting that endpoint. Now, with Typhoeus you can declare a cache class and pass that class to the cache configuration using an initializer:

class Cache
  def get(request)
  end

  def set(request, response)
    Rails.cache.write(request, response)
  end
end

Typhoeus::Config.cache =

Remember that if you want to test this in development mode, you must have this line in your config/environments/development.rb:

config.action_controller.perform_caching = true

And that’s it. Now the first time you call some endpoint using Typhoeus, the result will be cached and will be served by the cache system that you’re using in your Rails application.

One thing that’s not very clear in the Typhoeus documentation is how to pass options to the Cache class methods. In my case, I needed an expiration time. After some research, I found out that it’s as simple as passing the options as a third argument to Rails.cache.write, so in my case it would be:

def set(request, response)
  Rails.cache.write(request, response, expires_in: 3.hours)
end

Lastly, remember that all responses will be cached, even the bad ones. So if your endpoint responds with an error, you’ll have to clear the cache. Also remember that it’s not good practice to parse responses with the JSON library without first checking that the response is correct. Otherwise you’ll end up with some very ugly errors.
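A small guard like this (my own sketch, not part of the Typhoeus API) avoids blindly parsing error bodies:

```ruby
require 'json'

# Only parse bodies from successful responses; return nil for anything else,
# including 200s whose body isn't valid JSON.
def parse_if_ok(code, body)
  return nil unless code == 200
  JSON.parse(body)
rescue JSON::ParserError
  nil
end

parse_if_ok(200, '{"visits": 42}')                       # a parsed Hash
parse_if_ok(500, '<html>Internal Server Error</html>')   # nil, no exception
```

With Typhoeus you’d feed it `response.code` and `response.body`, and clear the cache entry whenever it returns nil.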

Thanks for reading!

Rails development with Docker and Vagrant

I’m developing Rails applications most of my time, so I’ve been trying to create a flexible, comfortable development environment that can be easily reproduced in production. This means I want to start developing right away against a real production app server and a real database, so no WEBrick or SQLite in this post, just real, useful stuff.


I’m going to show you how to set up a Rails environment using Nginx and Passenger for serving your application, and MySQL for your data. I know a lot of people prefer PostgreSQL, but the setup is pretty similar (I’m using MySQL for work-related reasons).

We will use Docker inside Vagrant. I think this approach is more flexible and universal than using just Docker, since the latter can generate inconsistencies between workspaces using boot2docker on OS X (like mine) and workspaces using Linux distributions as the host machine. Besides, Vagrant gives us native Docker provisioning, which saves a lot of Docker typing.

Note: I know about tools like Docker Compose, but since it’s still not suitable for production, I prefer to use just native Docker commands for linking and running my containers.


Create a Rails application

We’re going to start with a fresh Rails application. On your local machine, create a new application and select MySQL as the database.

rails new myapp -d mysql

Dockerfile for the application

We can use the official Passenger image to get a crafted environment configured by the Phusion team. Following the instructions from the repository, you get a very minimal Dockerfile.

FROM phusion/passenger-ruby22:0.9.15

# Set correct environment variables.
ENV HOME /root

# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Active nginx
RUN rm -f /etc/service/nginx/down

# Copy the nginx template for configuration and preserve environment variables
RUN rm /etc/nginx/sites-enabled/default
ADD myapp.conf /etc/nginx/sites-enabled/myapp.conf
ADD mysql-env.conf /etc/nginx/main.d/mysql-env.conf

# Create the folder for the project and set the workdir
RUN mkdir /home/app/myapp
WORKDIR /home/app/myapp

# Copy the project inside the container and run bundle install
COPY Gemfile /home/app/myapp/
COPY Gemfile.lock /home/app/myapp/
RUN bundle install
COPY . /home/app/myapp

# Set permissions for the passenger user for this app
RUN chown -R app:app /home/app/myapp

# Expose the port
EXPOSE 80

The myapp.conf is just a basic nginx configuration for serving the application:

server {
  listen 80;
  root /home/app/myapp/public;

  passenger_enabled on;
  passenger_user app;

  passenger_ruby /usr/bin/ruby2.2;
}

And the mysql-env.conf file is necessary for preserving the environment variables passed from Docker to Passenger. You can find more info about this in the image repository. In this case we just need the variables coming from the MySQL container that we’ll be linking with our app. If you need to pass more environment variables, just put them in this file.
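For reference, a minimal mysql-env.conf along those lines could look like this. The variable names come from Docker’s container-linking mechanism (`--link myapp-db:mysql` exposes them with the MYSQL_ prefix); check `env` inside the container if yours differ:

```nginx
# /etc/nginx/main.d/mysql-env.conf
# preserve the linked MySQL container's variables for Passenger
env MYSQL_PORT_3306_TCP_ADDR;
env MYSQL_PORT_3306_TCP_PORT;
env MYSQL_ENV_MYSQL_ROOT_PASSWORD;
```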


Put these files in the root of your application (Dockerfile, myapp.conf and mysql-env.conf).

Vagrant stuff

For Vagrant, create a new folder in your application and initialize it with a fresh Vagrantfile:

mkdir vagrant
cd vagrant
vagrant init

Replace the generated Vagrantfile with the following configuration:

# -*- mode: ruby -*-

Vagrant.configure(2) do |config| = "trusty"
  # any Trusty base box works; this is Canonical's official one
  config.vm.box_url = "" "forwarded_port", guest: 80, host: 8080
  # pick any private IP you like "private_network", ip: ""

  config.vm.synced_folder "../", "/myapp", :mount_options => ["uid=9999,gid=9999"]

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end

  config.vm.provision "docker" do |d|
    d.pull_images "mysql:5.7"

    d.build_image "/myapp", args: "-t myapp" "mysql:5.7",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp-db -e MYSQL_ROOT_PASSWORD=myapp" "myapp",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp -p 80:80 --link myapp-db:mysql -e PASSENGER_APP_ENV=development -v '/myapp:/home/app/myapp'"
  end
end

Let’s analyze this file. = "trusty"
config.vm.box_url = "" "forwarded_port", guest: 80, host: 8080 "private_network", ip: ""

This is just regular Vagrant stuff: we’re fetching the Trusty image for Ubuntu, forwarding ports to our host machine, and setting up a private network so we can access our running application from the host machine’s browser.

  config.vm.synced_folder "../", "/myapp", :mount_options => ["uid=9999,gid=9999"]

This line is important. We’re sharing our application folder, but in order not to mess up the permissions for the passenger user (with uid 9999), we have to set permissions on the mounted folder.

  config.vm.provision "docker" do |d|
    d.pull_images "mysql:5.7"

    d.build_image "/myapp", args: "-t myapp" "mysql:5.7",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp-db -e MYSQL_ROOT_PASSWORD=myapp" "myapp",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp -p 80:80 --link myapp-db:mysql -e PASSENGER_APP_ENV=development -v '/myapp:/home/app/myapp'"
  end

This section is where the magic happens. Using the Docker provisioner we can automate several things (I’m using Vagrant 1.7.2, in case you wonder).

First, we tell Vagrant that we want to pull the MySQL image from the Docker registry so it’s available right away after provisioning. Next, we tell Vagrant that we have a local Dockerfile in our shared folder and we want to build it and call the image “myapp”. This way Vagrant is going to look for a Dockerfile in that folder and execute a Docker build with the provided args. Pretty neat.

The following two segments are necessary for running the previously pulled and built images. The MySQL image is run in a very standard way.

For our “myapp” application we need to expose port 80 from the container to the host, create a Docker link with the MySQL container, set the Passenger environment variable, and mount a volume so we can work locally without rebuilding the image every time the code changes.

The last thing we need to do is change the MySQL configuration in our config/database.yml file.

default: &default
  adapter: mysql2
  encoding: utf8
  pool: 5
  username: root
  password: <%= ENV['MYSQL_ENV_MYSQL_ROOT_PASSWORD'] %>
  host: <%= ENV['MYSQL_PORT_3306_TCP_ADDR'] %>

development:
  <<: *default
  database: myapp_development

test:
  <<: *default
  database: myapp_test

production:
  <<: *default
  database: myapp_production

Here we’re using the environment variables that the MySQL container shares with the Rails application container.

Running all the stuff

Now that everything’s in place, we can run

cd vagrant
vagrant up

and wait until the command finishes.


To verify that nothing went wrong, we can go to the VM’s private IP (the one set in the Vagrantfile) and check whether the Rails application is running.

The first time, you should see a database error, since the database doesn’t exist yet.


It’s easy to fix: we just have to execute a rake db:create command inside the Passenger container. Remember that we named it ‘myapp’:

vagrant ssh
cd /myapp
docker exec -it myapp rake db:create

Now if you visit the IP you should see the classic Rails welcome page. Great!


I’m not sure if there’s a convention for working with Rails and containers yet, but in my case I haven’t had problems using the shared folders and running the rails and rake commands against the container.

For example, if you want to scaffold something, you can do something like this:

vagrant ssh
cd /myapp
docker exec -it myapp rails g scaffold posts title body:text
docker exec -it myapp rake db:migrate

Then if you visit the /posts route on the VM IP, you’ll see your scaffold running as usual, with the data stored in the MySQL database container.


One important detail is that Vagrant only runs your containers during provisioning. If you run vagrant halt and then just vagrant up, your images will still be there, but they won’t be running.

In my case it’s fine to run vagrant up --provision every time, since pulling the images is super fast thanks to the Docker cache.

The beauty of this whole setup is that if you want to deploy your application to production, you just need a machine with Docker installed to run your containers, and you can be pretty sure it’s going to work the same way as in your development environment.

In future posts I’ll talk more about what I’ve learned about deploying and managing your containers across different nodes of a production cluster.

Thanks for reading!