Authorizing Rails (micro)services with JWT


So, let’s say that like me, you don’t want to implement OAuth for your Rails API calls, but you’re still looking for something safer than just rolling your own sloppy token scheme. Well, a pretty good alternative is JWT (JSON Web Tokens). I’m not going to explain the concept in a very technical way because I’m all about implementation, but basically it works like this:

  • In the secured API you have a database with user credentials (e.g. username and password)
  • The application that makes requests generates a POST request to some login endpoint in the API (e.g. POST /user_token)
  • The API generates a token using a secure algorithm that contains all the necessary information about the user that’s making the request
  • The application uses this token for all the following requests by sending it in their headers
  • The API decodes the token and authorizes the user to receive the responses.

That’s JWT at its most basic. Of course there are a lot more technicalities behind it, and there are plenty of good resources out there to learn from.
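To make the flow concrete, here is a rough sketch of what a JWT library does under the hood for the HS256 case, using only Ruby’s standard library. The secret name is made up for illustration; in the setup below, the knock gem handles all of this for us:

```ruby
require 'openssl'
require 'base64'
require 'json'

# Stand-in secret; in Rails this would come from secret_key_base.
SECRET = 'some-long-random-secret'

def base64url(data)
  Base64.urlsafe_encode64(data, padding: false)
end

# Build header.payload.signature, each part base64url-encoded
def jwt_encode(payload, secret)
  header    = base64url({ typ: 'JWT', alg: 'HS256' }.to_json)
  body      = base64url(payload.to_json)
  signature = base64url(OpenSSL::HMAC.digest('SHA256', secret, "#{header}.#{body}"))
  "#{header}.#{body}.#{signature}"
end

# Verify the signature, then decode the payload
def jwt_decode(token, secret)
  header, body, signature = token.split('.')
  expected = base64url(OpenSSL::HMAC.digest('SHA256', secret, "#{header}.#{body}"))
  raise 'invalid signature' unless signature == expected
  JSON.parse(Base64.urlsafe_decode64(body))
end

token = jwt_encode({ sub: 1, exp: + 3600 }, SECRET)
```

Because the payload is signed with a secret only the API knows, the API can trust any token whose signature verifies, without a database lookup on every request.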


We’re going to use the knock gem. This gem wraps a big part of the complexity and integrates very nicely with native Rails authentication.

First, let’s generate an API. I’m using Ruby 2.3.1 and Rails 5.0.0.rc1:

  $ rails new my_api --api

Now let’s create our users table with secure password:

  $ rails g model user email:string password_digest:string
  $ rake db:migrate

Install the knock gem (following the repo instructions):

# Gemfile
gem 'knock'
$ bundle install
$ rails generate knock:install
$ rails generate knock:token_controller user

Those commands will generate an initializer with some customization options, plus the route and controller for retrieving the token.

Now let’s add the secure password method to our model so knock has an authentication method to call (make sure the bcrypt gem is enabled in your Gemfile, since has_secure_password depends on it):

class User < ApplicationRecord
  has_secure_password
end

Now open a rails console and create a user:

 $  rails c
  > User.create(email: '', password: 'securepassword', password_confirmation: 'securepassword')

Cool, we are almost there. Open your Application Controller and add the Knock Module to it:

class ApplicationController < ActionController::API
  include Knock::Authenticable
end

And that’s it for the setup. Now we can start creating resources and protecting them with a filter. Let’s add a new resource so we can test it:

$ rails g resource article title:string body:text
$ rake db:migrate

And create some entries:

  $ rails c
  > Article.create(title: 'first article', body: 'first article body')
  > Article.create(title: 'second article', body: 'second article body')

Now, open the controller and add this filter and action:

class ArticlesController < ApplicationController
  before_action :authenticate_user

  def index
    render json: Article.all
  end
end

That index action is now secured. First, let’s try to hit that endpoint without authentication via cURL:

$ rails s --port 3000
$ curl -I localhost:3000/articles

HTTP/1.1 401 Unauthorized
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Type: text/html
Cache-Control: no-cache
X-Request-Id: fec8f4c4-b8f4-40f6-9971-7b6f0438f8cd
X-Runtime: 0.141929

Nice! We got a 401 response from the server, which means the filter is working. Now let’s hit the route that gives us a token by passing the credentials:

$ curl -H "Content-Type: application/json" -X POST -d '{"auth":{"email":"","password":"securepassword"}}' http://localhost:3000/user_token


If we get the JWT back, it means the login was successful. Now we can make requests by sending that token in the headers in the following way:

$ curl -i http://localhost:3000/articles -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE0NjUwOTYxMzMsInN1YiI6MX0.e9yeOf_Ik8UBE2dKlNpMu2s6AzxvzcGxw2mVj9vUjYI"

HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Type: application/json; charset=utf-8
ETag: W/"56960b8def640a1b6091df1cd3b0976e"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: d5ee7045-546d-478e-9748-a11d99a6a00f
X-Runtime: 0.014365
Transfer-Encoding: chunked

[{"id":1,"title":"first article","body":"first article body","created_at":"2016-06-04T03:04:25.997Z","updated_at":"2016-06-04T03:04:25.997Z"},{"id":2,"title":"second article","body":"second article body","created_at":"2016-06-04T03:04:39.995Z","updated_at":"2016-06-04T03:04:39.995Z"}]%

We got our articles back, so it’s working. Now let’s see how to consume this endpoint from another Rails service. Let’s create a new application:

  $ rails new consumer --api

Let’s add a route:

Rails.application.routes.draw do
  get '/articles', to: 'articles#index'
end

And a controller with an action that’s going to make a call to the articles endpoint:

class ArticlesController < ApplicationController
  def index


First, we have to make a POST request in order to get the token back:

    uri = URI.parse('http://localhost:3000/user_token')
    req =, 'Content-Type' => 'application/json')
    req.body = { auth: { email: '', password: 'securepassword' } }.to_json
    res = Net::HTTP.start(uri.hostname, uri.port) do |http|
      http.request(req)
    end

    jwt_token = JSON.parse(res.body)['jwt']

That’s a pretty simple use of the Net::HTTP ruby library, basically the same thing we did with cURL.

Now that we have that token, we can send it along the request:

    uri = URI.parse("http://localhost:3000/articles")
    Net::HTTP.start(uri.hostname, uri.port) do |http|
      request =
      request.add_field("Authorization", "Bearer #{jwt_token}")
      response = http.request(request)
      render json: JSON.parse(response.body)
    end

You can see it’s a regular HTTP GET request, but with a header that contains the Authorization field.

Let’s run this application and see if we can get the response:

$ rails s --port 4000
$ curl http://localhost:4000/articles

[{"id":1,"title":"first article","body":"first article body","created_at":"2016-06-04T03:04:25.997Z","updated_at":"2016-06-04T03:04:25.997Z"},{"id":2,"title":"second article","body":"second article body","created_at":"2016-06-04T03:04:39.995Z","updated_at":"2016-06-04T03:04:39.995Z"}]%

Great! If you hit that URL in your browser you should also see the JSON response. If you don’t see the response, make sure you’re passing the correct credentials when requesting the token.

Your final controller should look like this:

class ArticlesController < ApplicationController
  def index
    uri = URI.parse('http://localhost:3000/user_token')
    req =, 'Content-Type' => 'application/json')
    req.body = { auth: { email: '', password: 'securepassword' } }.to_json
    res = Net::HTTP.start(uri.hostname, uri.port) do |http|
      http.request(req)
    end

    jwt_token = JSON.parse(res.body)['jwt']

    uri = URI.parse("http://localhost:3000/articles")
    Net::HTTP.start(uri.hostname, uri.port) do |http|
      request =
      request.add_field("Authorization", "Bearer #{jwt_token}")
      response = http.request(request)
      render json: JSON.parse(response.body)
    end
  end
end


In case you were wondering: YES! You should refactor this code into a service object, or whatever pattern you use for reusing code and avoiding big methods.

And that’s it! A very lightweight, easy-to-implement mechanism for authorizing communication between your Rails API services (or microservices).

Thanks for reading.

Automatic Log into ECS Container Instances

The title of this post is kind of ambiguous; I didn’t know how else to put it, so let me describe the problem we had.

We have an ECS cluster with several container instances running different services. If you have played with Amazon ECS before, you’ll know it’s pretty difficult to ssh into your applications quickly. A lot of people say it’s bad practice to do this, but in the end, when you’re running applications in production environments, eventually you’re going to need to access them. Maybe to inspect a log, to run a task or a command, etc.

Amazon ECS uses the concepts of services and tasks for running applications. The container is going to be launched via a task, which is managed and scheduled by a service. Since ECS wraps the Docker container with the elements of its own architecture, it can be difficult to find your tasks and log into your container. The good thing is you have access to the AWS API (which is a fantastic API IMO), and with a short script it’s possible to find the instance that’s running a given task by passing the service name.

In this case I’m using the ruby version of the API:

#!/usr/bin/env ruby

require 'rubygems'
require 'bundler/setup'
require 'aws-sdk'

# pass the service name as the only argument
service_name = ARGV[0]

Aws.config.update({
  region: 'YOURREGION',
  credentials:'YOURKEY', 'YOURSECRET')

# we'll need to use both the ecs and ec2 apis
ecs =
ec2 =

# first we get the ARN of the task managed by the service
task_arn = ecs.list_tasks({cluster: 'MYCLUSTER', desired_status: 'RUNNING', service_name: service_name}).task_arns[0]

# using the ARN of the task, we can get the ARN of the container instance where it's deployed
container_instance_arn = ecs.describe_tasks({cluster: 'MYCLUSTER', tasks: [task_arn]}).tasks[0].container_instance_arn

# with the instance ARN, let's grab the instance id
ec2_instance_id = ecs.describe_container_instances({cluster: 'MYCLUSTER', container_instances: [container_instance_arn]}).container_instances[0].ec2_instance_id

# we need to describe the instance with this id using the ec2 api
instance = ec2.describe_instances({instance_ids: [ec2_instance_id]}).reservations[0].instances[0]

# finally we can get the name of the instance we need to log into
name = { |tag| tag.key == 'Name' }.value

# ssh into the machine
exec "ssh #{name}"

Now you can run ./myscript service-name and you’ll be automatically logged into the container instance that’s running your task. Then you can run docker ps to get the container id, and finally docker exec -it CONTAINER_ID bash to log into the container. Much faster than going to the ECS web console or running docker ps on every cluster instance until you find the one that has the task you’re looking for.
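Those last two manual steps could also be folded into the script. A hedged sketch of a helper that builds the combined command (it assumes the container’s name contains the service name, which depends on how your task definitions are set up):

```ruby
# Hypothetical helper: build a command that ssh-es into the host and drops
# straight into the first container whose name matches the service.
def container_login_command(host, service)
  "ssh -t #{host} 'docker exec -it $(docker ps -q --filter name=#{service} | head -n 1) bash'"
end

# At the end of the script, instead of `exec "ssh #{name}"`:
# exec container_login_command(name, service_name)
```

The -t flag forces a TTY allocation so the interactive bash session inside the container works over ssh.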

I’m not sure if there’s a better way of doing this, but it works for my use case. For the automatic login to work, you’ll need an alias for each instance in your ssh config file:

Host name-1
HostName XX.XX.XX.XX
User ec2-user
IdentityFile /path/to/my/pem/file

Host name-2
HostName XX.XX.XX.XX
User ec2-user
IdentityFile /path/to/my/pem/file

This way, if the script finds the host named name-1, it can run ssh name-1, and from there you can log into your container.

If you’re interested in learning more about Rails and Amazon ECS, I’m writing a book that covers all the essential parts of the deployment process.

That’s it, thanks for reading!

Caching API requests in Rails

I’m currently working on a project that makes a lot of calls to external APIs such as Google Analytics, Mailchimp, Google’s DoubleClick for Publishers, etc., plus a couple of internal API calls. All of this to generate reports for our commercial team.

Generating these reports can take up to 2-3 minutes, so it would be nice to have some kind of caching mechanism. That way, if somebody revisits the report later, it won’t take that long to display.

The library I’m using for making the requests is called Typhoeus. It’s pretty cool because it wraps all of the dirty, hard-to-remember methods from the lower-level HTTP libraries in a very clean DSL. And a very pleasant thing I discovered is that it includes built-in support for caching.

Suppose we have this method that calls some external API:

def my_method
  # hypothetical endpoint; replace with the API you're actually calling
  Typhoeus.get('')
end

Every time you call this method, you’re going to be hitting that endpoint. Now, with Typhoeus you can declare a cache class and pass that class to the cache configuration using an initializer:

class Cache
  def get(request)
  end

  def set(request, response)
    Rails.cache.write(request, response)
  end
end

Typhoeus::Config.cache =

Remember that if you want to test this in development mode, you must have this line in your config/environments/development.rb:

config.action_controller.perform_caching = true

And that’s it. Now the first time you call some endpoint using Typhoeus, the result will be cached and will be served by the cache system that you’re using in your Rails application.

One thing that’s not very clear in the Typhoeus documentation is how to pass options to the Cache class methods. In my case, I needed an expiration time. After some research, I found out that it’s as simple as passing the options as a third argument to Rails.cache.write, so in my case it would be:

def set(request, response)
  Rails.cache.write(request, response, expires_in: 3.hours)
end

Lastly, remember that all of the responses will be cached, even the bad ones. So if your endpoint responds with an error, you’ll have to clear the cache. Also remember that it’s not good practice to parse a response with the JSON library without first checking that the response is correct. Otherwise you’ll end up with some very ugly errors.

Thanks for reading!

Rails development with Docker and Vagrant

I’m developing Rails applications most of my time, so I’ve been trying to create a flexible, comfortable development environment that can be easily reproduced in production. This means I want to start developing right away using a real production app server and a real database; no WEBrick or SQLite in this post, just real, useful stuff.


I’m going to show you how to set up a Rails environment using Nginx and Passenger for serving your application, and MySQL for your data. I know a lot of people prefer PostgreSQL, but the setup is pretty similar (I’m using MySQL for work-related reasons).

We will use Docker inside Vagrant. I think this approach is more flexible and universal than using just Docker, since that can generate inconsistencies between workspaces using boot2docker on OS X (like me) and workspaces using Linux distributions as the host machine. Besides, Vagrant gives us native Docker provisioning, which saves a lot of Docker typing.

Note: I know about tools like Docker Compose, but since it’s still not suitable for production, I prefer to use just native Docker commands for linking and running my containers.


Create a Rails application

We’re going to start with a fresh Rails application. So on your local machine, create a new application and select MySQL as the database.

rails new myapp -d mysql

Dockerfile for the application

We can use the official Passenger image to get an environment crafted by the Phusion team. Following the instructions from the repository, you get a very minimal Dockerfile.

FROM phusion/passenger-ruby22:0.9.15

# Set correct environment variables.
ENV HOME /root

# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Activate Nginx
RUN rm -f /etc/service/nginx/down

# Copy the nginx template for configuration and preserve environment variables
RUN rm /etc/nginx/sites-enabled/default
ADD myapp.conf /etc/nginx/sites-enabled/myapp.conf
ADD mysql-env.conf /etc/nginx/main.d/mysql-env.conf

# Create the folder for the project and set the workdir
RUN mkdir /home/app/myapp
WORKDIR /home/app/myapp

# Copy the project inside the container and run bundle install
COPY Gemfile /home/app/myapp/
COPY Gemfile.lock /home/app/myapp/
RUN bundle install
COPY . /home/app/myapp

# Set permissions for the passenger user for this app
RUN chown -R app:app /home/app/myapp

# Expose the port

The myapp.conf is just a basic nginx configuration for serving the application:

server {
    listen 80;
    root /home/app/myapp/public;

    passenger_enabled on;
    passenger_user app;

    passenger_ruby /usr/bin/ruby2.2;
}
And the mysql-env.conf file is necessary for preserving the environment variables passed from Docker to Passenger. You can find more info about this in the image repository. In this case, we just need the variables coming from the MySQL container that we’ll be linking with our app. If you need to pass more environment variables, just put them in this file.
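For reference, the file is just a list of Nginx env directives, one per variable you want Passenger to see. For the MySQL link, it would look something like this (the exact variable names follow Docker’s link environment convention; double-check them against docker inspect on your own containers):

```nginx
# mysql-env.conf
env MYSQL_PORT_3306_TCP_ADDR;
env MYSQL_PORT_3306_TCP_PORT;
env MYSQL_ENV_MYSQL_ROOT_PASSWORD;
```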


Put these files in the root of your application (Dockerfile, myapp.conf and mysql-env.conf).

Vagrant stuff

For Vagrant, create a new folder in your application and initialize it with a fresh Vagrantfile:

mkdir vagrant
cd vagrant
vagrant init

Replace the generated Vagrantfile with the following configuration:

# -*- mode: ruby -*-

Vagrant.configure(2) do |config| = "trusty"
  config.vm.box_url = "" "forwarded_port", guest: 80, host: 8080 "private_network", ip: ""

  config.vm.synced_folder "../", "/myapp", :mount_options => ["uid=9999,gid=9999"]

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"

  config.vm.provision "docker" do |d|
    d.pull_images "mysql:5.7"

    d.build_image "/myapp", args: "-t myapp" "mysql:5.7",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp-db -e MYSQL_ROOT_PASSWORD=myapp" "myapp",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp -p 80:80 --link myapp-db:mysql -e PASSENGER_APP_ENV=development -v '/myapp:/home/app/myapp'"
end


Let’s analyze this file. = "trusty"
config.vm.box_url = "" "forwarded_port", guest: 80, host: 8080 "private_network", ip: ""

This is just regular Vagrant stuff: we’re fetching the trusty image for Ubuntu, forwarding ports to our host machine, and setting up a private network so we can access the running application from our host machine’s browser.

  config.vm.synced_folder "../", "/myapp", :mount_options => ["uid=9999,gid=9999"]

This line is important. We’re sharing our application folder, but in order not to mess up the permissions for the passenger user (with uid 9999), we have to set permissions for the mounted folder.

  config.vm.provision "docker" do |d|
    d.pull_images "mysql:5.7"

    d.build_image "/myapp", args: "-t myapp" "mysql:5.7",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp-db -e MYSQL_ROOT_PASSWORD=myapp" "myapp",
      auto_assign_name: false,
      daemonize: true,
      args: "--name myapp -p 80:80 --link myapp-db:mysql -e PASSENGER_APP_ENV=development -v '/myapp:/home/app/myapp'"
  end

This section is where the magic happens. Using the Docker provisioner, we can automate several things (I’m using Vagrant 1.7.2, in case you’re wondering).

First, we tell Vagrant that we want to pull the MySQL image from the Docker registry, so it’s available right away after provisioning. Next, we’re telling Vagrant that we have a local Dockerfile in our shared folder and we want to build it and call the image “myapp”. This way, Vagrant is going to look for a Dockerfile in that folder and execute a Docker build using the provided args. Pretty neat.

The following two segments are necessary for running the previously pulled and built images. The MySQL image is run in a very standard way.

For our “myapp” application, we need to expose port 80 from the container to the host, create a Docker link with the MySQL container, set the Passenger environment variable, and mount a volume so we can work locally without rebuilding the image every time we change the code.

The last thing we need to do is change the MySQL configuration in our config/database.yml file.

default: &default
  adapter: mysql2
  encoding: utf8
  pool: 5
  username: root
  password: <%= ENV['MYSQL_ENV_MYSQL_ROOT_PASSWORD'] %>
  host: <%= ENV['MYSQL_PORT_3306_TCP_ADDR'] %>

development:
  <<: *default
  database: myapp_development

test:
  <<: *default
  database: myapp_test

production:
  <<: *default
  database: myapp_production

Here we’re using the environment variables that the MySQL container shared with the Rails application container.

Running all the stuff

Now that’s all in place, we can run

cd vagrant
vagrant up

and wait until the command is finished.


In order to verify that nothing went wrong, we can go to the VM’s private IP in the browser and check if the Rails application is running.

The first time, you should see an error, because the database doesn’t exist yet. Easy to fix: we just have to execute a rake db:create command inside the passenger container. Remember that we named it ‘myapp’:

vagrant ssh
cd /myapp
docker exec -it myapp rake db:create

Now if you visit the IP you should see the classic Rails welcome page. Great!


I’m not sure if there’s a convention for working with Rails and containers yet, but in my case I haven’t had problems using the shared folders and running the Rails and rake commands against the container.

For example, if you want to scaffold something, you can do something like this:

vagrant ssh
cd /myapp
docker exec -it myapp rails g scaffold posts title body:text
docker exec -it myapp rake db:migrate

Then, if you visit the /posts route on the VM IP, you’ll see your scaffold running as usual, with the data stored in the MySQL database container.


One important detail is that Vagrant only runs your containers during provisioning. If you run vagrant halt and then just vagrant up, your images will still be there, but they won’t be running.

In my case, it’s fine to run vagrant up --provision every time, since pulling the images is going to be super fast thanks to the Docker cache.

The beauty of this whole setup is that if you want to deploy your application to production, you just need a machine with Docker installed to run your containers, and you can be pretty sure it’s going to work the same way as in your development environment.

In future posts I’ll talk more about what I’ve learned about deploying and managing your containers across different nodes of a cluster in production.

Thanks for reading!

Vim On Rails

I’m going to talk about some of the plugins and configuration that I use every day at work, which is mostly developing Ruby on Rails applications.

In my case there are some elements that make a big difference when using a text editor:

  • Switching between files and directories
  • VCS support (git of course)
  • Movement inside of a file
  • Shortcuts that make my life easy

Of course some of these elements can be irrelevant for other users. But I think that any Rails developer could gain a lot using an editor with good support for all of these features.

So, let’s go to the good stuff.

Switching between files and directories

There’s this great plugin written by Tim Pope called vim-rails that adds a lot of sweet commands to Vim.

You can use :Econtroller to navigate controllers, :Emodel to go to some model, :Eview for views, :Emailer to…well, you get the idea.

The best thing about these commands is that you can use tab for autocompletion. You can also use abbreviated versions of some of them. For example, :Econtroller can be typed as :Eco and :Emodel as :Emo.


Other cool E-commands I use a lot are :Emigration and :Einitializer. :Emigration takes you to the latest migration, which is super useful if you use the Rails migration generator and want to verify that everything is OK with the last generated migration. :Einitializer takes you to the routes.rb file. Every experienced Rails developer knows they’re going to make a lot of visits to that file.


If you are into testing (and I hope you are), you need to start using the :Alternate command. It takes you to the file related to the current one. Personally, I just use it to go from a model or controller to the corresponding test file. The short version is :A. So if you’re in the User model and execute :A, it takes you to the User spec. This command is highly customizable, but as I mentioned, I’ve been using it just to navigate to the specs and back.


Explorer type navigation

If you’re used to file navigation with a tree-style explorer, try vim-vinegar. In this post there’s a great explanation of why vim-vinegar is superior to NERDTree. Basically, you can turn any buffer into a file explorer, so you never get confused about which buffer is going to be replaced when you select a file in the explorer while working with split windows.

Vim-vinegar can be used for all the standard operations like creating, deleting and moving files. I’ve been using this plugin a lot, and I can tell you it has made a huge difference in my workflow.


Comments and ends

Two small plugins that are going to save you some time are vim-commentary and vim-endwise. The first is a solid implementation of line commenting, and the second automatically adds end statements after you declare a method definition or a block.


VCS support

This section only covers Git Version Control, because come on.

Vim has probably the best Git wrapper of any text editor out there. vim-fugitive is one of those things that, with time, becomes an indispensable tool.

With vim-fugitive you can commit, add, pull, push, rebase, blame, diff, etc, without leaving Vim. It makes a heavy use of helper buffers to facilitate the interaction with complex commands.

In this picture you can see a typical basic git workflow using some extra mappings.


Fuzzy finding

I’m not a big fan of global fuzzy finders. It’s way more useful to have a quick finder for your opened buffers. Still, a global fuzzy finder can be convenient when you’re in a project with a structure that you’re not used to.

CtrlP is the de facto fuzzy finder for Vim. I tried other, newer plugins for a while (for example, Unite.vim), but CtrlP is much more stable and less buggy.

Some of my configuration:

map <Leader>b :CtrlPBuffer<cr>
let g:ctrlp_match_window_bottom   = 0
let g:ctrlp_match_window_reversed = 0

This way you can search quickly through your buffer list using your leader key and b. The other two lines are just personal preferences (window position and file order in the list).


So, that’s it for now. I hope you enjoyed this post. Try adding some of these tips to your Rails workflow with Vim, or maybe put your current editor aside and give Vim a try :).