This blog has been archived. Our writing has moved to makandra cards.
The blog of makandra, a Ruby on Rails development team

Your Rails 3.2 project should switch to Rails LTS

After many months of work, Rails 5 was released today. Thanks to almost a thousand committers, this new version brings exciting new features such as ActionCable and the attributes API.

Unfortunately, this also means that your trusty Rails 3.2 app is no longer getting security patches from the open-source community, which now has its hands full maintaining versions 4 and 5.

Rails 3.2 has been on limited maintenance for the past two years. With the Rails 5 release, this support has ended entirely. If your business uses Rails 3.2, you should act now to prevent data breaches and liability.

If your team can't or doesn't want to upgrade your application now, you can switch to Rails LTS and continue to receive security updates. Rails LTS is a drop-in replacement for the official Ruby on Rails gems.

For more information about Rails LTS visit https://railslts.com/.

Why I'm closing your Github issue

If I just closed your Github issue with a one-line comment, I'm sorry.

I open-source a lot of libraries on Github. This has been a huge source of stress and guilt for me. Users demanding that I fix their edge-case issue, demanding compatibility with an unstable alpha of another library, people demanding that I answer their project-specific question on Stack Overflow, demanding progress if I'm too slow to respond to any of the above.

I've been thinking a lot about how to not burn out over this. Many issue authors expect commercial levels of support for this free product they're using. But that's not something I can give. I open-source my projects so others can debug and fix things without depending on me.

So here's an exhaustive list of how I will be using Github issues from here on out:

  1. Discuss the best way to make a PR

I won't implement your feature, and I won't fix your edge-case bug. What I will do is discuss whether a proposed change is within the scope of the library, and explain the implementation to help you build your PR. Other than that, I just can't help you. I'm sorry.

PostgreSQL query optimization

With nanomize we built a URL shortener that complies strictly with the German Federal Data Protection Act. One core element of nanomize is the customer's analytics dashboard, which helps understand the user flow behind links.

While the application scales perfectly for the current requirements, we also want to guarantee the same performance for companies with high traffic. Along the way we learned a lot about how to tune a PostgreSQL database with many rows.

Understand the architecture

A user doesn't notice much of what is going on when she is redirected from a short URL to its destination. But nanomize stores a bunch of data about this redirect: the related link, the referrer URL, the country code extracted from the IP address, and more. All in all, each redirect creates one row in a nanomize table, on which analysis can be performed later.

Things become slow

To understand the upcoming bottlenecks, we use a simplified version of a nanomize table, which we call hits and fill with 100 million rows. The timestamps are distributed uniformly over one year; link_id and country_code are chosen randomly (#link_ids = 100 and #country_codes = 250). There are no indices yet, so we can improve performance step by step. An extract from the hits table is shown below, followed by a sketch of how such a table can be set up.

 id | link_id | country_code | created_at
----+---------+--------------+------------------
  1 |       1 | US           | 2016-03-07 10:51
  2 |       1 | US           | 2016-03-07 10:52
  3 |       2 | UK           | 2016-03-07 10:53
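
For reference, here is a minimal sketch of how such a test table could be created and filled. The column types and the random expressions are assumptions for illustration; the exact generation script behind these benchmarks is not reproduced here.

-- Sketch only: types and random expressions are approximations
CREATE TABLE hits (
  id           serial,
  link_id      integer,
  country_code varchar(2),
  created_at   timestamp
);

-- 100 million rows: timestamps spread uniformly over one year,
-- link_id in 1..100, country_code as two random letters
INSERT INTO hits (link_id, country_code, created_at)
SELECT
  (random() * 99)::int + 1,
  chr(65 + (random() * 25)::int) || chr(65 + (random() * 25)::int),
  timestamp '2015-06-01 00:00' + random() * interval '1 year'
FROM generate_series(1, 100000000);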

Furthermore, we eliminate caching effects by dropping Postgres' buffer cache and the operating system's page cache before each query.

service postgresql stop             # restarting Postgres also clears its buffer cache
sync                                # flush dirty pages to disk
echo 3 > /proc/sys/vm/drop_caches   # drop the OS page cache, dentries and inodes
service postgresql start

Since no indices or triggers exist yet, an INSERT takes 171 ms, which is as fast as it gets. That's a useful baseline to compare against later configurations. The next query is a SELECT counting the total number of entries in the hits table.

SELECT count(*) FROM hits;
Time: 64306.54935 ms

It takes over a minute until a result returns. For implementation reasons, counting is slow in Postgres, and there is no trick to speed it up significantly. Keep this in mind and make sure you really need an exact count: often the exact number is only required because of a questionable implementation (e.g. infinite scrolling), or an estimate is enough (a cheap way to get one is shown below). Nevertheless, we want to understand if and how an index could improve counting, so we need to analyse the query.
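
As an aside, if an approximate number is good enough, you can read the planner's statistics instead of counting. This is a general Postgres technique and not part of the measurements above; the estimate is only as fresh as the last ANALYSE or autovacuum run.

-- Approximate row count from the statistics catalog (very cheap)
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'hits';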

Postgres execution plan

Postgres uses a planner to analyse a query and decide what the cheapest way to execute it might be. By prefixing a query with EXPLAIN ANALYSE you can output the planner's decisions together with the costs of the actual execution. Slightly reformatted for readability, the output of the count query looks like this.

EXPLAIN ANALYSE SELECT count(*) FROM hits;
Aggregate
  (cost=1886943.00..1886943.01 rows=1 width=0)
  (actual time=63654.115..63654.115 rows=1 loops=1)
  ->  Seq Scan on hits
    (cost=0.00..1636943.00 rows=100000000 width=0)
    (actual time=48.896..57394.584 rows=100000000 loops=1)
Planning time: 50.857 ms
Execution time: 63654.274 ms

We read the plan from bottom to top. The query itself took 63654.274 ms, while the planner needed 50.857 ms before the actual execution could start. The rest of the plan is separated into blocks, each labeled with an action. Reading bottom-up, the first block is a Seq Scan on hits, annotated with two additional pieces of information in parentheses. For what follows, it's enough to understand the first two attributes.

  1. Estimated vs. actual costs: cost=0.00..1636943.00 vs. 48.896..57394.584
  2. Estimated vs. real number of rows output: rows=100000000 vs. rows=100000000

The planner's estimated costs have no unit, whereas the actual values are measured in milliseconds. In both cases, the first number describes the startup cost (or time) before the first row can be output, and the second number the cost (or time) until execution runs to completion. Most of the time we are interested in the actual execution time, but always check that the planner's estimates are in the same ballpark as the actual values. For the inspected block this means that the first row appeared after 48.896 ms, while the complete scan took over 57 seconds. After all rows were fetched, they were counted in the Aggregate block, which could only produce its result once the sequential scan had finished at 57394.584 ms.

Now that we have seen the output without an index, we add one in the hope of magically speeding up execution. Creating a primary key in Postgres adds an index with a unique constraint.

ALTER TABLE hits ADD PRIMARY KEY (id);

Unfortunately, running the same query again changes nothing. To understand why, we need to take a closer look at what a sequential scan actually is.

Sequential scan vs. index only scan

Postgres stores every table and index as an array of fixed-size pages on disk. The simplest way to find the row with ID 15 is to walk through every page and check whether it contains that row. If the row happens to be in the last page of the array, this gets expensive. This method is called a sequential scan.

Now let's look at another approach, the index scan: if we know we will be searching by ID, we can store each ID together with its page address in a special data structure. By default Postgres uses a balanced tree (B-tree) for this. Searching a balanced tree is very fast, and a matching tuple resolves to a page address from which the complete row can be read.

Furthermore, it's important to know that reading many pages in sequence is much faster on a hard drive than reading the same pages in random order, because the disk head moves less. The cheapest option of all is an index only scan, which doesn't need to load any table row at all because all required columns are contained in the index.

Let's go back to our hits table to see this in practice. We postpone the count problem for a moment and look at a simple SELECT with a condition on the ID, the column we already created an index for. First we compare the execution times of an index only scan and a sequential scan; to force the latter, we need to disable some plan types.

EXPLAIN ANALYSE SELECT id FROM hits WHERE id < 10000;
Index Only Scan using hits_pkey on hits
    (cost=0.57..274.59 rows=9487 width=4)
    (actual time=22.931..79.963 rows=9999 loops=1)
   Index Cond: (id < 10000)
   Heap Fetches: 0
 Planning time: 150.086 ms
 Execution time: 80.296 ms

SET enable_indexscan = OFF;
SET enable_bitmapscan = OFF;
EXPLAIN ANALYSE SELECT id FROM hits WHERE id < 10000;
 Seq Scan on hits
    (cost=0.00..2523885.70 rows=9487 width=4)
    (actual time=53.630..131530.808 rows=9999 loops=1)
   Filter: (id < 10000)
   Rows Removed by Filter: 99990001
 Planning time: 15.590 ms
 Execution time: 131531.165 ms

As you can see, there is a big difference between the index only scan (80 ms) and the sequential scan (131531 ms). Next we compare a regular index scan with a sequential scan. To prevent Postgres from using an index only scan, we select more columns than the index contains.

EXPLAIN ANALYSE SELECT id, country_code FROM hits WHERE id < 10000;
Index Scan using hits_pkey on hits
  (cost=0.57..398.59 rows=9487 width=7)
  (actual time=0.011..126.566 rows=9999 loops=1)
 Index Cond: (id < 10000)
Planning time: 128.091 ms
Execution time: 126.919 ms

SET enable_indexscan = OFF;
SET enable_bitmapscan = OFF;

EXPLAIN ANALYSE SELECT id, country_code FROM hits WHERE id < 10000;
Seq Scan on hits
  (cost=0.00..2523885.70 rows=9487 width=7)
  (actual time=47.433..131746.804 rows=9999 loops=1)
 Filter: (id < 10000)
 Rows Removed by Filter: 99990001
Planning time: 128.130 ms
Execution time: 131747.157 ms

The index scan is much faster. As a next step, we change the condition so that it matches all rows.

SET enable_seqscan = OFF;
EXPLAIN ANALYSE SELECT id, country_code FROM hits WHERE id > 0;
Index Scan using hits_pkey on hits
  (cost=0.57..4120665.15 rows=99999976 width=7)
  (actual time=0.010..100631.736 rows=100000000 loops=1)
 Index Cond: (id > 0)
Planning time: 127.865 ms
Execution time: 103648.355 ms

SET enable_seqscan = ON;
EXPLAIN ANALYSE SELECT id, country_code FROM hits WHERE id > 0;
Seq Scan on hits
  (cost=0.00..2523885.70 rows=99999976 width=7)
  (actual time=0.344..128026.136 rows=100000000 loops=1)
 Filter: (id > 0)
Planning time: 3.234 ms
Execution time: 131079.683 ms

The index scan is still faster, but the relative difference is far smaller. The reason is the random-read behaviour mentioned above. Since the planner here chooses the sequential scan over the faster index scan, you need to adjust the planner configuration to help it make the right decision. We skip the details, as they are a topic of their own, but keep in mind that you should also tune runtime resources to your system to get the best performance.
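
For illustration, these are the kind of planner settings involved in that decision. The values below are generic examples, not the configuration used for these measurements.

-- Lowering random_page_cost makes random reads (and thus index scans) look
-- cheaper to the planner, e.g. on SSDs or with a large cache. Example values only.
SET random_page_cost = 1.5;         -- default is 4.0, tuned for spinning disks
SET effective_cache_size = '8GB';   -- how much data the OS cache is likely to hold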

Back to the count query: we encounter the same planner estimation problem as before. The planner prefers a sequential scan over the much cheaper index only scan.

SET enable_seqscan = OFF;
EXPLAIN ANALYSE SELECT count(*) FROM hits;
Aggregate
  (cost=2846817.45..2846817.46 rows=1 width=0)
  (actual time=23349.313..23349.313 rows=1 loops=1)
 ->  Index Only Scan using hits_pkey on hits
   (cost=0.57..2596811.61 rows=100002336 width=0)
   (actual time=60.504..18335.820 rows=100000000 loops=1)
       Heap Fetches: 0
Planning time: 142.866 ms
Execution time: 23349.428 ms

At 23349 ms, the index only scan is more than twice as fast as the sequential scan (63654 ms). But this number is a little misleading, because we have so far ignored the heap fetches an index only scan may need to make. The number of heap fetches depends on the visibility map, discussed in the next section.

Visibility Map (VM)

Postgres needs to check whether the rows referenced by an index are actually visible to the running query, since other transactions may have modified them. For this it maintains a compact visibility map (VM): DML operations mark the pages they touch as not all-visible, so an index only scan knows for each page whether it must visit it on disk to check the contained rows. Such visits are called heap fetches, and an index only scan is only fast if it needs few of them. Postgres runs a background daemon called autovacuum, which periodically updates a table's visibility map; you can also run VACUUM manually. With an up-to-date visibility map the planner can choose an index only scan over a sequential scan, because it won't need many heap fetches.
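
To see how well the visibility map currently covers a table, you can compare relallvisible with relpages in pg_class. This check is an addition for illustration and was not part of the original measurements.

-- How many of the table's pages are marked all-visible in the VM?
SELECT relpages, relallvisible
FROM pg_class
WHERE relname = 'hits';

-- Update the visibility map (and statistics) manually:
VACUUM ANALYSE hits;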

For our previous count the number of heap fetches was 0, which means the hits table is fully vacuumed. Now we run a pseudo-update on every row and check the execution time of the index only scan again.

UPDATE hits SET country_code = country_code;
SET enable_seqscan = OFF;
EXPLAIN ANALYSE SELECT count(*) FROM hits;
Aggregate
   (cost=5044201.45..5044201.46 rows=1 width=0)
   (actual time=602128.776..602128.776 rows=1 loops=1)
 ->  Index Only Scan using hits_pkey on hits
   (cost=0.57..4794195.61 rows=100002336 width=0)
   (actual time=154.424..595217.106 rows=100000000 loops=1)
       Heap Fetches: 199999944
Planning time: 180.733 ms
Execution time: 602129.478 ms

At 602129 ms it is now nearly ten times slower than the sequential scan. In conclusion, the execution time of a count using an index depends on the planner estimating correctly and on how many pages are marked as all-visible in the VM. Let's stop here with the count query and move on to bitmap index scans and multicolumn indices.

Bitmap Index Scan and Bitmap Heap Scan

Often you see a bitmap index scan followed by a bitmap heap scan. The output looks like the following.

SET enable_indexscan = OFF;
EXPLAIN ANALYSE SELECT id, country_code FROM hits WHERE id < 10000;
Bitmap Heap Scan on hits
   (cost=425.48..39329.72 rows=10440 width=7)
   (actual time=147.553..908.460 rows=9999 loops=1)
 Recheck Cond: (id < 10000)
 Heap Blocks: exact=66
 ->  Bitmap Index Scan on hits_pkey
   (cost=0.00..422.87 rows=10440 width=0)
   (actual time=81.640..81.640 rows=9999 loops=1)
       Index Cond: (id < 10000)
Planning time: 216.926 ms
Execution time: 909.083 ms

The index doesn't contain all requested columns, so the missing ones have to be loaded from the table afterwards. While an index scan performs random reads, a bitmap heap scan reads the pages in physical order. For this it needs a bitmap sorted by physical location, which is exactly what a bitmap index scan produces: it checks the condition against every tuple in the index and stores the matches as a compressed bitmap. This also allows bitmaps from different indices to be combined before loading the corresponding rows from disk, as sketched below.
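
To illustrate that last point, combining bitmaps requires a second index. The single-column index below is hypothetical and only serves this illustration; it is not part of the benchmark setup and is dropped again afterwards.

-- Hypothetical extra index, only to demonstrate bitmap combination
CREATE INDEX hits_country_code_idx ON hits (country_code);

-- Depending on selectivity and statistics, the planner may combine the two
-- bitmaps with a BitmapAnd before visiting the heap
EXPLAIN SELECT id, country_code
FROM hits
WHERE id < 10000 AND country_code = 'US';

DROP INDEX hits_country_code_idx;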

Back to the query from the example above: we can speed it up by adding a multicolumn index.

CREATE INDEX ON hits(id, country_code);
VACUUM hits;
EXPLAIN ANALYSE SELECT id, country_code FROM hits WHERE id < 10000;
 Index Only Scan using hits_id_country_code_idx on hits
    (cost=0.57..229.53 rows=7141 width=7)
    (actual time=38.125..110.524 rows=9999 loops=1)
   Index Cond: (id < 10000)
   Heap Fetches: 0
 Planning time: 346.084 ms
 Execution time: 110.878 ms

Note that we VACUUM the table again; otherwise the planner won't choose the index only scan. Finally, we INSERT a row again and see that with two indices the execution time has increased from 171 ms to 332 ms. Many queries in nanomize were more complex to optimize, but the simplified queries above required the same knowledge and strategy.

We still know basic Rails development

Last year we successfully launched the PURA App, a medium-sized project for a medium-sized enterprise. PURA is a commercial cleaning company based in Neusäß whose catchment area covers all of southern Germany.

Due to its size (the company has about 1400 employees), PURA faced some organizational challenges. Managers easily lost track of which objects needed their support. Heaps of paper checklists were piling up and becoming increasingly hard to manage. PURA's quality management depended on the oral reports of object managers, as did their customer satisfaction feedback.

They needed a lightweight solution that would combine all these object management activities into a single system. So we worked out the requirements with them, then designed a solution and implemented the PURA App.

Simple Rails, Powerful Technology

The PURA App is a simple Rails 4 application with four basic models, supplemented by a handful of nested models. Each of the basic models has a standard CRUD. It has authentication and simple authorization. So far, a typical greenfield project.

However, we used some exciting new gems. katapult took the cumbersome project setup off our shoulders. After about half an hour of application model definition, katapult generated a basic yet fully runnable application that could already manage users, objects and visits, and even came with specs and feature tests. From this raw workpiece we crafted the final application with all its domain-specific quirks and specialties.

A second major innovation was the use of Up.js, a Javascript library that enhances applications with server-generated content. Instead of moving the whole application into the browser, as Angular, Ember & co. do, it brings a clever system of content preloading and substitution, vastly improving the speed and responsiveness of the application.
While still writing a good ol' Rails application with Rails controllers and views, we created an application that is as snappy as previously only pure client-side apps could be. And if a browser has Javascript disabled, Up.js degrades gracefully, leaving the user talking to Rails with no extra effort.

While object managers track cleaning quality and customer satisfaction with their tablets on site, the company's management coordinates and evaluates the customer care from the backoffice. So we added a simple role system with Consul and optimized the application for usage on tablet and desktop, as these are the main interfaces. As a result, the application has a well-structured, lucid interface that adapts to the user's role and the size of the display.

Fully operational

An often-overlooked part of creating a web application is the need to host it once development is done. Our long-standing experience in hosting more than 50 Rails applications convinced PURA to trust us, and that trust has paid off: in more than half a year they've had no downtime, while any disclosed vulnerabilities in either Rails or Unix were fixed instantly. That's what we call rails complete.

Still down to earth

Over the years, our portfolio has grown into all areas of web development. From simple single-purpose web applications we've expanded to a huge content management ecosystem for Audi, a highly-available Smart TV controller for Pro7, a portfolio of various APIs for another customer, a large statistical data evaluation system and many more.

Compared to these, PURA's challenges were simple. That's why it took us less than two months to build an application that serves their needs just right, helping them improve the services they offer to their own customers.

Surviving the upgrade pace of Rails

This post is an excerpt from our e-book Growing Rails Applications in Practice, now available through The Pragmatic Programmers and Leanpub.


In contrast to languages such as Java, the Ruby on Rails ecosystem has little tradition of keeping APIs backward compatible. Hence, new versions of Rails and Ruby gems often break existing code, making an upgrade very time intensive.

Not upgrading is not an option either. Once you get too far behind the latest version of Rails you will not receive any further security updates. Given the severity of some of the vulnerabilities that have been disclosed in the past, you probably do not want to expose an unpatched Rails application to the open Internet.

There are products like our own Rails LTS that offer security patches for old versions of Rails. However, if you want to take advantage of the latest features, you will find yourself locked in an eternal rat race of upgrading your application dependencies every few months.

This chapter gives some advice for dealing with this situation.

Gems increase the cost of upgrades

When adding a new gem dependency, consider the cost of upgrading that gem through the lifespan of your application. Will the gem's author still be interested in maintaining the library two years down the road? When push comes to shove, would you be willing to replace that gem or take over maintenance if there is no version compatible with a new version of Rails?

Be aware of the different upgrade costs of libraries that provide low-level abstractions versus those that offer highly coupled mini frameworks. For instance, a library that supplies geographical calculations might not even depend on Rails and is unlikely to ever break during an upgrade, whereas a gem that dynamically generates an admin backend for your application will almost certainly break after a new Rails release due to its many hooks into the internals of Rails.

Upgrades are when you pay for monkey patches

Monkey patches (or "freedom patches") describe the practice of opening up an existing class from a gem dependency and overriding some method with custom behavior. While this can be a way to quickly move on when encountering a fatal library bug, monkey patches are usually the first thing that break when upgrading Rails.

Be careful when monkey patching the internals of a class that does not belong to you. You will pay for it later.

If you find a bug in a gem that you like, consider forking the gem, committing your fix with a test and creating a pull request to the original author. This way your monkey patch can soon be replaced with the next official version of the gem. It also makes for good karma.

Don't live on the bleeding edge

There is no need to upgrade to a new version of Rails on the day it becomes available. Wait until a major Rails release has matured into a couple of patch levels before upgrading, i.e. don't upgrade to Rails 6.0.0 right away, but wait until Rails 6.0.3 is published.

This also gives authors of your other gem dependencies a chance to update their library to integrate with the new version of Rails.
