This blog has been archived. Our writing has moved to makandra cards.
The blog of makandra, a Ruby on Rails development team

The performance impact of removing jQuery (with pictures)

I focus my open-source efforts on Unpoly, the unobtrusive JavaScript framework for server-side web applications.

In the past 18 months I was able to significantly boost Unpoly's JavaScript performance. The improvements correlated so strongly with the incremental removal of jQuery that I felt compelled to share some pictures.

Step 1: All in jQuery

Here is a flame graph for following a link in Unpoly 0.37 at 6x CPU throttling without network latency. This version used almost every function in jQuery's kit. DOM manipulation, promises and AJAX were all tunneled through jQuery:

Performance trace for Unpoly 0.37.0

You see the green dots on the bottom right? That's when the browser starts painting pixels. Everything before that is just waiting for the JavaScript.

Step 2: Native AJAX and Promises

This is Unpoly 0.50. It uses native APIs for AJAX and Promises, but still tunnels all DOM access through jQuery. Note how the average stack depth has decreased significantly, but we didn't gain a meaningful performance boost:

Performance trace for Unpoly 0.50.0
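The AJAX side of this step can be sketched like this. `requestText()` is a hypothetical helper name of mine, and the network call is simulated so the snippet runs standalone:

```javascript
// Unpoly 0.50 swapped $.ajax() and $.Deferred for the native fetch() and
// Promise APIs. Sketch of the native side; the response is simulated here
// so no network is needed:
function requestText(url) {
  // Real code would be: return fetch(url).then(response => response.text())
  return Promise.resolve('<p>fragment for ' + url + '</p>');
}

requestText('/users').then(function(html) {
  // A framework would now swap `html` into the page.
  console.log(html);
});
```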

Step 3: Slow functions removed

This is Unpoly 0.56. This version had been optimized as far as I knew how, without removing jQuery entirely. Although jQuery is still used to manipulate elements, some slow jQuery functions like show(), hide(), css(), addClass() and removeClass() were replaced with their native counterparts:

Performance trace for Unpoly 0.56.0

Note that some of the improvements from 0.50 to 0.56 were gained by optimizing my own code and were unrelated to jQuery. However, I hit a glass ceiling here: everything that could be optimized had been optimized. The only way to get faster was to remove jQuery.
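The jQuery-to-native replacements from this step look roughly like the mapping below. Since the DOM only exists in a browser, `element` here is a tiny stub of mine standing in for a real element, so the mapping can be shown in runnable form:

```javascript
// A stub standing in for a real DOM element (classList and style are
// mocked with plain objects so this runs outside a browser):
const element = {
  classList: {
    _classes: new Set(),
    add(name) { this._classes.add(name); },        // like DOMTokenList#add
    remove(name) { this._classes.delete(name); },  // like DOMTokenList#remove
    contains(name) { return this._classes.has(name); },
  },
  style: {},
};

// $(element).addClass('up-active')    →  element.classList.add('up-active')
element.classList.add('up-active');

// $(element).removeClass('up-active') →  element.classList.remove('up-active')
element.classList.remove('up-active');

// $(element).css('display', 'none')   →  element.style.display = 'none'
// (this also replaces $(element).hide())
element.style.display = 'none';

// $(element).show()                   →  element.style.display = ''
element.style.display = '';
```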

Step 4: jQuery removed

Today I released Unpoly 0.60, which no longer uses jQuery:

Performance trace for Unpoly 0.60.0

It is not my intention to demean jQuery in any way. I've been an avid user for 10 years and I still work with it every week. And while the native DOM API has made great progress, it still can't compete with jQuery's productivity in many areas. But if you want to perform well, especially on mobile devices, you should take a look at where your CPU spends its cycles.

Introductory interview with Michael

Hi there! I'm Michael Leimstädtner, and some of you might already know me, although I never got around to introducing myself on the blog. I was born and raised in South Tyrol (in northern Italy) before I settled on Augsburg as a place to study and live. In my spare time, I love spending time with friends, and once in a while I'm off on some adventurous vacation. But most of the time, I'm not, just like everyone else.

How did you first learn about makandra and what drew you to makandra originally?

My first contact with makandra was in 2015, when they hosted a session of the local meetup group "Web and Wine". I remember Henning presenting an early draft of the JavaScript framework Unpoly. The professionalism of everyone involved might have been the reason for me to come here.

What is your role at makandra?

In 2017 I joined makandra as a trainee; since then I've been employed as a Software Engineer. You could describe my daily work as crafting new web applications and features while keeping the rust out of existing ones.

What are three words to describe makandra?

Progressive, for constantly seeking new technologies and solutions, yet also cautious, so as not to bet on the wrong horse. And since this knowledge is constantly shared, a third attribute describing makandra would be well organized.

What challenges are you looking forward to at makandra?

Most projects developed by makandra are hosted and maintained for many years, and some keep growing in complexity all that time. I'm looking forward to learning how to manage multiple projects over a long period of time.

Do you have a favorite newspaper or blog?

For technical news, I read Hacker News on a regular basis.

What books did you read last?

Robinson Crusoe. The Kingkiller Chronicle.

What tools are you using that you don't want to miss anymore?

Vimium, Synapse and a good set of key bindings in every other tool.

Motto or personal mantra?

A positive state of mind is going to bring the desired result.

Best vacation you've been to?

Two years ago I went on quite a large journey with two friends of mine. We followed the Danube (Donau) from Augsburg all the way to the Black Sea, by bike.

If you had to eat one meal, every day for the rest of your life, what would it be?

That would most definitely be Semmelknödel in every possible flavor. They are best though when prepared with either some alpine cheese or bacon.

A matter of the heart

Almost everyone at makandra works with Ruby almost every day. We do Ruby on Rails development and Ruby hosting. We also take care of old versions of Rails with Rails LTS.

Ruby is a fun and mature language, now over 20 years old. It was and still is constantly developed further: for example Ruby 2.6, which was recently released, supports endless ranges. The same holds true for Rails, one of the most popular web frameworks for Ruby.

Behind all this flourishing is a lot of work, done by passionate people. There is a huge and active open-source community, of which we are a part*.

But what would a community be without community events? At such events you can meet like-minded people and dive deep(er) into a language, technology or general topic. I find such events very inspiring and motivating!

Ruby on Ice is such an event, and we are supporting it not only through sponsoring, but also with manpower. Four of my colleagues and I found the idea of a Ruby conference in southern Germany so appealing that we decided to dedicate a lot of our time to making it happen again.

The event will take place from February 22nd to 24th, 2019 in Tegernsee, and you still have the chance to participate. We are looking forward to a weekend full of interesting talks, yummy food, beautiful scenery and great people!

*We run our own knowledge base, makandra cards, which now contains more than 3500 cards (= posts) with tips, tricks, how-tos and tutorials related to all things Ruby, Rails and web development. We also maintain several open-source gems.

Investigating slow PostgreSQL Index Only scans

In July 2018 one of our hosting customers was confronted with slow PostgreSQL queries. Statements usually taking under 1 ms took multiple seconds. Our customer already found the slow statements and started to investigate the issue.

We jumped in later and started the usual procedure to analyse slow PostgreSQL statements, using EXPLAIN ANALYZE. The following output was shown for one of the problematic queries:

Production database (Primary):

Aggregate  (cost=32988.28..32988.29 rows=1 width=8) (actual time=2877.870..2877.870 rows=1 loops=1)
   ->  Index Only Scan using index_shipments_on_state on shipments  (cost=0.43..32905.89 rows=32955 width=0) (actual time=0.048..2877.089 rows=6423 loops=1)
         Index Cond: (state = ANY     ('{init,waiting_for_manual_correction,unresolved,unassigned,price_not_calculatable,dispatching_unconfirmed,dispatching_confirmed,pending_check_out,dispatched,picked_up,undeliverable,passed_on}'::text[]))
         Heap Fetches: 1339084
 Planning time: 0.281 ms
 Execution time: 2877.928 ms
(6 rows)

The problem did not occur when sourcing a production dump on local development machines. A restored PITR backup on our PITR host did not suffer from this problem either:

Local database of the customer:

 Aggregate  (cost=24312.53..24312.54 rows=1 width=8) (actual time=6.948..6.948 rows=1 loops=1)
   ->  Index Only Scan using index_shipments_on_state on shipments  (cost=0.43..24265.94 rows=18637 width=0) (actual time=0.031..5.988 rows=6733 loops=1)
         Index Cond: (state = ANY     ('{init,waiting_for_manual_correction,unresolved,unassigned,price_not_calculatable,dispatching_unconfirmed,dispatching_confirmed,pending_check_out,dispatched,picked_up,undeliverable,passed_on}'::text[]))
         Heap Fetches: 6771
 Planning time: 0.322 ms
 Execution time: 6.999 ms
(6 rows)

We confirmed that the Index Only Scan was used – which should be fast. However, we also saw a high count of Heap Fetches on the production server. This is usually a bad sign and explains why the query was slow. To understand why, we need to understand a few details about how PostgreSQL stores data and handles Index Only Scans. Here's a simple explanation:

Let's imagine a transaction running a read-only query on a table. The query runs for a very long time and will return multiple rows of data. If another transaction needs to update rows which are about to be returned by the running read query, we don't want it to wait for the read query to finish. On the other hand, we don't want our read query to return inconsistent data. MVCC solves this problem:

  1. To return a consistent state of the database, the running query sees only a snapshot of the data.
  2. If a row gets updated, a new row version is created with the updated data.
  3. Now there are two row versions, an old one and a new one.
  4. PostgreSQL keeps track of which row versions should be seen by which transactions.
  5. The old row version gets marked as "valid until X".
  6. The new row version gets marked as "valid from X".
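These markers can actually be inspected through PostgreSQL's hidden system columns xmin and xmax. A sketch, borrowing the shipments table from the query plans above:

```sql
-- xmin: ID of the transaction that created this row version.
-- xmax: ID of the transaction that deleted/replaced it (0 while current).
SELECT xmin, xmax, state FROM shipments WHERE id = 1;

-- After an UPDATE in a still-open transaction, a concurrent session keeps
-- seeing the old row version until that transaction commits.
```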

The still-running read query fetches the old row versions, while newer queries get the new row versions. But there is a problem: if this process continued indefinitely, our database size would grow endlessly. To prevent this, we have VACUUM. VACUUM regularly checks if there are row versions which can be removed. This is the case when there is no transaction left that needs these old row versions. So, when our read query finishes and we end our transaction, VACUUM will remove the old row versions which were updated by our second transaction (and therefore have been replaced).

VACUUM doesn't need to check every row version (called a tuple in PostgreSQL terminology) every time it runs. It maintains a Visibility Map which (briefly) stores information about tuples visible to all current and future transactions. The map doesn't contain data for each tuple, but for all tuples stored in a memory page. So if a page checked by VACUUM for "dead" tuples contains only tuples visible to all transactions, VACUUM flips the corresponding bit in the Visibility Map. The next time it runs, it doesn't need to check this page again. If an update or delete statement changes a tuple in this page, the bit is flipped back, and the next VACUUM run will check the tuples in this page again.

This is just a really short explanation of how this works. If you wish to learn more, we recommend reading the linked PostgreSQL documentation. Emanuel's blog post about PostgreSQL query optimization can give you a deeper understanding, too.
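Both the dead-tuple count and the Visibility Map coverage can be read from the statistics catalogs. A sketch, again using the shipments table from the query plans above:

```sql
-- Dead row versions waiting for VACUUM, and the last (auto)vacuum runs:
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'shipments';

-- Pages marked all-visible in the Visibility Map vs. total pages:
SELECT relname, relpages, relallvisible
FROM pg_class
WHERE relname = 'shipments';
```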

What does all this have to do with slow Index Only Scans?

Let's run a query using an Index Only Scan which returns ALL matching row versions. Each matching row must be checked to see whether it should be seen by the current transaction (MVCC). Index Only Scans consult the Visibility Map, which gets updated by VACUUM runs. PostgreSQL checks whether the page where the returned row version is stored is visible to all active transactions. If it is, we can continue with the next matching row version. If it isn't, we must fetch from the table heap (the main data storage area of the table) to check whether this row version is visible to the current transaction. This is comparatively slow, so a high amount of heap fetches is bad. There are a few explanations for a constantly high (or growing) amount of heap fetches:

  • more updates/deletes in the table than autovacuum is able to scan and clean up
  • no VACUUM for that table
  • VACUUM is not working

We checked the amount of updates and transactions on the table and didn't see any problematic behaviour. A test run of VACUUM on the table showed that it did not change anything about the heap fetches. We also checked whether autovacuum was being executed for the affected table:

SELECT schemaname, relname, last_autovacuum, last_autoanalyze, autovacuum_count, autoanalyze_count FROM pg_stat_all_tables;

Finally, a VACUUM VERBOSE led us to the root of the issue:

DETAIL:  1349534 dead row versions cannot be removed yet.

This means that there were 1349534 old row versions, but VACUUM refused to remove them because it thought they were still needed. The visibility map gets no updates for these rows because these row versions need to be checked again in the next VACUUM run. This means that our Index Only Scan cannot benefit from fast visibility map lookups. We needed to find out why these row versions were not deleted by VACUUM. One reason can be a long-running transaction (either active or idle) which still requires very old row versions. So we searched for active or idle transactions:

SELECT pid, usename, application_name, state, xact_start, query from pg_stat_activity where xact_start is not null;

Of course the query returned some transactions, but they were just a few seconds old and disappeared shortly afterwards. We did not fully trust this and started digging around with locks, searched for broken VM files, and searched again for transactions with old xmin values:

SELECT pid, datname, usename, state, backend_xmin FROM pg_stat_activity WHERE backend_xmin IS NOT NULL ORDER BY age(backend_xmin) DESC;

This took a lot of time, but it made apparent that the Replica might be responsible for the issue (even though the problem appeared on the Primary):

Our customer was using the Replica not just for failover scenarios but also to execute expensive read queries on it. These queries ran for a long time, so they got canceled when VACUUM's clean-up instructions replicated from the Primary to the Replica. PostgreSQL's binary replication works sequentially, i.e. newer changes will only be applied if the changes that came before have been applied successfully. Since we're dealing with binary replication and the Replica needs to represent the same state as the Primary, all changes, including the clean-up initiated by VACUUM, must be replicated. If a transaction on the Replica uses old row versions which are already deleted on the Primary, this causes a replication conflict.

By default, PostgreSQL will wait a short time for transactions on the Replica to finish so that it can apply the VACUUM changes afterwards. However, to prevent a long replication lag, it will kill the transaction on the Replica after a configured time period. The parameter name is max_standby_streaming_delay; its default is 30 seconds. We could have increased the time-out to accommodate the queries on the Replica, but as this Replica's main purpose was to be an automatic failover instance, this wouldn't have been a good idea.

There is an alternative: we decided to enable hot_standby_feedback to prevent this. Put briefly, this parameter makes the PostgreSQL Replica tell the Primary which row versions are still needed by transactions running on the Replica. In exchange we accepted a bit more disk usage, because VACUUM would sometimes be unable to clean up some old row versions.
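In postgresql.conf on the Replica, the two parameters discussed here look like this (the values shown are the ones relevant to this case, not a recommendation):

```
# Cancel Replica queries that block replay for longer than this (default 30s):
max_standby_streaming_delay = 30s

# Instead, report the oldest xmin still needed on the Replica to the Primary,
# so VACUUM on the Primary keeps those row versions:
hot_standby_feedback = on
```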

Suspecting that the Replica was the root of the issue, we checked for long-running transactions on the Replica. We did not find any. We even stopped the Replica, but the issue still persisted. Just to be sure, we disabled hot_standby_feedback, started a VACUUM VERBOSE, and saw that the old row versions finally got cleaned up.

The query execution time was fine again:

                                                                                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=332.62..332.63 rows=1 width=8) (actual time=3.351..3.351 rows=1 loops=1)
   ->  Index Only Scan using index_shipments_on_state on shipments  (cost=0.43..316.34 rows=6509 width=0) (actual time=0.046..2.453 rows=6411 loops=1)
         Index Cond: (state = ANY     ('{init,waiting_for_manual_correction,unresolved,unassigned,price_not_calculatable,dispatching_unconfirmed,dispatching_confirmed,pending_check_out,dispatched,picked_up,undeliverable,passed_on}'::text[]))
         Heap Fetches: 476
 Planning time: 0.271 ms
 Execution time: 3.405 ms
(6 rows)

The question why weeks- or months-old dead rows were kept still remains. If hot_standby_feedback is enabled, the Replica regularly tells the Primary which transaction ID is the oldest one it needs. If no transaction on the Replica needs any old row versions, the Replica has nothing to report to the Primary. It's hard to verify our suspicion after the fact, but it's likely that there was a dead replication slot as described here. However, since we can't check that after disabling hot_standby_feedback, there is no way to be sure. At the moment there is no other replication slot on the Primary than the active one for the hot standby (Replica). Possibly the slot got removed when disabling hot_standby_feedback. Otherwise we have no explanation for the behaviour of hot_standby_feedback.


We failed to notice hot_standby_feedback at an early point of debugging and searched for transactions from clients instead of an issue caused by the replication configuration. Even though the parameter shouldn't behave like this, we should have taken it into account at an earlier stage of debugging (e.g. after seeing that there were no long-running transactions on the Primary).

Introductory interview with Stefan


Hi, I'm Stefan Langenmaier. I'm originally from the greater Augsburg area (somewhere between Augsburg, Ulm and Donauwörth). My spare time is split between sports, cooking and tinkering with software and the machines that are supposed to run it (whether they want to or not) :). Before joining makandra I worked as a consultant for Search & Analytics in the Solr ecosystem and as a consultant for Free Software with a focus on Java and the Web.

How did you first learn about makandra and what drew you to makandra originally?

I think the first time I heard about makandra was when they started LTS support for Rails, but that was some time ago. I also landed once or twice on makandra cards for obscure web dev problems. But the connection was established through a mutual friend who said I should contact them.

What is your role at makandra?

I will be working as an operations engineer, keeping an eye on the infrastructure.

What are 3 words to describe makandra?

  • open
  • friendly
  • knowledgeable

What challenges are you looking forward to at makandra?

For the moment, I think the challenge is to get a good grasp of the existing infrastructure. In the future I hope to contribute to its reliability and performance.

Are you messy or organized?

In my opinion it's best to be, and work in, an organized state, but if you want to find a new idea or approach, it has to get messy from time to time… :)

What book did you read last?

Sechs Österreicher unter den ersten fünf. Roman einer Entpiefkenisierung. by Dirk Stermann. It's funny if you like morbid topics.

Do you recall any embarrassing moment at work?

Well there was this one time my boss thought I would try to set the office on fire…

Motto or personal mantra?

Only crazy people don't change their mind. (Not the original quote)

If you had to eat one meal, every day for the rest of your life, what would it be?

A very hard choice, but it would definitely be a Mehlspeise (I'd say this is also an established term), probably Griesschnitten with fruits.

If you could only drink one beer for the rest of your life, what would it be?

Blonde au chardonnay

What did you want to be when growing up?

Monk or architect.

Vim or Emacs? (Be careful here…)

Nano and Eclipse. Why settle for something in between?

What tools are you using that you don't want to miss anymore?

GNU/Linux, Wikipedia, OpenStreetMap

Our address:
makandra GmbH
Werner-von-Siemens-Str. 6
86159 Augsburg
Contact us:
+49 821 58866 180
Commercial register court:
Augsburg Municipal Court
Register number:
HRB 24202
Sales tax identification number:
Chief executive officers:
Henning Koch
Thomas Eisenbarth