Server-side readability with node.js
For people just looking for code: node-readability on github
Readability by Arc90 is a fantastic javascript tool that makes web pages easier and more enjoyable to read. It removes the clutter around the article content and applies legible and beautiful styles. Apple has incorporated it into Safari Reader. Many other apps have integrated readability, too.
While it's fast and pleasant in a modern desktop browser, the performance on resource-limited devices is still unsatisfactory. It often takes several seconds to process a page on my iPad, and on a Nexus One the time is much longer.
Several efforts have been made to port the readability algorithm to the server side, including:
- Ruby ruby-readability
- Python python-readability hn.py
- C# nreadability
- PHP php-readability
It would be nice if we could run readability.js in a server-side javascript host environment so that:
- The result is as close to that in browsers as possible
- Minimal changes are required
So I took a stab at adapting readability.js to the server side using node.js and jsdom. The code is available on github and there is a live demo (coming soon). The result is quite good in my testing, except that it's a bit slow.
Here is an example usage:
```javascript
var fs = require('fs'),
    readability = require('./lib/readability.js');

var html = fs.readFileSync('test/nytime.html', 'utf-8');

// This is a very early example. The API is subject to change.
readability.parse(html, 'http://www.example.com/somepage.html', function(result) {
    console.log(result.title, result.content);
});
```
Porting readability.js to node.js
There isn't a full browser environment available for node.js.
Features that cannot easily be made to work are disabled for now, e.g. fetching next pages and iframe support.
Another reason for disabling these two features is to keep the code synchronous. node.js is a single-threaded, event-driven execution environment; there is nothing like locking. But readability is written as a singleton, one-shot object, so I have to reset its state before every run. If the code stopped halfway to wait for I/O, it might be re-entered before the current run finishes, which must surely be avoided.
Some code paths are disabled because they don't make sense in a non-browser environment and may cause problems.
Some NodeList iteration loops are slightly modified to work around a jsdom limitation where a live NodeList isn't updated automatically after DOM changes when accessed via indexing.
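To illustrate the kind of change involved (this is a sketch, not the actual patch), one safe workaround is to snapshot the live NodeList into a plain array before any loop that mutates the DOM:

```javascript
// Copy the live NodeList into a static array first, so indexed access
// inside the loop can't go stale while we remove nodes.
var links = document.getElementsByTagName('a');
var snapshot = Array.prototype.slice.call(links);
for (var i = 0; i < snapshot.length; i++) {
    snapshot[i].parentNode.removeChild(snapshot[i]);
}
```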
Readability looks for commas (,) when calculating scores. I've extracted the hard-coded literal into a variable so that it can be configured to match punctuation in multiple languages:

```javascript
var reComma = /[\uff0c,]/; // the Chinese comma, too
```
Readability's UI elements, header and footer aren't included in the result. This is done merely to allow more flexible usage. I'd like to include an acknowledgement to Arc90 in any final product, and I suggest you all do the same.
Most of my time was spent on performance optimization. See below.
Performance
The first working version was incredibly slow. It commonly took 5-10 seconds to process a moderately sized page, and certain pages could take minutes, as if the process had frozen. While node.js uses the very fast V8 javascript engine, the DOM implemented in jsdom is pure javascript and isn't optimized for performance yet.
I added simple profiling code so that I could see how much time each step takes and find the code paths most worth optimizing. Below is sample output for http://en.wikipedia.org/wiki/Ruby:
```
19 Nov 20:57:32 - ---DOM created
19 Nov 20:57:32 - 0 seconds [Remove all stylesheets]
19 Nov 20:57:32 - 0 seconds [Turn all double br's into p's]
19 Nov 20:57:32 - 0.05 seconds [prepDocument]
19 Nov 20:57:33 - 0.455 seconds [grabArticle nodePrepping]
19 Nov 20:57:33 - 0.015 seconds [grabArticle calculate scores]
19 Nov 20:57:33 - 0.227 seconds [grabArticle find top candidate]
19 Nov 20:57:33 - 0.033 seconds [grabArticle look through its siblings]
19 Nov 20:57:34 - 0.043 seconds [cleanConditionally]
19 Nov 20:57:34 - 0.2 seconds [cleanConditionally]
19 Nov 20:57:34 - 0.032 seconds [cleanConditionally]
19 Nov 20:57:34 - 0.054 seconds [cleanConditionally]
19 Nov 20:57:34 - 0.026 seconds [prepArticle Remove extra paragraphs]
19 Nov 20:57:34 - 0.206 seconds [prepArticle innerHTML replacement]
19 Nov 20:57:34 - 1.372 seconds [prepArticle]
19 Nov 20:57:34 - 2.407 seconds [grabArticle]
19 Nov 20:57:34 - 2.53 seconds [================= TOTAL]

Profiling summary
==========================
   1  2.530  ================= TOTAL
   1  0.050  prepDocument
   1  0.000  Remove all stylesheets
   1  0.000  Turn all double br's into p's
   1  2.407  grabArticle
   1  0.455  grabArticle nodePrepping
   1  0.015  grabArticle calculate scores
2338  0.071  getInnerText
   1  0.227  grabArticle find top candidate
 105  0.259  getLinkDensity
   1  0.033  grabArticle look through its siblings
   1  1.372  prepArticle
   4  0.329  cleanConditionally
   1  0.026  prepArticle Remove extra paragraphs
   1  0.206  prepArticle innerHTML replacement
```
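The profiling code itself is nothing fancy. A minimal sketch of the idea (the names here are illustrative, not the actual node-readability code): wrap each step in a timer, log the elapsed time, and accumulate per-label counts and totals for the summary.

```javascript
var util = require('util');
var profile = { counts: {}, totals: {} };

// Run fn, log how long it took, and record it under the given label.
function timed(label, fn) {
    var start = Date.now();
    var result = fn();
    var seconds = (Date.now() - start) / 1000;
    profile.counts[label] = (profile.counts[label] || 0) + 1;
    profile.totals[label] = (profile.totals[label] || 0) + seconds;
    util.log(seconds + ' seconds [' + label + ']');
    return result;
}
```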
As shown in the summary, getInnerText is called many times; it's actually one function that I made a hundredfold faster, cutting the running time by seconds.
element.textContent is rather slow in jsdom, so I replaced it with a tree walker:
```javascript
function TextWalker(node, func) {
    function walk(cur) {
        var children, len, i;
        if (cur.nodeType == 3) {        // text node: hand it to the callback
            func(cur);
            return;
        } else if (cur.nodeType != 1) { // skip anything but elements
            return;
        }
        children = cur.childNodes;
        for (i = 0, len = children.length; i < len; i++) {
            walk(children[i]);
        }
    }
    walk(node);
}

var textContent = '';
TextWalker(e, function(cur) {
    textContent += cur.nodeValue;
});
```
Tree walkers like the one above are also used in other places to speed up NodeList iteration. As we know, the getElementsByTagName() family of functions returns a live NodeList, which is updated automatically when the DOM changes. Live NodeLists are very fast in most browsers because of highly efficient caching; that's why getElementsByTagName() is much faster than querySelectorAll().
But in jsdom, things are quite the opposite. Keeping live NodeLists up to date is very expensive in jsdom because there is no caching at all. In a tight loop that modifies the DOM, live NodeLists are simply unaffordable. Update: there is now simple version-number-based caching, but a tree walker is still much faster.
So a carefully crafted tree walker is used to replace the live NodeList in the "node prepping" part of grabArticle(). This optimization is significant, reducing the running time for certain pages from several minutes to seconds.
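As a rough illustration of the technique (an assumption-laden sketch, not the actual grabArticle() code), a one-shot walk can stand in for a live NodeList wherever the loop mutates the DOM:

```javascript
// Collect matching elements in a single pass and return a static
// array, so later DOM mutations can't invalidate the iteration.
function elementsByTag(root, tagName) {
    var found = [], upper = tagName.toUpperCase();
    (function walk(cur) {
        if (cur.nodeType == 1 && cur.tagName == upper) {
            found.push(cur);
        }
        var children = cur.childNodes, i;
        for (i = 0; i < children.length; i++) {
            walk(children[i]);
        }
    })(root);
    return found;
}
```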
So far so fast - 1.1 seconds per page
These optimizations turned out to be very effective. In my testing of 140 pages with an average size of 58KB, collected from digg, delicious and hacker news, the average time taken per page is about 1.1 seconds on a Mac Mini (2.4 GHz Intel Core 2 Duo).
The task is CPU bound, and the running time is often not linear in DOM size: the content and structure of the DOM can greatly impact performance. The slowest case is when readability fails to extract enough content and reruns the algorithm with more aggressive settings. I believe that with more tuning specific to jsdom, the running time can be reduced further.
Limitations
While the port gives good results for most pages, node.js + jsdom isn't a full browser environment, so the missing features impose limitations.
A server-side DOM environment doesn't understand CSS. It would be reasonable for readability to make use of CSS information; for example, it could discard invisible or tiny elements. Although readability currently makes little use of CSS information, this could become an important limitation in the future.
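For instance (a purely hypothetical sketch, since jsdom has no layout engine or computed styles), a CSS-aware port could filter elements like this in a real browser environment:

```javascript
// Needs real computed styles, which jsdom can't currently provide.
function isHidden(el) {
    var style = el.ownerDocument.defaultView.getComputedStyle(el, null);
    return style.display === 'none' || style.visibility === 'hidden';
}
```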
The next step
Coming from the world of servers and back-end systems, I care about performance most. I'm going to study jsdom more closely to understand its performance characteristics better. For now, some options on my mind include:
- Replace live NodeList iteration with DOM tree traversal when applicable.
- Combine DOM traversals and do several things in one go (see the sketch after this list).
- Avoid large innerHTML assignment when possible. HTML parsing and DOM creation are expensive.
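To make the second idea concrete, here is a hypothetical sketch (not existing node-readability code) that computes two of the inputs readability needs, total text length and link text length, in a single walk instead of one traversal per metric:

```javascript
// One pass gathers everything a link-density calculation needs,
// instead of walking the subtree once for each metric.
function collectStats(root) {
    var stats = { textLength: 0, linkTextLength: 0 };
    (function walk(cur, inLink) {
        if (cur.nodeType == 3) {                 // text node
            stats.textLength += cur.nodeValue.length;
            if (inLink) {
                stats.linkTextLength += cur.nodeValue.length;
            }
            return;
        }
        if (cur.nodeType != 1) return;           // elements only beyond here
        var isLink = inLink || cur.tagName == 'A';
        for (var i = 0; i < cur.childNodes.length; i++) {
            walk(cur.childNodes[i], isLink);
        }
    })(root, false);
    return stats; // link density = linkTextLength / textLength
}
```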
Regarding readability.js itself, I'd like to suggest a few improvements, and I'd be happy to contribute:
- Separate the library part from the bookmarklet part. The core algorithms can be extracted as a library, which would allow the core function to be used with different front ends, e.g. a browser extension or a widget.
- Organize the core into several modules and break big functions into smaller ones.
- Add some hooks and more configuration options so that it's possible to do page-specific optimization in a plug-in manner.
- Currently, readability is a singleton, one-shot object. It would be nice to make it class-based so that state management is easier in an event-driven architecture (a rough sketch follows this list).
- Be unobtrusive. It would be nice if the original DOM could be left intact.
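Here's a rough, hypothetical sketch of what the class-based shape could look like; every name below is illustrative, not part of any existing API:

```javascript
// One instance per page: no singleton state to reset between runs,
// and concurrent parses can't step on each other.
function Readability(html, url) {
    this.html = html;
    this.url = url;
    this.flags = {}; // per-instance configuration and state
}

Readability.prototype.parse = function(callback) {
    // The real implementation would build a DOM from this.html and
    // run the extraction; this stub only shows the shape.
    callback({ title: '', content: this.html });
};

new Readability('<p>Hello</p>', 'http://www.example.com/').parse(function(result) {
    console.log(result.content);
});
```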
At the end of this post, I'd like to thank Arc90 for building such a wonderful tool, and Elijah Insua (@tmpvar), whose jsdom has opened up many seriously cool possibilities for server-side javascript.