
Comparison of JavaScript execution modes of the WebKit browser engine

Guest Partner Blogger
September 11, 2013
3 minute read time.

Nowadays web browsers are among the most widely used software tools: you can find them on devices ranging from phones and tablets to personal computers. At the heart of every browser is a browser engine. We, at the Department of Software Engineering, University of Szeged, Hungary, are long-time contributors to the well-known WebKit browser engine. We have worked on quite a few areas of WebKit, including the JavaScript engine, multicore support, graphics, and the build and test environment. In this blog post we present the different execution modes of the JavaScript engine and compare their performance.

JavaScriptCore
The JavaScript execution engine of WebKit is called JavaScriptCore. It is a profiling virtual machine that supports several optimization levels. A higher optimization level offers better runtime performance, but the trade-off is longer compilation time. Each optimization level corresponds to an execution mode, and at the moment JavaScriptCore has three of them. The basic execution mode is interpreted execution, provided by the LLInt (Low Level Interpreter) component. The next level translates the JavaScript source code into native code with the JIT (just-in-time) compiler. The last execution mode also employs a JIT compiler, but it performs costly optimizations on the JavaScript source code before it is translated to native code.

LLInt replaced the old C++ interpreter in February 2012. The new approach provides better interaction between the interpreter and native code in a mixed environment. However, LLInt requires a CPU-specific backend for each architecture (such as ARM). If no backend is available, it falls back to a C++ based implementation called CLoop. This mode is recommended only as a last resort, since it is not compatible with the higher execution modes.

When the number of invocations of a JavaScript function reaches a certain threshold, the function is translated to machine code by the JIT compiler. This machine code is executed the next time the function is invoked.

When a given function is executed a large number of times, it is recompiled by the DFG-JIT (Data Flow Graph JIT) compiler, which performs aggressive optimizations based on the profiling data collected during the previous executions of the function.
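The tier-up mechanism described above can be sketched as follows. This is an illustrative model only: the class, the thresholds, and the profiling scheme are invented for this example and do not reflect JavaScriptCore's actual implementation or tuning.

```python
# Illustrative sketch of a tiered VM promoting a hot function: the
# interpreter counts invocations and collects a profile; the baseline JIT
# kicks in at one threshold, and a profiling-driven optimizing JIT at a
# higher one. Thresholds are hypothetical, not JavaScriptCore's values.

JIT_THRESHOLD = 10    # hypothetical: invocations before baseline JIT
DFG_THRESHOLD = 100   # hypothetical: invocations before optimizing JIT

class Function:
    def __init__(self, name):
        self.name = name
        self.invocations = 0
        self.tier = "LLInt"   # start in the interpreter
        self.profile = []     # type profile gathered in the lower tiers

    def invoke(self, arg):
        self.invocations += 1
        self.profile.append(type(arg).__name__)  # record observed types
        if self.tier == "LLInt" and self.invocations >= JIT_THRESHOLD:
            self.tier = "JIT"       # translate to plain machine code
        elif self.tier == "JIT" and self.invocations >= DFG_THRESHOLD:
            self.tier = "DFG-JIT"   # recompile using the collected profile
        return arg * 2              # stand-in for the function body

f = Function("double")
for i in range(150):
    f.invoke(i)
print(f.tier)  # → DFG-JIT
```

In JavaScriptCore itself the promotion heuristics are more elaborate (they weigh loop iterations as well as call counts), but the shape is the same: cheap execution first, expensive compilation only for code that proves to be hot.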

All of these execution modes are supported by the ARM port of JavaScriptCore. We have been deeply involved in the development of this port and are responsible for the ARM-Linux support of JavaScriptCore. During the last couple of months we completed support for LLInt and DFG-JIT on the ARM instruction set, and below we present a comparison on both the ARM and Thumb-2 instruction sets.

The following figures compare these execution mode combinations. CLoop and LLInt are pure interpreted modes, while JIT represents a mode where all JavaScript source is compiled to native code. The rest are mixed modes that combine LLInt, JIT, and DFG-JIT. We measured these execution modes on three well-known benchmark suites: SunSpider, V8, and WindScorpion. Instead of absolute execution times, the figures show relative speedups compared to the ARM version of CLoop; higher values therefore mean better results.


As the figures show, enabling more sophisticated optimizations improves execution on all benchmark suites, although the speedup varies considerably. The biggest jump can be seen on the V8 benchmark suite, where the last two modes are almost 10 times faster than the reference. This is not surprising, since execution speed on the V8 benchmark has been the focus of recent JavaScriptCore development.
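For reference, the relative speedups plotted in the figures are simply the baseline's execution time divided by each mode's execution time. The numbers below are invented for illustration; they are not our measured SunSpider, V8, or WindScorpion results.

```python
# Computing relative speedup against the CLoop (ARM) baseline.
# All times are hypothetical and in seconds.

baseline = 50.0  # hypothetical CLoop (ARM) time
times = {
    "LLInt": 35.0,
    "JIT": 12.0,
    "LLInt+JIT+DFG": 5.0,
}

# speedup = baseline time / mode time; 1.0 means "same as CLoop",
# higher is better
speedups = {mode: baseline / t for mode, t in times.items()}
for mode, s in speedups.items():
    print(f"{mode}: {s:.1f}x")
```

A mode that takes a fifth of the baseline's time thus plots as a 10x speedup, which is how the V8 results above should be read.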

Regarding the two ARM instruction sets, neither is consistently faster than the other, although Thumb-2 has a slight advantage in general. Both are therefore reasonable choices for any device.

We used the Qt port of WebKit (r146983) to perform these measurements on an Odroid-X2 board (quad-core ARM Cortex-A9 at 1.7 GHz, 2 GB of memory) running Ubuntu Linaro 12.11.

Guest Blogger:
Gabor Rapcsanyi is a developer at the Department of Software Engineering, University of Szeged, Hungary. He is a contributor to the open-source WebKit browser engine with committer status, and holds an MSc in Computer Science.
