A Most Perfect Union: Just-In-Time Compilers

Vaidehi Joshi

Posted on December 31, 2017


A most perfect union: just-in-time compilers.

The world of computer science always seems to come down to tradeoffs. Sometimes, we’re forced to choose between two data structures, algorithms, or functions that could both get the job done, but are very different in nature. At the end of the day, the thing that we’re really choosing is which things we care about and value the most, and which things we’re willing to sacrifice.

But it’s not just computer science that this truth applies to; it’s all of computing. Even if we’re not directly working with computer science concepts, we still have to make choices and weigh tradeoffs when it comes to our code and how we write it. On a broader level, we have to consider the pros and cons of different technologies, design decisions, and implementation strategies, too. Again: more tradeoffs!

But tradeoffs in technology aren’t all bad. Sometimes, they’re exactly what drives us forward. New frameworks and languages are often created just so that developers don’t need to choose between things; in other words, so that the tradeoffs we face don’t have to be so steep. Many technologies aim to make these choices easier and far less painful, so that other programmers don’t need to pick between two very different ways of solving a problem. Instead, these new approaches try to take the best of both worlds and find a happy medium, all the while learning from and fusing together concepts that already exist. In the world of computing, this has happened time and time again.

Perhaps the most interesting example of this is the union of the compiler and the interpreter; it combined two powerful technologies and created something new, which we now know today as the just-in-time compiler.

A rare breed: the compiler-interpreter mix

Last week, we took a deeper look at the compiler and the interpreter, how both of them work, and how one of them, the compiler, led to the creation of the other, the interpreter. As it turns out, the history of the interpreter is intrinsically connected to what came soon afterwards: the just-in-time compiler.

We’ll recall that the interpreter was invented in 1958 by Steve Russell, who was working with an MIT professor named John McCarthy at the time. McCarthy had written a paper on the Lisp programming language, and Russell had been drawn to working with his professor after reading it.

However, John McCarthy also wrote another paper called “Recursive Functions of Symbolic Expressions and Their Computation by Machine”, which was published in 1960. Although we can’t be entirely sure, this paper appears to contain some of the earliest references to just-in-time compilation.

The IBM 7094 operator’s console, © Wikimedia Foundation

Another early reference to just-in-time compilers appears in 1966, in the manual for the University of Michigan’s Executive System for the IBM 7090. The manual for this particular machine’s system explains how it is possible to both translate and load code while executing it, a clue that just-in-time compilers were already starting to be implemented on a more practical level by the mid-1960s!

Okay, but hang on a second — what exactly did that manual mean? We’ve looked at when and where the just-in-time compiler first showed up…but what even is a just-in-time compiler to begin with?

Well, a simple way to think about it is this: the just-in-time compiler (or JIT for short) is the child of its two parents, the compiler and the interpreter.

Just-in-time compilers: a definition.

The JIT is a fusion of the interpreter and the compiler, each of which is a translator in its own right. A just-in-time compiler has many of the benefits of both of these translation techniques, all rolled up into one.

We’ll recall that both the compiler and the interpreter do the work of translating a programmer’s source code into executable machine code, either by translating it in one shot (compiler), or by interpreting and running the code line-by-line (interpreter).
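To make that contrast concrete, here is a toy sketch in Python, using the built-in compile() and exec() functions purely as stand-ins for a real compiler and a real interpreter:

```python
source = "x = 2\ny = x * 3\nprint(y)"

# Compiler-style: translate the entire program up front...
program = compile(source, "<demo>", "exec")
# ...and only then execute the finished result.
exec(program)  # prints 6

# Interpreter-style: translate and execute one line at a time.
for line in source.splitlines():
    exec(line)  # each line is translated anew every time it is run
```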

A compiler makes for a great translator because the code it produces runs fast; however, it has to translate all of the source code into a binary file before it can execute any of it, which can make it painful to debug a problem in the source when we only have machine code to work with.

On the other hand, the interpreter can directly execute pieces of code during runtime, which means that if something goes wrong, it still has the context of where the executing code was called from. However, the interpreter has to retranslate the same code multiple times, which can make it slow and less efficient.

A JIT compared to its parents: the compiler and the interpreter.

So where does the JIT fit into this? Well, to start off, the JIT acts like one of its parents — namely, it acts like an interpreter at first, executing and re-running code as it is called. However, if the JIT finds code that is called many times and invoked repeatedly, it behaves like its other parent: the compiler.

The JIT acts like an interpreter until it notices that it is doing a bunch of repeated work. At that point, it behaves more like a compiler, and will optimize the repeatedly-called code by compiling it directly. This allows a JIT to pull in the best of both of its “parent” translators, the compiler and the interpreter. While it does begin by interpreting the source text, it does so in a special way: the JIT keeps a careful watch on the code that it is running as it interprets.

A JIT needs to be able to answer the question:

Can I keep interpreting this code directly, or should I just go ahead and compile this so I don’t need to keep repeating the work of translating?

So how does it answer this sometimes difficult question? Well, the JIT keeps a close eye on what’s happening, and monitors or profiles the code that it is executing.

How the JIT initially interprets the code in a source text.

While the JIT is interpreting the code, it simultaneously monitors it. When it notices repeated work, it thinks to itself: “Hey! This is silly. I don’t need to do this unnecessary work. Let me be smart about how I deal with this code.”

Now, this seems great in theory. But how does the JIT know how to answer this question in practice, exactly? Time to find out!

Smoke leads to fire, fire leads to compilation

We know that a JIT has to keep a close eye on the code that it runs. But how exactly does it monitor what’s going on? Well, we might imagine what we would do if we were monitoring something from the outside: we’d probably keep a piece of paper or a notepad nearby, and mark things down in order to keep track of events as they happen.

The JIT does exactly that. It usually has an internal monitor that “marks” code that seems suspect. For example, if a section of our source code is called a few times, the JIT will make a note of the fact that this code keeps being called; code like this is referred to as “warm” code.

The JIT uses warmth and heat to determine how to optimize our code!

By the same token, if some lines in our source code are run many, many times, the JIT will make a note of it by marking that section as “hot” code. By using these markers, the JIT can easily figure out which lines and sections of code could be optimized later on; in other words, which could be compiled rather than interpreted.
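As a minimal sketch of that internal “notepad” (in Python, with entirely hypothetical thresholds and names, since real JITs tune these carefully), the marking might look something like this:

```python
from collections import Counter

WARM_THRESHOLD = 10    # hypothetical: how many calls make code "warm"
HOT_THRESHOLD = 1000   # hypothetical: how many calls make code "hot"

call_counts = Counter()  # the JIT's notepad: calls seen per section of code

def record_call(section):
    """Mark down a section of code every time the interpreter runs it."""
    call_counts[section] += 1

def temperature(section):
    """Classify a section by how often it has been called so far."""
    count = call_counts[section]
    if count >= HOT_THRESHOLD:
        return "hot"   # a candidate for serious optimization
    if count >= WARM_THRESHOLD:
        return "warm"  # worth a quick optimization
    return "cold"      # keep interpreting (never called at all: "dead")
```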

Understanding the value and usefulness of “warm” and “hot” code makes a lot more sense with an example. So, let’s take a look at an abstracted version of some source text, which could be in any language, and of any size. For our purposes, we can imagine that this is a very short program that is only 6 lines of code long.

How does the JIT know what to do with a hot line of code, a warm line, and a line that is never even called?

Looking at the illustration shown here, we can see that line 1 is called very, very often. The JIT will recognize pretty quickly that line 1 is “hot” code.

Line 4, on the other hand, is never actually called; perhaps it is setting a variable that is never used, or is a line of code that never ends up being invoked. This is what is sometimes called “dead” code.

Finally, line 5 is called fairly often, but not nearly as often as line 1. The JIT will recognize that this is “warm” code, and could potentially be optimized in some way.

The JIT needs to consider what it should do with each of these lines of code so that it can figure out the best way to optimize. This matters because not all optimization is actually good; depending on how the JIT decides to optimize, the optimization might not turn out to be all that helpful!

Let’s look at some of these lines to see how exactly the JIT could end up making a poor optimization choice if it isn’t clever enough.

Baseline compilation: a definition.

We’ll start with line 1. In this situation, the code on line 1 is executed very, very often. Let’s say that the JIT, as it monitors the program, notices that this line is being repeated often. It will inevitably take that “hot” line of code and decide to compile it.

But the way that the JIT decides to compile this code is just as important as the fact that it is compiling it to begin with.

A JIT can perform different kinds of compilations, some of them quick, and some of them more complex. A quick compilation is a lower-performance optimization; it involves compiling the code and storing the compiled result without spending too much time on it. This quick form of optimization is known as baseline compilation.

However, if the JIT chose to do a baseline compilation of line 1, how would that affect the runtime of our code overall? Well, a poor optimization choice on line 1 would cause our runtime to grow linearly (O(n)) as the number of calls to the method on line 1 increases, since each call would still cost us roughly the same amount of work.

Alternatively, the JIT could perform a longer, more in-depth kind of performance optimization called optimizing compilation, or opt-compiling. Opt-compiling involves investing time up front to optimize a piece of code by compiling it as efficiently as possible, and then reusing the stored result of that optimization.

We can think of baseline compilation versus opt-compiling as two different approaches to editing an essay.

Baseline compilation is a little bit like editing an essay for spelling, punctuation, and grammar; we’re not doing an in-depth improvement of the essay, but we are making a few improvements. On the other hand, opt-compiling is akin to editing an essay for content, clarity, and readability — in addition to spelling and grammar. Opt-compiling takes more up-front work, but leads to a better end result.

The nice thing about opt-compiling is that, once we’ve compiled a section of code in the most optimized way possible, we can store the result and run that compiled code again and again. This means that no matter how many times we call a method in the section of code that we’ve optimized, it will take constant time to run, since we’re really just executing the same compiled result each time. Even as the number of method calls goes up, the runtime for code execution stays the same; this results in constant time (O(1)) for code that has been opt-compiled.
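As a rough sketch of this “compile once, reuse forever” idea (again in Python, with compile() standing in for a much heavier optimizing compiler, and a made-up section name), it might look like this:

```python
opt_cache = {}  # stores opt-compiled results, keyed by a section name

def run_opt_compiled(section, source):
    """Opt-compile a section at most once, then reuse the stored result."""
    if section not in opt_cache:
        # The expensive, one-time investment: translate and optimize.
        # (Python's compile() stands in for a real optimizing compiler.)
        opt_cache[section] = compile(source, section, "exec")
    # Every later call skips translation entirely: a constant-time
    # lookup plus execution of the already-compiled code.
    exec(opt_cache[section])

# No matter how many times we call this, translation happens only once.
for _ in range(1_000):
    run_opt_compiled("one", "x = 1 + 1")
```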

Optimizing compilation: a definition.

Based on the Big O Notation of opt-compiling alone, it might sound like opt-compiling should always be the way to go! However, there are some instances when opt-compiling can be wasted effort.

For example, what would happen if our JIT went ahead and started opt-compiling everything? We’ll recall that line 4 is never actually called; it is “dead” code. If our JIT took the time up front to opt-compile line 4, which is never even run, then it will have spent an unnecessary amount of time prematurely optimizing a line of code that is never invoked. In this scenario, opt-compiling blindly, without taking a deeper look at what’s actually going on in the code and without relying on the hotness of the code itself, ends up being rather wasteful!

So, what’s a JIT compiler to do? Well, ultimately, it needs to find a happy medium between baseline compilation and opt-compiling. This is exactly where the “hotness” of code comes into play.

Finding a happy medium between baseline and opt-compiling.

The JIT uses the “hotness” of a line of code to decide not just how important it is for that code to be compiled, but also which strategy (baseline or opt-compiling) to use when it compiles.

A happy, hot path leads to optimal JIT compiling

We already know that the JIT uses the “hotness” of code in order to decide which kind of compilation strategy to use. But how does it make its decision, exactly?

Combining hotness and optimal compilation strategies!

For code that is neither “hot” nor “warm” — including code that is “dead” — the JIT will behave just like an interpreter, and won’t even bother making any compiler optimizations whatsoever.

But, for code that is “warm” but not “hot”, the JIT will use the quicker, baseline form of compilation during program execution. In other words, as it interprets this code and notices that it is “warm”, it sends the code off to be compiled while the program is still executing. It compiles that “warm” code in the quickest, low-performance way possible. This still yields a slight improvement, because even baseline compilation is better than nothing for “warm” code.

However, for code that is “hot” and called upon frequently, the JIT will make a note of it, and once that code has been called enough times, the JIT will interrupt program execution (interpretation) and send the code off to be opt-compiled: optimized in the best possible way, which also means more time invested in compilation up front. The benefit is that the “hot” code only needs to be optimized once, even though doing so is slightly more work. Once the “hot” code has been optimized, the JIT will just keep reusing and rerunning the machine code for the optimized version again and again during runtime, without ever needing to send it off to be recompiled.

The basic rule of thumb to remember is this:

For code that is not called often, the JIT will use baseline compilation, which is faster. However, for code that is called frequently, the JIT will use the longer opt-compile method, because it knows that it is worth the effort.
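Putting the pieces together, here is one hedged sketch of how that rule of thumb might drive a JIT’s dispatch loop; the thresholds, caches, and section names are all hypothetical, and Python’s compile() again stands in for both the baseline and the optimizing compiler:

```python
from collections import Counter

WARM_THRESHOLD = 10    # hypothetical promotion point for baseline compilation
HOT_THRESHOLD = 1000   # hypothetical promotion point for opt-compiling

calls = Counter()
baseline_cache = {}    # quickly-compiled sections, keyed by section name
opt_cache = {}         # heavily-optimized sections, keyed by section name

def execute(section, source):
    """Interpret cold code, baseline-compile warm code, opt-compile hot code."""
    calls[section] += 1
    if calls[section] >= HOT_THRESHOLD:
        if section not in opt_cache:
            # One-time, expensive optimization; reused on every later call.
            opt_cache[section] = compile(source, section, "exec")
        exec(opt_cache[section])
    elif calls[section] >= WARM_THRESHOLD:
        if section not in baseline_cache:
            # Quick, cheap compilation: a modest but easy win.
            baseline_cache[section] = compile(source, section, "exec")
        exec(baseline_cache[section])
    else:
        # Cold code: just behave like a plain interpreter.
        exec(compile(source, section, "exec"))
```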

The risk of dynamic translation.

Ever so rarely, the JIT will make a call that is incorrect. That is to say, it will determine that some piece of code is called enough to be opt-compiled when, as it turns out, maybe it isn’t! For example, suppose our JIT opt-compiles any line of code once it has been called 5 times; when it sees a line run for the 5th time, it will send that line off to be opt-compiled. In very rare occurrences, it might so happen that the line of code it opt-compiled is never called again, in which case all the work it put into compiling that line went to waste.

This is just a part of the story when it comes to dynamic translation, which is what just-in-time compilation happens to be. Every so often, the JIT could decide to optimize a piece of code that won’t actually ever be called again. This is pretty rare, though, because most lines of code are either called very frequently or only a handful of times. It’s likely that most modern-day JITs account for this very well, but it is possible for a JIT to be wrong every once in a while.

Most of the time, a JIT is pretty good about knowing when it should behave like an interpreter and when it should take a piece of code and compile it. The nice thing about this is that our JIT allows us to speed up only the things that need to be sped up. Just-in-time compilation allows us to optimize and compile the code that we run most often.

Furthermore, it allows us to continue to hold onto the place in our source code where that compiled code was run in the first place! In other words, we can still reference where some compiled code was run.

The benefits of the just-in-time compiler!

For example, in the image above, our JIT determined that function one() has a high “hotness”, and can be opt-compiled to be more efficient. Even though function one() was compiled, we can still reference where that compiled code came from in our source text. Effectively, if there are any errors in the compiled code, we know exactly where it came from in the source. Since the compilation happens during runtime, we can easily debug any errors or problems, because we know to look at function one() for clues; the error is coming from the compiled code generated by this particular function.

The just-in-time compiler gives us the best of both worlds: it allows us to run fast code that has been optimized and executed via compilation, while still retaining the context from the interpreter, which programmers love to have while debugging.

A JIT gives us the best of the interpreter and the compiler.

The JIT is a perfect example of how, every once in a while, we get lucky in computer science and don’t have to choose between tradeoffs. Every so often, it turns out we can have our compiler and interpret our code, too!

Resources

Even though JIT compilers are implemented within many commonly used languages today, it can be hard to find good resources that really explain what they are, how they work, and why they are important. There are, of course, some videos and articles that do a good job of answering these questions, but you have to dig a little to find them. Luckily, I did the digging for you! Here are some good places to start if you’re looking for further JIT-related reading.

  1. A crash course in just-in-time (JIT) compilers, Lin Clark
  2. What are Interpreters, Compilers & JIT compilers?, Avelx
  3. Just in Time Compilation, SEPL Goethe University Frankfurt
  4. Understanding JIT compiler (just-in-time compiler), Aboullaite Mohammed
  5. Just in Time Compilation, Professor Louis Croce
  6. A Brief History of Just-In-Time, Professor John Aycock
