Pete Freitag
Posted on April 12, 2022
It is rare that a simple JVM argument change can have a dramatic impact on execution times, but in the case of AWS Lambda, adjusting the Tiered Compilation settings can have a really big impact on performance in many (but not all) cases.
The change I made was to add the JVM arguments:
-XX:+TieredCompilation -XX:TieredStopAtLevel=1
On AWS Lambda you can do this by simply adding the environment variable:
JAVA_TOOL_OPTIONS = -XX:+TieredCompilation -XX:TieredStopAtLevel=1
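If you manage your function from the command line instead of the AWS console, something like the following AWS CLI call should accomplish the same thing (the function name my-function is just a placeholder):

# set JAVA_TOOL_OPTIONS on an existing Lambda function
aws lambda update-function-configuration \
    --function-name my-function \
    --environment '{"Variables":{"JAVA_TOOL_OPTIONS":"-XX:+TieredCompilation -XX:TieredStopAtLevel=1"}}'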
The chart below shows the average execution time of a Java (more specifically, written in CFML and using FuseLess) Lambda function over the course of two days. I've annotated where I made the change, and you can see average execution times cut in half; cold start times were also cut in half (though the chart doesn't show that).
Note that this particular Java Lambda function I used for testing does have some highly intensive invocations that can take up to 30 seconds to execute; after the change I didn't see any executions going above 15 seconds.
What is TieredCompilation and what is TieredStopAtLevel?
That graph is probably leading you to wonder what TieredCompilation is in Java. As you may know, the JVM translates your bytecode (class files) into machine code at runtime. This step can take a lot of resources when the compiler tries to make the code optimal: it might add counters so it can figure out where the hot spots are and then do further optimizations. This works great and leads to awesome performance on the JVM when your server runs for a long time. In a way, the JVM tunes itself over time, at runtime.
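If you want to see this behavior on your own machine, the JVM can print each method it compiles along with the tier it was compiled at. For example (the -version run is just a quick way to generate some compilation activity):

# each line of output shows a compile id, the tier, and the method being compiled
java -XX:+PrintCompilation -version

# with the tier capped at 1, you should no longer see tier 3 or 4 entries
java -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+PrintCompilation -version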
When you are running in a serverless environment like AWS Lambda, your runtime will only survive for a short period of time (as little as a few minutes, but no more than a few hours) before being destroyed. The JVM's default configuration assumes that servers run for a long time and that it should optimize the parts of the code that run the most over time. By setting the TieredStopAtLevel flag to 1, we are telling the JVM not to insert the profiling code that is used for runtime measurements and optimizations. In essence we are telling the JVM: don't try to be too smart, just execute the code. In the process we save CPU cycles and get faster execution.
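For reference, HotSpot's tiers are roughly: 0 = interpreter, 1 = the C1 compiler without profiling, 2 and 3 = C1 with increasing amounts of profiling, and 4 = the C2 optimizing compiler. Stopping at level 1 keeps the cheap C1 compiler but skips the profiling and the expensive C2 work. If you want to confirm the flag actually took effect, you can ask the JVM to dump its final flag values:

# should print TieredStopAtLevel = 1
java -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+PrintFlagsFinal -version | grep TieredStopAtLevel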
I learned about this technique from AWS, so it makes me wonder why they don't just enable it by default on all Java Lambda functions. I think all Java Lambda functions could see a lower cold start time with these flags set, but possibly not a lower overall average execution time like I saw. It really depends on whether the code can be optimized by the JVM, and on how many executions run before the Lambda container is recycled. It is certainly worth your time to experiment and see if this flag improves your Java Lambda cold start and average duration times. Such large performance gains are rarely this easy to find.
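One way to run that experiment is to compare the REPORT lines Lambda writes to CloudWatch Logs before and after the change. Here is a rough sketch using the AWS CLI and a CloudWatch Logs Insights query (the log group name is a placeholder, and the date commands assume GNU date):

# kick off a Logs Insights query over the last day of invocations
aws logs start-query \
    --log-group-name /aws/lambda/my-function \
    --start-time $(date -d '1 day ago' +%s) \
    --end-time $(date +%s) \
    --query-string 'filter @type = "REPORT" | stats avg(@duration), max(@duration), avg(@initDuration)'

# start-query returns a queryId; fetch the results with it
aws logs get-query-results --query-id <queryId>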
While my experience was specific to AWS Lambda running Java, I'm sure that if you are running Azure Functions on Java or Google Cloud Functions on Java you could use the same tip to cut your cold start times and hopefully your average duration as well.