Processing AST with ActiveSpecializer
valerialistratova
Posted on November 5, 2020
Intro
ActiveSpecializer is a lightweight library that automagically optimizes your code for the JVM and makes it significantly faster. Unlike traditional compile-time optimization techniques, it relies on an alternative concept: it uses runtime information about class instances and rewrites bytecode at runtime. It transforms class fields into static fields and de-virtualizes virtual method calls, replacing them with static method calls.
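To illustrate the idea, here is a hypothetical before/after sketch. The class names and the shape of the "specialized" class are made up for illustration; the actual generated bytecode differs.
// Hypothetical illustration only; names are made up.

// Before specialization: state lives in an instance field and the
// call is dispatched virtually through the interface.
interface Op {
    double apply(double x);
}

final class AddConstant implements Op {
    private final double constant;   // instance field

    AddConstant(double constant) {
        this.constant = constant;
    }

    @Override
    public double apply(double x) {
        return x + constant;         // virtual call when invoked via Op
    }
}

// After specialization (conceptually): the field's value is baked into a
// generated class as a static final constant, and the call becomes a
// static call that the JIT can inline aggressively.
final class AddConstant_Specialized {
    static final double CONSTANT = 42.0;   // value captured from the instance

    static double apply(double x) {
        return x + CONSTANT;
    }
}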
It is a really powerful tool, particularly when it comes to tree-like structures. Let's take a look at a use case.
We will use ActiveSpecializer to transform the ASTs we receive from equation parsing. At the end of the article we'll run some benchmarks to see how well ActiveSpecializer copes with the problem. Let's get started!
Parsing an Equation to an AST
This tutorial is based on the Parsec calculator tutorial, with one important distinction. In the original tutorial, Parsec parses expressions straight to double values:
Parser<Double> parser = new OperatorTable<Double>()
.infixl(op("+", (l, r) -> l + r), 10)
.infixl(op("-", (l, r) -> l - r), 10)
.infixl(Parsers.or(term("*"), WHITESPACE_MUL).retn((l, r) -> l * r), 20)
.infixl(op("/", (l, r) -> l / r), 20)
.prefix(op("-", v -> -v), 30)
.build(unit);
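For context, the op(...) and term(...) calls above are small helpers defined in the original Parsec tutorial. Roughly, they look like the following sketch (reproduced from memory, so treat the exact declarations as approximate):
// Helper parsers from the original Parsec calculator tutorial (approximate).
private static final Terminals OPERATORS = Terminals.operators("+", "-", "*", "/", "(", ")");

// Matches one of the given operator tokens.
private static Parser<?> term(String... names) {
    return OPERATORS.token(names);
}

// Matches the operator token and returns the given value (here, a lambda
// implementing the binary or unary operation).
private static <T> Parser<T> op(String name, T value) {
    return term(name).retn(value);
}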
Instead, we will parse the expressions to an AST:
private static final Parser<CalculatorExpression> EXPRESSION = new OperatorTable<CalculatorExpression>()
.infixl(DELIMITERS.token("+").retn(Sum::new), 10)
.infixl(DELIMITERS.token("-").retn(Sub::new), 10)
.infixl(DELIMITERS.token("*").retn(Mul::new), 20)
.infixl(DELIMITERS.token("/").retn(Div::new), 20)
.infixl(DELIMITERS.token("%").retn(Mod::new), 20)
.prefix(DELIMITERS.token("-").retn(Neg::new), 30)
.infixr(DELIMITERS.token("^").retn(Pow::new), 40)
.build(ATOM);
For example, 2 - 4 * 6 will be parsed into a Sub node whose left operand is the constant 2 and whose right operand is a Mul node over 4 and 6, since multiplication binds tighter than subtraction.
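The Sum, Sub, Mul, Div, Mod, Neg, and Pow classes referenced by the parser are ordinary AST nodes implementing a common CalculatorExpression interface. A minimal sketch of what such nodes could look like (class and field names here are illustrative assumptions, not necessarily the example project's exact source):
// Illustrative sketch of the AST node types the parser produces.
public interface CalculatorExpression {
    double evaluate(double x);
}

// A numeric literal.
public final class Value implements CalculatorExpression {
    private final double value;
    public Value(double value) { this.value = value; }
    @Override public double evaluate(double x) { return value; }
}

// The unknown variable 'x'.
public final class Unknown implements CalculatorExpression {
    @Override public double evaluate(double x) { return x; }
}

// A binary node; Sub, Mul, Div, Mod and Pow look the same apart from the operator.
public final class Sum implements CalculatorExpression {
    private final CalculatorExpression left;
    private final CalculatorExpression right;
    public Sum(CalculatorExpression left, CalculatorExpression right) {
        this.left = left;
        this.right = right;
    }
    @Override public double evaluate(double x) {
        return left.evaluate(x) + right.evaluate(x);
    }
}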
We will parse the following equation to an AST: ((2 + 2 * 2) * -x) + 5 + 1024 / (100 + 58) * 50 * 37 - 100 + 2 * x ^ 2 % 3.
ActiveSpecializer will transform the received AST into a set of static final classes with the values of the provided equation baked in. At runtime, the JIT compiler significantly optimizes and inlines these classes. As a result, we get an optimized, reusable expression instance.
All you need to do to use ActiveSpecializer is:
public static final Parser<CalculatorExpression> PARSER = EXPRESSION.from(LEXER, IGNORED);

private static final Specializer SPECIALIZER = Specializer.create(Thread.currentThread().getContextClassLoader());

public static void main(String[] args) {
    double x = -1;

    // Parse the equation into an AST of CalculatorExpression nodes
    CalculatorExpression expression = PARSER.parse("((2 + 2 * 2) * -x) + 5 + 1024 / (100 + 58) * 50 * 37 - 100 + 2 * x ^ 2 % 3");

    // Specialize the AST: instance fields become static fields, virtual calls become static calls
    CalculatorExpression specialized = SPECIALIZER.specialize(expression);

    System.out.println(specialized.evaluate(x));
}
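Since specialize(...) returns an ordinary CalculatorExpression, the specialized instance can be reused: parse and specialize once, then evaluate for as many inputs as you like. A trivial usage sketch:
// Parse and specialize once...
CalculatorExpression specialized = SPECIALIZER.specialize(
        PARSER.parse("((2 + 2 * 2) * -x) + 5 + 1024 / (100 + 58) * 50 * 37 - 100 + 2 * x ^ 2 % 3"));

// ...then evaluate the optimized instance as many times as needed.
for (double x = -1.0; x <= 1.0; x += 0.5) {
    System.out.println("x = " + x + " -> " + specialized.evaluate(x));
}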
Benchmarks
It’s time for some benchmarks. We will process the aforementioned equation in three different ways and compare the performance:
- manually enter the equation
- parse the equation to an AST and evaluate it without specialization
- parse the equation to an AST and evaluate it with specialization
The three cases are set up like this:
String source = "((2 + 2 * 2) * -x) + 5 + 1024 / (100 + 58) * 50 * 37 - 100 + 2 * x ^ 2 % 3";
manual = x -> ((2.0 + 2.0 * 2.0) * -x) + 5.0 + 1024.0 / (100.0 + 58.0) * 50.0 * 37.0 - 100.0 + 2.0 * (Math.pow(x, 2.0)) % 3.0;
ast = SpecializerCalculatorExample.PARSER.parse(source);
specialized = SPECIALIZER.specialize(ast);
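A minimal sketch of what the JMH harness for these three cases might look like (field and class names are assumptions, imports of the example's own classes are omitted, and the actual benchmark in the example project may differ):
import java.util.concurrent.TimeUnit;
import java.util.function.DoubleUnaryOperator;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class CalculatorBenchmark {
    // The same equation, evaluated three different ways.
    String source = "((2 + 2 * 2) * -x) + 5 + 1024 / (100 + 58) * 50 * 37 - 100 + 2 * x ^ 2 % 3";

    DoubleUnaryOperator manualFn;
    CalculatorExpression astExpr;
    CalculatorExpression specializedExpr;
    double x = 1.0;

    @Setup
    public void setup() {
        manualFn = v -> ((2.0 + 2.0 * 2.0) * -v) + 5.0 + 1024.0 / (100.0 + 58.0) * 50.0 * 37.0 - 100.0
                + 2.0 * Math.pow(v, 2.0) % 3.0;
        astExpr = SpecializerCalculatorExample.PARSER.parse(source);
        specializedExpr = Specializer.create(Thread.currentThread().getContextClassLoader())
                .specialize(astExpr);
    }

    @Benchmark
    public double manual() { return manualFn.applyAsDouble(x); }

    @Benchmark
    public double ast() { return astExpr.evaluate(x); }

    @Benchmark
    public double specialized() { return specializedExpr.evaluate(x); }
}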
We used JMH in AverageTime mode as the benchmark tool. All results are reported in nanoseconds per operation.
Benchmark Mode Cnt Score Error Units
CalculatorBenchmark.ast avgt 10 828.924 ± 8.369 ns/op
CalculatorBenchmark.manual avgt 10 115.985 ± 1.009 ns/op
CalculatorBenchmark.specialized avgt 10 117.635 ± 1.500 ns/op
As you can see, the specialized AST was processed just as fast as the manually typed equation, while the non-specialized AST was about 7 times slower. ActiveSpecializer proved its substantial efficiency!