Java Libraries are Your Lambda Enemy

Stephanie Gawroriski (Xer)
Published in IOpipe Blog · 5 min read · Aug 23, 2018

So you are using AWS Lambda and your invocations seem to be running a bit slow, especially around cold starts. Have you ever considered that this might be caused by the libraries you are using? Java can be expensive on AWS Lambda, but it does not have to be.

Why is this a problem for Java on AWS Lambda?

Lambda at its core executes your code in short-lived containers which are created and destroyed as they are needed. This is where the cost saving comes in: you do not need to run a server constantly, you only pay for what you use. A cold start is when a new container is initialized from nothing; its name comes from the similar term cold boot (turning on your computer when it is off). When a request is not being handled by the lambda, the container will be “frozen” and unable to perform any work. Additionally, containers can only handle a single request at a time, meaning that if two requests come in at once, two containers will exist to handle those requests, each with its own resources.

Java on Lambda runs on top of OpenJDK 8. If you have been around in the past and remember the big deal about the Client and Server virtual machines, then you will know where this is coming from. If not: Java historically has had rather bad startup times due to the vast number of classes which have to be loaded, initialized, and then potentially compiled to native code. On Lambda, these steps take up the bulk of your cold start duration. Native compilation happens as a background process; while it runs, your code executes in the pure interpreter. If you do not know what the pure interpreter is, it is essentially the part of the virtual machine which runs your Java byte-code without any optimizations, so it is slow as a result. Usually, native compilation will be spread across multiple invocations since it runs in the background; this means invocations will get faster as more resources become available and more native code is executed. However, depending on how long it takes to handle a single request, this can mean the first few dozen invocations will run at increased latency.
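
You can get a rough feel for this cost yourself by timing how long a single heavyweight class takes to load. This is a minimal sketch, not a rigorous benchmark; it assumes Apache HttpClient happens to be on the classpath, and the class name is just one example of a dependency-heavy entry point:

```java
// Rough illustration of class loading cost. Class.forName() loads and
// initializes the named class along with everything it transitively needs.
public class LoadTimer {
    public static void main(String... args) throws ClassNotFoundException {
        long start = System.nanoTime();
        // Assumes Apache HttpClient is on the classpath; this single call
        // pulls in a large number of supporting classes.
        Class.forName("org.apache.http.impl.client.HttpClients");
        System.out.printf("Loaded in %.2f ms%n",
            (System.nanoTime() - start) / 1_000_000.0);
    }
}
```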

Due to a combination of these factors, the more code that has to be initialized and compiled, the more work has to be done in a short duration. If you are switching from traditional long-lived servers to serverless functions, which are generally far more short-lived, then startup costs which barely mattered before may become a major concern. Additionally, if there is a large spike of invocations where multiple containers have to be started, the cost will easily multiply as your lambdas try to catch up.

How did I find this out?

While developing IOpipe’s Java agent, I ran into this issue many times. Before I knew anything about Lambda, I had cold start invocations that hit the default timeout of 30 seconds. I performed all my testing with the Lambda set to 128MB, the slowest tier, so execution time was increased greatly. To see how long invocations actually took, I had to raise the timeout, which revealed executions taking a few minutes to complete. This is quite a long duration, far too long to be viable.

Initially, the agent used Apache HttpClient. It is a popular library choice, yet it is too big: making even a single HTTP connection to a server requires a large number of classes to be read, initialized, and compiled. The sheer number of classes alone introduced a large amount of latency.

After that I switched to OkHttp3; it fared much better, and the cold start duration dropped to about 12 seconds. It is much smaller than the gigantic Apache library and as a result caused the virtual machine to load and execute faster. Although it is still a decent size, switching to something far simpler would probably reduce the cold start duration again. Another switch I made was from log4j2 to tinylog, which cut the cold start duration in half. So there is a definite pattern: smaller libraries load faster because less work needs to be performed on them.
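
For context, a minimal OkHttp3 POST looks roughly like this; it is a generic sketch with a placeholder URL and payload, not the agent’s actual code:

```java
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class OkHttpExample {
    public static void main(String... args) throws Exception {
        OkHttpClient client = new OkHttpClient();
        // Placeholder JSON payload.
        RequestBody body = RequestBody.create(
            MediaType.parse("application/json"), "{\"hello\":\"world\"}");
        Request request = new Request.Builder()
            .url("https://example.com/collector")  // placeholder URL
            .post(body)
            .build();
        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
        }
    }
}
```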

What can be done to fix this now?

So the solution is to make your code lighter by reducing dependencies and thus the number of classes. Instead of reaching for fancy utility libraries, you will have to look for smaller and lighter libraries or write your own. This effectively means the big Apache libraries are something you should not touch at all when writing lambda functions, because they are gigantic. Even log4j2 can add 6 seconds to your cold start duration. Try your best to use dependencies which are lighter and perform the same tasks using less code.

If you run mvn site on your project, you can look at the dependency information and see just how many classes exist in a library. The aforementioned log4j2, for instance, uses a combined total of 1,205 classes across its core and API JARs; consider how many of those might be initialized and compiled. Some libraries might fare better than others depending on how they are used, but others might end up being terrible for even the simplest of things. One major problem is that these libraries are ubiquitous, and there might not be an alternative to use. If you cannot find an alternative library, you may have to write the missing functionality yourself, as in the sketch below. When it comes to performance, “Not Invented Here” is a viable and valid choice on Lambda until lighter libraries become more popular, but it is not a sustainable choice because it increases development time.
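
As an illustration of the “write it yourself” route, the JDK’s built-in java.net.HttpURLConnection can perform a simple POST with zero third-party dependencies, so no extra library classes need to be loaded at all. A minimal sketch (the URL and payload are placeholders):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PlainHttpPost {
    public static void main(String... args) throws IOException {
        byte[] payload = "{\"hello\":\"world\"}".getBytes(StandardCharsets.UTF_8);
        // Placeholder URL; only JDK classes are involved here.
        HttpURLConnection con = (HttpURLConnection)
            new URL("https://example.com/collector").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            out.write(payload);
        }
        System.out.println(con.getResponseCode());
        con.disconnect();
    }
}
```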

Any solutions that do not require a rewrite?

There are two main solutions you can take without rewriting your code, though both may increase the cost of your invocations.

The first solution is to increase the amount of memory allocated to your Lambda. This has a two-fold effect: it increases the available processing power and gives the virtual machine more memory to work with, which means less garbage collection and faster loading of classes. However, since Lambda bills in proportion to allocated memory, increasing the allocation from 128MB to 512MB will multiply your cost by 4x (so a 5 USD/month function becomes a 20 USD/month function).
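
If you manage your functions programmatically, raising the memory setting is a single API call. Here is a minimal sketch using the AWS SDK for Java, run from a deployment machine rather than inside the lambda itself; the function name is a placeholder and your AWS credentials are assumed to be configured:

```java
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateFunctionConfigurationRequest;

public class RaiseMemory {
    public static void main(String... args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
        // "my-function" is a placeholder name; the memory size is in MB,
        // so 512 here is 4x the 128MB minimum.
        lambda.updateFunctionConfiguration(new UpdateFunctionConfigurationRequest()
            .withFunctionName("my-function")
            .withMemorySize(512));
    }
}
```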

Another solution is to keep your invocations warm, meaning they are periodically executed even when there is no application-specific reason to invoke them. However, this is complicated by the fact that your lambda now needs to handle an extra input just for keeping warm, and you must consider any extra state that may result. Also, over time containers will be destroyed for having been active too long and will have to be initialized again. This approach also does not help when more than one request must be handled at once, since additional containers will still be created cold. And if you are just going to keep your lambda running at all times, there is little point in switching to serverless.
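
One common shape for this is to have a scheduled event (a CloudWatch Events rule, for example) send a recognizable payload and short-circuit it in the handler. The sketch below is hypothetical; the “warmup” flag name is an assumption, not a standard:

```java
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical handler that treats {"warmup": true} as a keep-warm ping.
public class WarmableHandler implements RequestHandler<Map<String, Object>, String> {
    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        // Short-circuit scheduled pings so they do no real work.
        if (input != null && Boolean.TRUE.equals(input.get("warmup"))) {
            return "warmed";
        }
        // Normal application logic goes here.
        return "hello";
    }
}
```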

Does it always have to be like this?

Nope! Newer versions of Java have ahead-of-time compilation (JDK 9 introduced an experimental AOT compiler, jaotc, under JEP 295), which means code that will run on the virtual machine can be compiled to native code before the program is ever run. When Amazon adopts a newer version of Java, they may be able to take advantage of this feature, which would reduce cold start times and the cost of lambda executions.

So, if your cold start duration is very high and causing you trouble, using smaller and simpler libraries can improve the performance of your Java AWS Lambda functions and reduce their cost.


I develop SquirrelJME, an implementation of Java ME 8 (CLDC 8/MEEP 8/MIDP 3). I love squirrels; they are adorable and cute 🐿️!