Turns out Java can do serverless right: with GraalVM and Spring, cold starts are tamed and performance finally heats up.

Java's powerful and mature ecosystem has long been a top choice for enterprise applications. However, its traditional strengths have presented challenges in serverless environments, particularly the performance penalty known as the cold start. My goal was to build high-throughput, event-driven systems on AWS Lambda without abandoning the Java ecosystem, which meant tackling the cold start problem head on. This is the story of how I tamed the cold start using a combination of modern tooling, robust architectural patterns and a shift in how I think about compiling applications.

The Java challenge in a serverless world

The Java Virtual Machine (JVM) is a marvel of engineering, optimized for long-running, high-performance applications. Its just-in-time (JIT) compiler analyzes code as it runs, making sophisticated optimizations to deliver incredible peak performance. But this strength becomes a weakness in a serverless model. When a Lambda function starts cold, the JVM must go through its entire initialization process: loading classes, verifying bytecode and beginning the slow warm-up of the JIT compiler. This can take several seconds, an eternity for a latency-sensitive workflow.

AWS has attempted to address this with Lambda SnapStart, which works by caching a snapshot of an initialized microVM. It's a clever infrastructure-level mitigation, but for many enterprise needs it comes with limitations and subtle risks related to state uniqueness and stale data. I realized that I needed a solution that didn't just mitigate the cold start but eliminated it at the application level. My search for a solution led me to GraalVM Native Image, a technology that fundamentally changes the nature of the deployed artifact.
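To make the JVM bootstrap cost described above concrete, here is an illustrative sketch (not part of the original project) that measures how long the JVM took to reach application code, using the standard `RuntimeMXBean`. In a Lambda cold start, this bootstrap cost is paid before any handler logic runs.

```java
import java.lang.management.ManagementFactory;

// Illustrative sketch: approximate the time the JVM spent bootstrapping
// (process start, class loading, initialization) before reaching our code,
// by comparing the JVM's recorded start time with the current time.
public class StartupCost {

    public static long millisSinceJvmStart() {
        long jvmStart = ManagementFactory.getRuntimeMXBean().getStartTime();
        return System.currentTimeMillis() - jvmStart;
    }

    public static void main(String[] args) {
        System.out.println("JVM bootstrap + class loading so far: "
                + millisSinceJvmStart() + " ms");
    }
}
```

On a warmed-up developer machine this number is small; inside a freshly provisioned Lambda microVM, the equivalent cost for a full Spring application is what shows up as the multi-second Init Duration.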
The native image transformation: A deep dive into GraalVM and Spring

GraalVM Native Image is an ahead-of-time (AOT) compiler that transforms Java bytecode into a self-contained, platform-specific native executable. Unlike the JVM, which interprets bytecode at runtime, a native executable is machine code that runs directly on the operating system.

The magic behind this is its closed-world assumption. At build time, the native-image tool performs a deep static analysis of the application, starting from its main entry point. It determines every single piece of code that is reachable and discards everything else. This aggressive dead-code elimination, combined with AOT compilation, results in an incredibly small and fast-starting binary.

For my Lambda functions, I chose Spring Cloud Function. It provides a simple, functional programming model (Function<T, R>, Consumer<T>, Supplier<T>) that abstracts away the underlying platform specifics. This allowed me to focus purely on business logic, writing standard Java functions that the framework automatically wires up to the AWS Lambda runtime.

Getting the build right: Maven configuration

To make this work, your project's pom.xml needs a few key components. First is the spring-cloud-function-adapter-aws dependency. This adapter is the crucial bridge: it takes your generic Spring Cloud Function and converts it into a format that can be invoked by the AWS Lambda runtime, handling the translation of incoming AWS events into plain Java objects.

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-function-adapter-aws</artifactId>
    </dependency>
</dependencies>
```

Next, you need two essential plugins. The spring-boot-maven-plugin is standard for any Spring Boot application and is responsible for packaging your code into an executable JAR.
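To illustrate the functional programming model described above, here is a minimal sketch of the business logic as a plain `java.util.function.Function`. In a real Spring Boot application this factory method would be a `@Bean` inside a `@SpringBootApplication` class so that Spring Cloud Function can expose it as the handler; the annotations are omitted here to keep the sketch dependency-free, and the class and method names are hypothetical.

```java
import java.util.function.Function;

// Hypothetical sketch: in a Spring Boot application this factory method would
// be annotated with @Bean inside a @SpringBootApplication class, and Spring
// Cloud Function would wire it up as the Lambda handler automatically.
public class UppercaseFunction {

    // The business logic is just a standard java.util.function.Function:
    // it receives the already-deserialized event payload and returns a result.
    public static Function<String, String> uppercase() {
        return payload -> payload.toUpperCase();
    }
}
```

Because the signature is plain `Function<T, R>`, the same code runs unchanged on the JVM, under a local test, or inside a native image behind the AWS adapter.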
The native-maven-plugin from the GraalVM team integrates the native-image compiler directly into your Maven life cycle, orchestrating the complex AOT compilation process.

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <plugin>
            <groupId>org.graalvm.buildtools</groupId>
            <artifactId>native-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```

Handling environment-specific configurations

A common practice in Spring is to use the @Profile annotation to create different bean configurations for different environments (e.g., dev, test, prod). However, because GraalVM performs a closed-world static analysis at build time, the dynamic nature of profiles doesn't translate directly to a native image. The application is compiled for a single, specific configuration.

The best practice to mitigate this is to externalize environment-specific settings using Lambda environment variables. Instead of relying on @Profile to switch between, for example, a local DynamoDB instance and a cloud-based one, you can define the database endpoint URL as an environment variable. Your code reads this value at startup to connect to the correct resource. This approach keeps the compiled artifact environment-agnostic while allowing for flexible configuration at deployment time.

The payoff: A quantitative analysis of performance gains

After re-architecting my Lambda function to be compiled as a native image, the results were transformative. I compared three deployment approaches: a standard JVM-based function, a native executable packaged as an OCI container image and a native executable packaged as a Zip archive. The data speaks for itself.
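The environment-variable approach described in the configuration section can be sketched in plain Java. The variable name `DYNAMODB_ENDPOINT` and the fallback URL are assumptions for illustration, not part of the original project.

```java
// Sketch of externalized configuration: read an environment-specific endpoint
// from a Lambda environment variable instead of switching beans with @Profile.
// The variable name DYNAMODB_ENDPOINT and the default URL are assumptions.
public class EndpointConfig {

    static final String DEFAULT_ENDPOINT = "https://dynamodb.us-east-1.amazonaws.com";

    // Read the endpoint once at startup; fall back to the cloud endpoint when
    // the variable is not set. Locally you would set it to a local instance,
    // e.g. http://localhost:8000 for DynamoDB Local.
    public static String dynamoDbEndpoint() {
        String value = System.getenv("DYNAMODB_ENDPOINT");
        return (value == null || value.isBlank()) ? DEFAULT_ENDPOINT : value;
    }
}
```

Because the value is resolved at startup rather than at build time, the same native binary can be promoted through dev, test and prod unchanged.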
| Metric | JVM (Spring Cloud Function) | Native Image (OCI Container) | Native Image (Zip Archive) |
| --- | --- | --- | --- |
| Init Duration (ms) | 5770.51 | 3400 | 655.32 |
| Billed Duration (ms) | 6873 | 3746 | 757 |
| Max Memory Used (MB) | 218 | 77 | 159 |
| Package Size (MB) | N/A | 43.96 | 30.7 |

Let's break down what these numbers mean:

- JVM vs. OCI: The OCI container approach, while promising due to its extremely low memory usage (77 MB), introduced its own overhead. The Init Duration of 3.4 seconds was not from the native executable itself (which started in only 254 ms) but from the time AWS Lambda took to download and start the 44 MB container image from ECR.
- JVM vs. Native Zip: The baseline JVM function had a cold start (Init Duration) of over 5.7 seconds. Simply compiling to a native executable and deploying it as a Zip archive reduced this to just 655 ms, a staggering ~9x improvement. Once warm, the native image function was incredibly fast, with a typical billed duration of only 20 ms while using 159 MB of memory. This demonstrates the near-instantaneous performance after the initial cold start.

For developers who need to squeeze out every last millisecond of performance, it's worth noting that removing the Spring Cloud Function framework and writing a plain Java function compiled to a native image can further reduce cold start times by another 30% to 40%. This is a trade-off between the convenience and abstraction of the framework and raw performance.

Building for production: The CI/CD pipeline

A key challenge with native images is ensuring the build environment matches the target runtime. A native executable compiled on macOS won't run on AWS Lambda, which uses Amazon Linux. To solve this, the native image must be built inside a Docker container that mirrors the Lambda runtime, such as Amazon Linux 2023. This process is computationally intensive; the entire cycle to build the native executable and publish the artifact took about four minutes.
However, this is time spent in the build pipeline, not at runtime, which is a worthy trade-off for the massive performance gains.

Packaging for a custom runtime

Because the output is a native executable and not a JAR file, we can't use the standard AWS-provided Java runtimes. Instead, I deploy it using a custom runtime. This requires creating a simple shell script named bootstrap that tells Lambda how to start the application. This file, which must be marked as executable, is placed in the root of the deployment package.

```bash
#!/bin/sh
# Execute the native binary
./spring-serverless-function-listener
```

The build container

The Dockerfile below outlines the process for building the native executable and packaging it with the bootstrap script into a final Zip archive.

```dockerfile
# Use the official Amazon Linux 2023 image as our build environment
FROM amazonlinux:2023

# Define versions for our tooling
ENV JAVA_VERSION=21.0.8+13
ENV NIK_VERSION=23.1.8+1
ENV JAVA_HOME=/opt/bellsoft-liberica-vm-core-openjdk${JAVA_VERSION}
ENV PATH=$JAVA_HOME/bin:$PATH

# Install required build tools and dependencies for native-image
RUN dnf install -y tar gzip gcc glibc-devel zlib-devel zip && \
    dnf clean all

# Download and extract a specific GraalVM distribution, BellSoft Liberica NIK
RUN curl --location https://github.com/bell-sw/LibericaNIK/releases/download/${NIK_VERSION}-${JAVA_VERSION}/bellsoft-liberica-vm-core-openjdk${JAVA_VERSION}-${NIK_VERSION}-linux-amd64.tar.gz | tar -xzf - -C /opt/

# Copy the application source code into the container
WORKDIR /workspace
COPY . .

# Run the Maven build to compile the native executable
RUN ./mvnw -Pnative native:compile -DskipTests

# After compilation, zip the executable and the bootstrap script
RUN cd target && zip -r spring-serverless-function.zip spring-serverless-function-listener ../bootstrap && cd ..

# Keep the container running so we can copy the artifact out
CMD ["tail", "-f", "/dev/null"]
```

Extracting the artifact

This Docker container is used as a temporary build agent. The final step in the pipeline is not to push this image, but to extract the compiled Zip archive from it. The CMD ["tail", "-f", "/dev/null"] instruction keeps the container alive after the build so that subsequent pipeline steps can copy the artifact out. The process in a pipeline would be:

1. Build the Docker image (e.g., native-compile-image).
2. Run the container in the background.
3. Copy the final Zip archive from the container to the build agent's local filesystem.

```bash
# Run the container in detached mode, giving it a name for reference
docker run -d --rm --name native-compile-container native-compile-image

# Copy the compiled zip file from the running container to the build agent's workspace
docker cp native-compile-container:/workspace/target/spring-serverless-function.zip ./target/
```

This process leaves you with a single, deployable spring-serverless-function.zip file, which can then be uploaded to an artifact repository.

Deploying with the AWS CDK

With the deployment artifact ready, I can define the Lambda function using an infrastructure-as-code tool like the AWS Cloud Development Kit (CDK). The CDK code points to the Zip file created by the pipeline and configures the Lambda function.

```typescript
// Example CDK code for defining the Lambda function
import * as path from 'path';
import { Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Assuming 'this' is a CDK Stack context
const functionAssetDirectory = '/runtime/target/spring-serverless-function.zip';
const runtimeDir = path.join(__dirname, '..', functionAssetDirectory);

new lambda.Function(this, 'MyNativeLambda', {
  runtime: lambda.Runtime.PROVIDED_AL2023,
  code: lambda.Code.fromAsset(runtimeDir),
  handler: 'org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest',
  memorySize: 256,
  timeout: Duration.seconds(30),
  // ... other configurations
});
```

Here, lambda.Runtime.PROVIDED_AL2023 specifies that we are using a custom runtime. The code property points to our Zip archive. Crucially, the handler is still set to Spring's generic FunctionInvoker.

Conclusion: Java's place in the serverless era

This journey demonstrates that with modern tools like GraalVM, Java is a highly competitive choice for serverless workloads, effectively addressing historical performance concerns. The ability to compile to a native executable presents a strong option that rivals the startup performance of traditionally faster languages.

To scale this solution and increase developer productivity, I packaged this entire approach into a reusable blueprint. This was published in an internal marketplace, allowing any engineer to spin up a new native Lambda function quickly. The blueprint includes the pipeline to build the native image and the CDK code for deployment, hiding the underlying complexity. This allows teams to focus immediately on writing business logic without needing to become experts in the nuances of native image compilation.

This article is published as part of the Foundry Expert Contributor Network.