
kevin-bos

With buildpacks to the moon!

If you've read my previous blog on buildpacks, you'll know just how powerful they can be in simplifying the deployment process of cloud-native applications. In this blog, we're going to take things up a notch and show you how to effectively deploy an application built with buildpacks.

While buildpacks offer a simple, efficient way of packaging applications, deploying the resulting image can still be somewhat of a daunting task. In this blog, we'll walk you through the key steps involved, giving you the knowledge you need to expertly handle your deployment process. So, let's dive in, and you'll be deploying your buildpack-built apps like a pro in no time! To keep it simple, we're going to make use of a PaaS solution, so we don't have to worry about the infrastructure.

As discussed in the previous blog, Heroku was one of the creators of Buildpacks. It would be nice to host our application on their service, right?!?

Unfortunately, Heroku has discontinued its free tier. Besides that, there are a few other reasons not to use it anymore.

So what other choices do we have? There are of course plenty of PaaS platforms out there, but there is one that is gaining more and more traction: fly.io.

Fly.io launched back in 2020 with great promises:

fly.io is a way to run Docker images on servers in different cities and a global router to connect users to the nearest available instance. We convert your Docker image into a root file system, boot tiny VMs using an Amazon project called Firecracker, and then proxy connections to it. As your app gets more traffic, we add VMs in the most popular locations.

Even some former Heroku employees describe fly.io as “the Reclaimer of Heroku's Magic”:

fly.io is a Platform-as-a-Service that hosts your applications on top of physical dedicated servers run all over the world instead of being a reseller of AWS. This allows them to get your app running in multiple regions for a lot less than it would cost to run it on Heroku.

The great advantage is that they support Buildpacks!

If you want to learn more about the fly.io architecture, there’s a great overview in the docs. The AWS docs on the Firecracker micro-VM framework are also worth a read (although you won’t notice Firecracker at all when using fly.io).

Using Fly.io

First, we need the fly.io CLI, called flyctl. On a Mac, it can easily be installed with Homebrew:

brew install flyctl
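
Not on a Mac? fly.io also provides an install script (see the flyctl installation docs); on Linux, for example, the following should do the trick:

# Install flyctl via the official fly.io install script
curl -L https://fly.io/install.sh | sh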

When we have the CLI installed, we can sign up to fly.io via:

fly auth signup

Or, if you already have an account, you can log in to fly.io with fly auth login. As you may have noticed, flyctl and fly can be used interchangeably for most commands.
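
For example, these two commands do exactly the same thing and double as a quick check that the CLI is installed:

# Both commands print the installed flyctl version
flyctl version
fly version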

Now we have everything in place to deploy our application to the fly.io cloud.

As stated in the fly.io docs:

Fly.io allows you to deploy any kind of app as long as it is packaged in a Docker image. That also means you can just deploy a Docker image and as it happens we have one ready to go in flyio/hellofly:latest.

In the previous blog we created an image, so we can use that to deploy!

fly launch --image ghcr.io/kevinbos-mte/buildpacks-demo:latest

This command will first ask a few questions (region, app name, etc.) and then deploy our Spring Boot app.
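
If you'd rather answer those questions up front, flyctl also accepts them as flags; a sketch (the app name and region here are just examples, check fly launch --help for the exact flags of your flyctl version):

# Non-interactive variant: pick the app name and region yourself and skip the immediate deploy
fly launch --image ghcr.io/kevinbos-mte/buildpacks-demo:latest \
  --name buildpacks-demo --region ams --no-deploy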

Notice that it also created a fly.toml configuration file in our project. We can configure this file to fit our needs.

The file tells fly.io to use the latest version of our image:

[build]  
  image = "ghcr.io/kevinbos-mte/buildpacks-demo:latest"

To expose the service correctly to the public, it also defines some default routes. Don't forget to update these settings if you're running your application on a different port or protocol!

[[services]]  
  http_checks = []  
  internal_port = 8080  
  processes = ["app"]  
  protocol = "tcp"  
  script_checks = []  
  [services.concurrency]  
    hard_limit = 25  
    soft_limit = 20  
    type = "connections"  

  [[services.ports]]  
    force_https = true  
    handlers = ["http"]  
    port = 80

We can now deploy our application from our local machine with the command fly deploy.
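
For example, from the directory that contains the generated fly.toml:

# Deploy (or redeploy) using the settings from fly.toml
fly deploy
# Check that the app actually came up
fly status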

Automatically deploy our artifact

In the previous blog, we created a GitHub workflow to publish the artifact. We can extend this workflow to automatically deploy the artifact to fly.io.

To deploy our artifact from the GitHub Container Registry to fly.io, we need to create an auth token. To generate this token simply run:

fly auth token

We can use this token in the GitHub Actions workflow by adding a secret for the token. To do this, go to your GitHub Repository’s Settings and click on Secrets/Actions. Here you can create a new secret by clicking on New repository secret, give it the name FLY_API_TOKEN and insert the token that you generated:
Adding fly.io token to github repo
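
Alternatively, if you happen to use the GitHub CLI (gh), you can set the secret straight from your terminal; a sketch, assuming gh is installed and authenticated for this repository:

# Pipe the fly.io token directly into a repository secret
gh secret set FLY_API_TOKEN --body "$(fly auth token)"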

Let's extend the workflow file with:

autodeploy:
  runs-on: ubuntu-latest
  env:
    FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
  steps:
    - name: Install flyctl via https://github.com/superfly/flyctl-actions
      uses: superfly/flyctl-actions/setup-flyctl@master

    - name: Deploy our Spring Boot app to fly.io
      run: flyctl deploy --image ghcr.io/kevinbos-mte/buildpacks-demo:latest

First, we add the FLY_API_TOKEN as an environment variable so that flyctl can use it for the deployment.

Second, we need to install flyctl in GitHub Actions. This is easily done with the official flyctl-actions. Finally, we simply deploy our application with the flyctl deploy command.

So after running our workflow, we successfully deployed our application to fly.io! Right??

Spring Boot is too heavy!

Oh no! Sadly, our Spring Boot app hasn’t been deployed successfully. Having a look at the monitoring of our app at https://fly.io/apps/spring-boot-flyio/monitoring, we can see the problem, the error unable to calculate memory configuration:

$ fly logs -a buildpacks-demo

Waiting for logs...

2022-12-13T16:46:25.020 app[2a62c628] ams [info] Setting Active Processor Count to 1
2022-12-13T16:46:25.145 app[2a62c628] ams [info] Calculating JVM memory based on 194536K available memory
2022-12-13T16:46:25.145 app[2a62c628] ams [info] For more information on this calculation, see https://paketo.io/docs/reference/java-reference/#memory-calculator
2022-12-13T16:46:25.145 app[2a62c628] ams [info] unable to calculate memory configuration
2022-12-13T16:46:25.145 app[2a62c628] ams [info] fixed memory regions require 388194K which is greater than 194536K available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=80994K, -XX:ReservedCodeCacheSize=240M, -Xss1M * 50 threads
2022-12-13T16:46:25.146 app[2a62c628] ams [info] ERROR: failed to launch: exec.d: failed to execute exec.d file at path '/layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/memory-calculator': exit status 1
2022-12-13T16:46:25.929 app[2a62c628] ams [info] Starting clean up.

We could solve this by scaling up our fly.io instance. Simply run the command fly scale memory 1024 to configure it.
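
That would look something like this:

# Scale the VM memory for this app to 1 GB
fly scale memory 1024
# Verify the new VM configuration
fly scale show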

WARNING: Scaling the memory above 256 MB will get our Spring Boot app running on fly.io, but it will also introduce costs! The pricing docs tell us what the free tier includes:

Resources included for free on all plans:
Up to 3 shared-cpu-1x 256mb VMs
3GB persistent volume storage (total)
160GB outbound data transfer

Let's resolve this in a better way.

Native images to the rescue!

We can make use of GraalVM's native image technology to solve our memory problems! Native Image is a utility that converts Java applications into fully compiled binary code; this compilation process is called ahead-of-time (AOT) compilation.

The benefits of using a native image are:

Lower Memory Footprint

If we compile our code into a native image, we can throw away a lot of things from our executable: the JVM features that exist only to make dynamically loaded code run efficiently are no longer needed.
What makes the memory footprint lower?

  • No metadata for loaded classes
    • We still need to keep the compiled code in non-heap memory, but that is much more space-efficient than keeping all the metadata for dynamically loaded classes in Metaspace
  • No profiling data for the JIT, no interpreter code, no JIT structures
    • The JVM normally collects profiling data about the application to determine which optimizations can be applied. This is not needed because our bytecode has already been compiled to native code, so the entire segmented code cache that contains profiling data and interpreter code can be thrown away

Faster Startup

What makes the startup faster?

  • No class loading
    • All classes have already been loaded, linked, and partially initialized. However, this also means that only classes and methods that were traced during the image-build process are included in the binary and can be used at runtime.
  • No interpreted code
    • The generated native code doesn't have to be as aggressively optimized as JIT-compiled code, because we don't get the profile-guided optimizations that are part of the server compiler (called opto or C2), but nothing needs to be interpreted at startup.
  • No CPU burnt on profiling and JIT-ing, and a simple GC to start with (SerialGC)
    • We don't have to start the JIT compiler and JIT-compile our code to make it performant.
  • Generating the image heap during the native-image build
    • The native application is partially initialized, which means that we can run the initialization process for specific classes at build time (running their static blocks) to prepare part of the heap and speed up the startup. Please read the article from Christian Wimmer about class initialization in GraalVM Native Image.

Sounds good right? Let's check how we can use a native image to solve our problems.

Spring Boot 3.0

On the 24th of November, VMware released Spring Boot 3.0! This new version contains a lot of cool new features and improvements. For us, the most notable one for resolving our memory issue is the built-in support for compiling Spring applications to GraalVM native images.

Great, we can now harness the power of native images without much effort. Let's make use of this power and deploy our artifact to fly.io.

To start making use of the native image, we need to update the pack build command:

- name: Build app with pack CLI & publish to GitHub Container Registry  
  run: |  
    pack build ghcr.io/kevinbos-mte/buildpacks-demo:latest \  
        --builder paketobuildpacks/builder:tiny \  
        --buildpack paketo-buildpacks/graalvm \  
        --buildpack paketo-buildpacks/java-native-image@7.41.0 \  
        --path . \  
        --env "BP_JVM_VERSION=17" \  
        --env "BP_NATIVE_IMAGE=true" \  
        --env "BP_SPRING_CLOUD_BINDINGS_DISABLED=true" \  
        --env "BP_OCI_SOURCE=https://github.com/kevinbos-mte/buildpacks-demo" \  
        --cache-image ghcr.io/kevinbos-mte/buildpacks-demo-paketo-cache-image:latest \  
        --publish

We need to tell Paketo that we want to build a native image; we do this by adding BP_NATIVE_IMAGE=true to the environment variables. For most projects this is already enough to create a native image. However, Spring Boot 3.0 requires GraalVM 22.3, which is not yet supported out of the box by Paketo, so we need to specify which buildpacks we want to use: in this case paketo-buildpacks/graalvm and paketo-buildpacks/java-native-image@7.41.0.
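
If you want to try the native build locally before pushing, and assuming you have GraalVM 22.3 (including native-image) installed and the org.graalvm.buildtools.native Gradle plugin applied, something like this should work:

# Compile the Spring Boot app to a native executable on your own machine
./gradlew nativeCompile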

Running our workflow again unfortunately results in an error:

Executing native-image -H:+StaticExecutableWithDynamicLibC -jar /workspace 
Error: /workspace is a directory. (-jar requires a valid jarfile)

This is due to the way Gradle builds work: by default, Gradle produces both a boot-ified and a regular JAR file. By itself this isn't a problem, but Buildpacks need to handle multiple JAR files differently than a single JAR file, and some things, like a native-image build, still only work with a single JAR file.
The good news is that there is an easy fix: we just need to tell Gradle to only build the boot-ified JAR.

In the Gradle build file (here using the Kotlin DSL) we can set:

tasks.getByName<Jar>("jar") {
    enabled = false
}
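
To double-check locally that only a single JAR ends up in build/libs, you can run something like:

# Rebuild and list the produced artifacts; only the boot-ified JAR should remain
./gradlew clean bootJar
ls build/libs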

So with this fix, we can finally deploy our native artifact to fly.io, and we can see that it works now!
And look at that beautiful low memory usage!

Fly.io memory usage

Final thoughts

The use of buildpacks is an excellent choice to simplify the deployment process of cloud-native applications, even for those without significant deployment experience. Platforms like fly.io make it even easier to get started with buildpacks by supporting them out of the box.

Furthermore, buildpacks offer several advantages, such as modularization, performance optimization, ease of maintenance, and metadata provision. In this blog, you have seen how to use fly.io to deploy a Spring Boot application using buildpacks and GraalVM native images to address memory problems.

In summary, using buildpacks is a reliable and efficient way to deploy your applications and is highly recommended for any application deployment requirements.

Let me know if you agree or have a different opinion! :D

Discussion (1)

Sebastiaan Koot

Useful article Kevin!