Quarkus.io – A new Java framework


Hello guys,

2019 promises to be an exciting year for Java developers. We have been experiencing major changes in the Java platform, and if we stop to analyze them, all the changes we are living through today began a few years ago.

With the mass adoption of public and private (on-premises) clouds, the way we look at computational resources was revised. Fortunately, it is no longer acceptable to think of the large-server model, with an excessive amount of memory and processors running extremely heavy application servers and one or more monolithic applications in a single or clustered environment.

It was then up to development teams to apply new techniques that fit this model of software delivery. Together with the strategy of using more servers with fewer computational resources each, instead of one monstrous server, came the approach of a microservice-oriented software architecture.

In general, microservices means adopting a strategy of slicing a monolithic application into several pieces, which run independently (with their own infrastructure, database, etc.) and communicate with each other through requests, usually over REST.

The microservice-oriented architecture proposal is so flexible that it allows each service to be developed using the technology that will bring the best possible result to the problem being solved.

Up to this point, everything looks fine in theory: we split our gigantic servers into smaller instances to simplify management, optimizing memory and processor consumption without idle resources, and we slice our application across one or more technologies, making the most of each programming language.

However, not everything is as simple as it seems. We needed agility in delivering our applications and also a certain standardization in what we delivered to run in the cloud. This is where the concept of containers, with Docker, comes in.

Docker basically enables software delivery through a container, which is nothing more than a new Linux instance, preconfigured with your application ready to run, containing all of its configurations and services, executing inside another Linux. This is possible because Docker uses Linux kernel features such as cgroups and namespaces to segregate processes and thus execute them independently.

In a very simplified way, we can imagine that the developer delivers to a public or private repository a container to run on a Linux server, with an application server configured (or preconfigured) and the application installed and ready to be executed. The infrastructure team “only” runs this container in a cloud instance, without the need to configure one or more services, and that’s it: a new microservice is ready to be consumed by some client.

However, not everything is a bed of roses in software development. A few years ago the developer was only concerned with developing an application as a single deploy, and that was it; now they have to deal with multiple servers, multiple technologies, communication between services, and so on.

Note that application servers such as WildFly, GlassFish, WebSphere, or WebLogic consume gigantic computing resources just to boot, have very time-consuming startup processes (we will talk about this later), and still require numerous configurations spread across XML files that vary according to the resources your application consumes. We were really looking at a scenario that was not very favorable to the adoption of microservices; but since we are developers, we never give up, and we embraced the proposal.

Out of this adoption came Red Hat’s proposal with the old WildFly Swarm, now affectionately called Thorntail. Briefly, it is the WildFly engine distributed through modules: you enable on your application server only the EE modules your application consumes, without needing to enable all the modules that a certified Java EE server ships enabled by default.

Another important point regarding Thorntail is the ability to deliver the application as a fatjar, which can be executed from the command line; the framework is responsible for bootstrapping the application by starting an application server preconfigured within the application itself:

mvn package 
java -jar [your-package].jar

Basically, the Java application delivery model using the fatjar concept, be it with Thorntail for Java EE or Spring Boot for the Spring framework, brought some relief to developers. In a container containing only a JVM, you can simply run your application’s fatjar from the command line and your deploy will be available.

In this way, we can picture a microservice-oriented application as a set of independent services, each delivered as a fatjar running in its own container and communicating with the others.

But remember that a bit earlier I mentioned the startup time of an application? Yes, a disaster: this boot time easily exceeds 30 seconds, reaching 60 seconds in some cases. And that is not good in an environment that needs high availability.

Let’s look at some of the main disadvantages of a microservice-oriented architecture:

  • Distributed services: developing an application with distributed services is considerably more complex, technically, than developing a monolithic application;
  • Infrastructure management: managing a larger pool of smaller servers also has its disadvantages, mainly the constant need for communication between instances; depending on the number of servers, this can become a big problem;
  • Containers are also “heavy”: containers do not make applications lighter, they only automate the delivery. Whatever would be installed on the server is shipped inside the container;
  • A cultural change toward a DevOps model is essential: bringing server configuration concerns into the development team demands a change in the team’s culture, and cultural change is in many cases complicated;

Okay, now what? How do we solve these headaches?

Enter the Serverless proposal, using the FaaS (Function as a Service) concept.

Serverless is built on the idea that we should outsource the provisioning of our infrastructure and servers to the cloud and focus on nothing but code. Some of the key benefits of the Serverless architecture approach:

  • There is no need for server provisioning, maintenance and management;
  • The cost is generated by the execution of the functions;
  • Automatic platform scalability;
  • All service availability is guaranteed by the PaaS;
  • In some platforms you have the possibility of multi-AZ redundancy;
  • Increased developer productivity;
  • Significant time reduction for publishing and initiating cloud solutions;

Adopting Serverless then brings up the idea of nanoservices, which fractionates the vision of a service even further.

So are we going from a monolithic scenario, where a single application could have thousands of classes in one single project, to a scenario where every HTTP method called can be an isolated deploy? Yes, that is exactly it.

To exemplify this with numbers, here is a performance comparison of a function (which calculates, 1000 times, all prime numbers less than or equal to 1000), analyzing its memory allocation, execution time, and final cost:

Memory    Execution time    Cost
128 MB    11.722965 s       $0.024628
256 MB    6.678945 s        $0.024628
512 MB    3.194954 s        $0.024628
1024 MB   1.465984 s        $0.024628
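
For reference, here is a minimal sketch of what such a benchmark function might look like in Java. The original code was not published with this post, so the class and method names below are illustrative:

// Illustrative sketch only; the benchmark actually measured above was not published.
public class PrimeBenchmark {

    // Counts all prime numbers less than or equal to `limit` by trial division.
    static int countPrimes(int limit) {
        int count = 0;
        for (int n = 2; n <= limit; n++) {
            boolean prime = true;
            for (int d = 2; d * d <= n; d++) {
                if (n % d == 0) {
                    prime = false;
                    break;
                }
            }
            if (prime) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // Calculate, 1000 times, all primes less than or equal to 1000.
        for (int i = 0; i < 1000; i++) {
            countPrimes(1000);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("Elapsed: %.6f s%n", elapsed / 1_000_000_000.0);
    }
}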

What is evident in this table is that making a function available in the Serverless model is much more cost-effective than a traditional container running in an instance in the cloud.

In a Serverless architecture using FaaS, it is fundamental to observe that instances are created and destroyed very frequently, and at this point it is essential that starting a new instance be very efficient. Until now, it was impossible for a Java developer to deliver something that could start in less than 1 second. Until now…

Recently, Red Hat released the public version of Quarkus. The goal is clear: to enable Java to lead in delivering solutions for Kubernetes and Serverless.

Using GraalVM and OpenJDK HotSpot, Quarkus is designed to work with the most widely used standards in Java, as well as the new cloud-world specs available today through MicroProfile. So, considering specs, you have at your disposal:

  • Bean Validation;
  • CDI;
  • Logging;
  • MicroProfile Config;
  • MicroProfile Fault Tolerance;
  • MicroProfile Health;
  • MicroProfile JWT;
  • MicroProfile Metrics;
  • MicroProfile OpenAPI;
  • MicroProfile OpenTracing;
  • MicroProfile Rest Client (type-safe);
  • JAX-RS;
  • JPA/JDBC;
  • Servlets;
  • Transactions;

And you still have access to the main frameworks and tools on the market, such as:

  • Apache Camel;
  • Apache Kafka;
  • Hibernate;
  • Infinispan;
  • Jaeger;
  • Kubernetes;
  • Netty;
  • Prometheus;
  • RESTEasy;
  • Vert.x;

If you want to venture into Quarkus, you will find a framework that does not differ much from what you have been using to develop applications (especially with microservice stacks).

There is a very simple Maven starter to create an initial project (we will see it below), you can use Maven or Gradle for dependency management and build (in this post we will use Maven), live-reload is available to ease development mode, and you can write your code in Java or Kotlin (we will use Java).

Creating a new project:

mvn io.quarkus:quarkus-maven-plugin:0.15.0:create \
    -DprojectGroupId=dev.horochovec \
    -DprojectArtifactId=quarkus-test \
    -DprojectVersion=0.0.1-SNAPSHOT \
    -DclassName=dev.horochovec.quarkus.MyResource \
    -Dpath="/resource"
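
The create goal also scaffolds a JAX-RS resource for the className informed above. A minimal sketch of what dev.horochovec.quarkus.MyResource looks like (the generated body may differ slightly between Quarkus versions):

package dev.horochovec.quarkus;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Sketch of the scaffolded resource; the exact generated code may differ.
@Path("/resource")
public class MyResource {

    // Plain-text endpoint exposed at the path informed in the create command.
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }
}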

After creating the project, we run the default build:
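
mvn package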

We can verify that two .jar files were created in the target directory of our project. The runner jar is what we use to start the application from the command line, and to deploy in a container without using GraalVM; it is not to be confused with a fatjar.

All dependencies are inside the /lib directory; that is, you need to copy them, along with the runner jar, to your container in order to execute the project. Assuming the artifactId and version used in the create command above, the target directory looks roughly like this:
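
target/lib/
target/quarkus-test-0.0.1-SNAPSHOT.jar
target/quarkus-test-0.0.1-SNAPSHOT-runner.jar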

And finally, we can start our project using the standard Java version (the jar name below assumes the artifactId and version from the create command):
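
java -jar target/quarkus-test-0.0.1-SNAPSHOT-runner.jar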

However, at this stage we are still not using native delivery through GraalVM. For that, we need to package the project using the additional -Pnative parameter:
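
mvn package -Pnative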

And now we can run our project using the native version compiled with GraalVM (again assuming the generated artifact name):
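
./target/quarkus-test-0.0.1-SNAPSHOT-runner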

And consulting our endpoint in a web browser, or via curl, we get the resource’s response (assuming the default port 8080 and the scaffolded “hello” body):
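
curl http://localhost:8080/resource
hello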

It is astonishing how little time it took for my application to start and become available to the end user: only 0.006 seconds. Until now, this metric was unimaginable in the Java world.

When considering a Serverless architecture, the startup time of a function is paramount for the availability of the project to be considered efficient from a runtime point of view.

Obviously, we should remember that Java started this Serverless race a little behind; however, there is no doubt that Java is now getting off on the right foot in this new software delivery proposal.

Welcome, Quarkus!
