© 2024 Kishan Kumar. All rights reserved.

Introduction to Vert.x

Vert.x is called a "polyglot" toolkit because it supports multiple programming languages and paradigms.

Nov 18, 2023



Vert.x is a toolkit that provides a set of libraries and utilities for building reactive applications that run on the Java Virtual Machine (JVM). Applications built with it are resource-efficient, highly concurrent, and flexible.

Reactive applications are event-driven, i.e., they respond to changes or events in an asynchronous and non-blocking way.

You may have read articles calling Vert.x a framework, but that is not quite accurate. It is a toolkit, and there is a difference between the two: a framework usually imposes a certain structure or convention on how the application should be organized, while a toolkit gives developers more freedom and flexibility to choose components and design.

Vert.x is called a "Polyglot" toolkit because it supports multiple programming languages and paradigms. Some of the languages it supports are Java, Kotlin, JS, Groovy, Ruby, etc.

It is like a collection of Lego bricks that can be assembled in various ways, rather than a pre-built model that has a fixed shape and function.

How does Vert.x handle concurrency and scalability?

Well, it handles them by using a reactive, event-driven approach. It relies on the following concepts to achieve high performance and scalability:

Event Loop

It is a thread that runs in an infinite loop, checking for and dispatching events to the appropriate handlers. Now, you might ask, what is an event or a handler?

Events can be network I/O, file I/O, timers, messages, and so on, while handlers are pieces of code registered by the application to handle specific events.

Let's take an example to understand it better. Say you have written a piece of code that runs whenever you receive a response from a server. First, you send the request to the server and go on doing some other task. As soon as the server sends you the response, your piece of code will run. Here, the server response is an event, and the piece of code that ran is your handler.

Note: the event loop must never be blocked by any handler. Otherwise, it will affect the responsiveness of the entire system.

By default, Vert.x creates two event loop threads per CPU core. Each event loop can serve a large number of concurrent connections, so a small number of kernel threads handles a lot of concurrency, which significantly increases the efficiency and throughput of the application.
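To make the idea concrete, here is a toy, single-threaded event loop in plain Java. This is only a sketch of the concept, not Vert.x's actual implementation, and the class and method names (`ToyEventLoop`, `emit`, and so on) are invented for illustration: events are queued, and one loop dispatches each event to its registered handler.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ToyEventLoop {
    // registered handlers: event name -> handler code
    private final Map<String, Consumer<String>> handlers = new HashMap<>();
    // pending events: {event name, payload}
    private final Deque<String[]> queue = new ArrayDeque<>();

    void handler(String event, Consumer<String> h) { handlers.put(event, h); }

    void emit(String event, String payload) { queue.add(new String[]{event, payload}); }

    // the "loop": dispatch queued events to their handlers, one at a time
    void run() {
        while (!queue.isEmpty()) {
            String[] e = queue.poll();
            Consumer<String> h = handlers.get(e[0]);
            if (h != null) h.accept(e[1]); // a handler must return quickly!
        }
    }

    public static void main(String[] args) {
        ToyEventLoop loop = new ToyEventLoop();
        // register a handler, as in the server-response example above
        loop.handler("http.response", body -> System.out.println("got: " + body));
        loop.emit("http.response", "200 OK"); // the "server response" event arrives
        loop.run();                           // prints: got: 200 OK
    }
}
```

Because one thread dispatches everything, a single slow handler stalls every other event in the queue, which is exactly why the event loop must never be blocked.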

Worker Threads

Since we are not allowed to run long-running tasks on an event loop, there has to be a way to address this. Vert.x provides worker threads for that. These can be used to execute blocking or long-running tasks that would otherwise block the event loop threads.

Worker threads are managed by Vert.x, and the application can specify how many worker threads it needs and how to deploy them.

But how does the worker thread communicate with the main thread? The answer lies in the event bus, which is a distributed and lightweight messaging system that allows different components to exchange messages in a publish-subscribe or point-to-point fashion.

Worker threads can also use callbacks and futures to handle the results of their tasks.
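The hand-off between the two kinds of threads can be sketched in plain Java. This is a toy illustration, not how Vert.x is implemented, and the names (`ToyWorkerOffload`, the queue) are invented: blocking work runs on a worker pool, and the result is handed back to the "event loop" thread through a queue.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyWorkerOffload {
    public static void main(String[] args) throws Exception {
        ExecutorService workerPool = Executors.newFixedThreadPool(2);
        // results are handed back to the "event loop" through this queue
        BlockingQueue<String> resultQueue = new LinkedBlockingQueue<>();

        // offload a blocking task to a worker thread
        workerPool.submit(() -> {
            try {
                Thread.sleep(100); // simulate blocking work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            resultQueue.add("done on " + Thread.currentThread().getName());
        });

        // the "event loop" side picks up the result; it never ran the blocking work itself
        System.out.println(resultQueue.take());
        workerPool.shutdown();
    }
}
```

In real Vert.x the result is delivered back as an event on the event loop rather than through an explicit queue, but the division of labor is the same.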

Vert.x provides two ways to use worker threads:

  1. Worker verticles: these are verticles that run on worker threads instead of event loop threads. Verticles are the basic unit of deployment and execution in Vert.x; we'll touch upon them shortly. Worker verticles are designed for calling blocking code, as they won't block any event loops.
  2. executeBlocking: It is a method that allows the application to execute some blocking code on a worker thread safely. The method takes a callable or a handler as an argument, which contains the blocking code to be executed. The method also takes a boolean argument, which indicates whether the execution should be ordered or not. If ordered, the executions for the same context are executed serially; otherwise, they may be executed in parallel.
vertx.executeBlocking(future -> {
    // check which thread is executing this task
    System.out.println(Thread.currentThread().getName());
    try {
        // simulate a task that takes a long time to complete
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    future.complete();
}, result -> {
    // this result handler runs back on the event loop
    System.out.println(result.succeeded());
});

Verticles

As touched on earlier, verticles are the basic units of deployment and execution in Vert.x.

They are chunks of code that can be written in different languages, and they can be deployed by Vert.x in different ways. Verticles can be deployed as standard verticles, which run on the event loop threads, or as worker verticles, which run on the worker threads.

Verticles are loosely coupled and communicate with each other via the event bus. Verticles can also use shared data structures, such as maps and counters, to coordinate their state. Verticles are designed to be scalable and fault-tolerant, and they can be deployed across multiple nodes in a cluster.

Let's create a TestVerticle that spins up an HTTP server.

public class TestVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // we are creating a simple HTTP server that listens
        // on port 9090 using the verticle
        vertx.createHttpServer().requestHandler(req -> {
            req.response().end("Hello from Vert.x!");
        }).listen(9090, result -> {
            if (result.succeeded()) {
                System.out.println("HTTP server started on port 9090");
            } else {
                System.out.println("Failed to start HTTP server");
            }
        });
    }
}

Having created this verticle, we need to run it. We can create a main method (the entry point) with the following snippet, which deploys the verticle.

public static void main(String[] args) {
    Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(10));
    vertx.deployVerticle(new TestVerticle());
}

If you open your browser and navigate to http://localhost:9090, you'll see the message "Hello from Vert.x!".

Event Bus

The event bus in Vert.x is a lightweight distributed messaging system that allows different parts of your application, or different applications and services, to communicate with each other in a loosely coupled way. The event bus supports the following communication patterns:

  • Publish-subscribe messaging: This is a one-to-many communication pattern where a sender publishes a message to a destination, and multiple receivers can subscribe to the same destination and receive the message. The sender does not need to know who the receivers are, and the receivers do not need to know who the sender is. This pattern is useful for broadcasting events or notifications to multiple interested parties.
  • Point-to-point messaging: This is a one-to-one communication pattern where a sender sends a message to a destination, and only one receiver can consume the message. The sender does not need to know the identity of the receiver, and the receiver does not need to know the identity of the sender. This pattern is useful for load balancing or distributing tasks among multiple workers.
  • Request-response messaging: This is a variation of the point-to-point pattern, where a sender sends a message to a destination and expects a reply from the receiver. The sender can specify a timeout for the reply and handle the success or failure of the request. This pattern is useful for implementing remote procedure calls or service invocations.

The event bus allows verticles to communicate with each other via messages.

Messages have a body and optional headers for storing metadata. Messages can be of any type, such as strings, buffers, JSON objects, etc. Vert.x provides a set of default codecs for encoding and decoding common types of messages and also allows the developer to register custom codecs for their own types. Codecs are responsible for serializing and deserializing messages, and they can also transform messages during the process. Codecs are useful for handling different formats of messages, such as XML, CSV, Protobuf, etc.

The event bus supports various types of transports, such as TCP, HTTP, WebSocket, etc. The event bus also enables Vert.x to scale across multiple nodes in a cluster by using a cluster manager, such as Hazelcast or Infinispan, to discover and connect the nodes. The event bus ensures that the messages are delivered reliably and consistently across the cluster.
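The publish-subscribe and point-to-point patterns above can be sketched in plain Java. This is a toy illustration of the messaging patterns, not Vert.x's EventBus API, and the names (`ToyEventBus`, `consumer`, `publish`, `send`) are invented for the sketch: `publish` delivers a message to every consumer registered on an address, while `send` delivers it to exactly one.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class ToyEventBus {
    // address -> list of registered consumers
    private final Map<String, List<Consumer<String>>> consumers = new HashMap<>();

    void consumer(String address, Consumer<String> handler) {
        consumers.computeIfAbsent(address, a -> new ArrayList<>()).add(handler);
    }

    // publish-subscribe: every consumer on the address receives the message
    void publish(String address, String message) {
        consumers.getOrDefault(address, List.of()).forEach(c -> c.accept(message));
    }

    // point-to-point: exactly one consumer receives the message
    // (Vert.x picks consumers round-robin; this sketch just takes the first)
    void send(String address, String message) {
        List<Consumer<String>> list = consumers.getOrDefault(address, List.of());
        if (!list.isEmpty()) list.get(0).accept(message);
    }

    public static void main(String[] args) {
        ToyEventBus bus = new ToyEventBus();
        bus.consumer("news", m -> System.out.println("subscriber A: " + m));
        bus.consumer("news", m -> System.out.println("subscriber B: " + m));
        bus.publish("news", "hello"); // both subscribers receive it
        bus.send("news", "task");     // only one subscriber receives it
    }
}
```

Request-response is the same point-to-point exchange with a reply address attached to the message, which the real event bus manages for you.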


In conclusion, Vert.x is a toolkit for building reactive applications on the JVM that are resource-efficient, concurrent, and flexible. It supports multiple languages, such as Java, Kotlin, Scala, Ruby, and JavaScript, and provides a rich ecosystem of modules and clients for various tasks, such as web APIs, databases, messaging, cloud, security, and more.

It uses the concepts of the event loop, worker threads, verticles, and the event bus to handle a lot of concurrency using a small number of kernel threads, which significantly increases the efficiency and throughput of the application. Vert.x also gives developers a lot of flexibility and choice in designing and implementing applications according to their needs and preferences. It is one of the fastest and most scalable JVM toolkits available today, and it is well suited to building reactive, distributed, and microservice-based applications.

If you want to learn more about Vert.x, you can check out the official website or the documentation.

.   .   .

