Project Loom: New Java Virtual Threads

By using the lightweight virtual threads finalized in Java 21, developers can create high-throughput concurrent applications with less code, easier maintenance, and improved observability.

For many years, the primary way to propose changes to the Java language and the JVM has been through documents called JDK Enhancement Proposals (JEPs). These documents follow a specific format and are submitted to the OpenJDK website.

While JEPs represent individual proposals, they are frequently adopted as groups of related enhancements that form what the Java team refers to as projects. These projects are named somewhat whimsically, sometimes after objects (Loom, the device that weaves threads into cloth), places (Valhalla, the fabled hall of Norse mythology), or the technology itself (Lambda).

Project Loom’s main objective is to enhance the capabilities of Java for concurrent programming by offering two key features: efficient virtual threads and support for structured concurrency.

Java Platform Threads

Every Java program starts with a single thread, called the main thread. This thread is responsible for executing the code within the main method of your program.

By default, tasks are executed one after another: the program waits for each task to complete before moving on to the next. This can lead to a less responsive user experience when a task takes a long time (e.g., a network request).
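As a minimal sketch of this sequential model (the fetchOrders and fetchInvoices methods below are hypothetical stand-ins for slow I/O calls):

public class SequentialMain {
    public static void main(String[] args) throws InterruptedException {
        // Each call blocks the single main thread until it finishes.
        String orders = fetchOrders();     // e.g., a slow network request
        String invoices = fetchInvoices(); // does not even start until the previous call returns
        System.out.println(orders + " / " + invoices);
    }

    // Hypothetical helpers that simulate slow I/O with a sleep.
    private static String fetchOrders() throws InterruptedException {
        Thread.sleep(500);
        return "orders";
    }

    private static String fetchInvoices() throws InterruptedException {
        Thread.sleep(500);
        return "invoices";
    }
}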

Both asynchronous programming and multithreading are techniques used to achieve some level of concurrency in your code, but they work in fundamentally different ways:

Asynchronous Programming focuses on non-blocking execution of tasks. It initiates tasks without waiting for them to finish and allows the program to continue with other work. This doesn’t necessarily involve multiple threads. It can be implemented even in a single-threaded environment using mechanisms like callbacks and event loops.

While asynchronous programming offers advantages, it can also be challenging. Asynchronous calls disrupt the natural flow of execution: a task that would fit in 20 lines of straightforward sequential code can end up split into callbacks spread across several classes and threads. This complexity increases development time and makes it harder to understand the actual program behavior.
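As a rough sketch of this callback style (using only the standard CompletableFuture API; fetchUser and fetchOrder are hypothetical remote calls):

import java.util.concurrent.CompletableFuture;

public class AsyncExample {
    public static void main(String[] args) {
        // The logic is chained through callbacks instead of reading top to bottom.
        CompletableFuture
                .supplyAsync(AsyncExample::fetchUser)           // runs on a pool thread
                .thenApply(AsyncExample::fetchOrder)            // callback once the user arrives
                .thenAccept(order -> System.out.println(order)) // callback once the order arrives
                .exceptionally(ex -> { ex.printStackTrace(); return null; })
                .join(); // block only so this small demo does not exit early
    }

    // Hypothetical stand-ins for remote calls.
    private static String fetchUser() { return "user-42"; }
    private static String fetchOrder(String user) { return "order for " + user; }
}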

Multithreading focuses on concurrent execution of tasks. It creates multiple threads, each running its own instructions, so that work can be divided up and, when hardware resources permit, executed truly in parallel.

While the Java Virtual Machine (JVM) plays a crucial role in their creation and execution, Java platform threads are ultimately scheduled by the underlying operating system's scheduler.

As a result, creating and managing threads introduces overhead: startup time (around 1 ms), memory for each thread's stack (roughly 2 MB reserved), and context switching whenever the OS scheduler moves execution between threads. If a system spawns thousands of threads, this adds up to a significant slowdown.

So while multithreading offers potential performance benefits, it introduces additional complexity due to thread management and synchronization.

The question that arises is: how can we get the simplicity of synchronous code with the performance of asynchronous calls?

Why Virtual Threads?

Platform threads are expensive to create because the operating system reserves a sizable chunk of memory for each one.

This is because the thread's stack memory is reserved up front and cannot be resized, regardless of how much of it the thread actually uses for its data and call frames. On top of that, whenever the system switches between threads, it has to save and restore this per-thread state, which takes time.

In addition to the above, there is the complexity of multiple threads accessing and modifying the same data (shared resources) at the same time. This can lead to race conditions, where the outcome depends on the unpredictable timing of thread execution.
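A minimal sketch of such a race condition, using nothing beyond the standard library: two threads increment a shared counter without synchronization, and updates are lost.

public class RaceConditionDemo {
    // Shared mutable state with no synchronization.
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write, not atomic
            }
        };

        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Expected 200000, but the printed value is usually lower because
        // increments from the two threads interleave and overwrite each other.
        System.out.println("counter = " + counter);
    }
}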

To keep things simple, the most natural way to handle many tasks at once in Java is to give each task its own dedicated worker thread. This approach is called “one thread per task”.

However, with this approach we can easily hit the limit on the number of threads the operating system allows us to create.

As an example, let’s create a simple Maven module in the IntelliJ IDEA IDE, called PlatformThreads.

We create a MyThread class that extends Thread and simply logs itself and sleeps for one second:

package org.example;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Duration;

public class MyThread extends Thread {
   Logger logger = LoggerFactory.getLogger(MyThread.class);

   public void run(){
       logger.info("{} ", Thread.currentThread());
       try {
           Thread.sleep(Duration.ofSeconds(1L));
       } catch (InterruptedException e) {
           throw new RuntimeException(e);
       }
   }
}

In the Main class we have a Create_10_000_Threads method that starts 10,000 such platform threads:

package org.example;
public class Main {
   public static void main(String[] args) {
       Create_10_000_Threads();
   }
   private static void Create_10_000_Threads() {
       for (int i = 0; i < 10_000; i++) {
           MyThread myThread = new MyThread();
           myThread.start();
       }
   }
}

If we run this program, very quickly we get the following console output:

[0.854s][warning][os,thread] Failed to start the native thread for java.lang.Thread "Thread-4063"
Exception in thread "main" java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
	at java.base/java.lang.Thread.start0(Native Method)
	at java.base/java.lang.Thread.start(Thread.java:1526)
	at org.example.Main.Create_10_000_Threads(Main.java:9)
	at org.example.Main.main(Main.java:4)
[ERROR] Command execution failed.

This simple example shows how hard it is to sustain “one thread per task” with traditional platform threads.

Java Virtual Threads

Enter Java virtual threads. Previewed in Java 19 under Project Loom (JEP 425) and finalized in Java 21 (JEP 444), they reduce this overhead by being created and scheduled inside the JVM itself rather than by the operating system, which makes them far better suited to thread-per-task workloads.

Let’s see how we can create virtual threads. We create a module in the same project named VirtualThreads.

This time the MyThread class implements the Runnable interface:

package org.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Duration;

public class MyThread implements Runnable {
   Logger logger = LoggerFactory.getLogger(MyThread.class);
   @Override
   public void run() {
       logger.info("{} ", Thread.currentThread());
       try {
           Thread.sleep(Duration.ofSeconds(1L));
       } catch (InterruptedException e) {
           throw new RuntimeException(e);
       }
   }
}

In the Main class we again have a method that creates 10,000 threads, this time virtual ones:

package org.example;

public class Main {
   public static void main(String[] args) {

       Create_10_000_Threads();
   }

   private static void Create_10_000_Threads() {
       for (int i = 0; i < 10_000; i++) {
           Runnable runnable = new MyThread();
           // Thread.ofVirtual() builds and starts a virtual thread; no OS thread is dedicated to it.
           Thread.ofVirtual().start(runnable);
       }
   }
}

This time the program successfully executes with no error.

Unlike platform threads, virtual threads are created on the Java heap and are mounted onto a carrier (platform) thread only when there is work to be done.

Virtual threads Architecture

This way, we can create many virtual threads with a very low memory footprint while at the same time ensuring backward compatibility.

It’s important to note that Project Loom’s virtual threads are designed to be backward compatible with existing Java code. This means your existing threading code will continue to work seamlessly even if you choose to use virtual threads.
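For instance, code that already uses an ExecutorService can adopt virtual threads by swapping only the executor factory. The sketch below assumes Java 21 and uses the standard Executors.newVirtualThreadPerTaskExecutor():

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualExecutorDemo {
    public static void main(String[] args) {
        // One new virtual thread per submitted task; the try-with-resources block
        // waits for all tasks to finish before the executor closes.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(Duration.ofSeconds(1));
                return i;
            }));
        }
    }
}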

With traditional Java threads, when a server thread was waiting on a request, the underlying operating-system thread sat blocked and waiting too.

Since virtual threads are managed by the JVM and decoupled from the operating system, the JVM can hand the underlying compute resources to other virtual threads while one is waiting for a response.

This significantly improves the efficiency of computing resource usage.

This new approach to concurrency is made possible by two ingredients: continuations and structured concurrency.

A continuation is a programming technique that allows a program to pause its execution at a specific point and later resume at that same point, carrying the necessary context with it.

The continuation object is used to restore the thread’s state, allowing it to pick up exactly where it left off without losing any information or progress.
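Inside the JDK this is backed by an internal, non-public class, jdk.internal.vm.Continuation. The sketch below is for illustration only: it assumes the JDK 21 internal API shape, must be compiled and run with --add-exports java.base/jdk.internal.vm=ALL-UNNAMED, and may change in future releases.

import jdk.internal.vm.Continuation;
import jdk.internal.vm.ContinuationScope;

public class ContinuationDemo {
    public static void main(String[] args) {
        ContinuationScope scope = new ContinuationScope("demo");
        Continuation continuation = new Continuation(scope, () -> {
            System.out.println("step 1");
            Continuation.yield(scope);    // pause here and return control to the caller
            System.out.println("step 2"); // later resumes exactly at this point
        });

        continuation.run(); // prints "step 1", then yields
        System.out.println("paused in the middle");
        continuation.run(); // resumes after the yield and prints "step 2"
    }
}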

Structured concurrency (JEP 453, a preview feature in Java 21) aims to provide a synchronous-style syntax for working with concurrent tasks. This approach simplifies writing basic concurrent code, making it easier to understand and express for Java developers.

Structured concurrency simplifies managing concurrent tasks by treating groups of related tasks across different threads as a single unit. This approach makes error handling, cancellation, reliability, and observability all easier to manage.
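A minimal sketch of the preview API, assuming Java 21 with --enable-preview; fetchUser and fetchOrder are hypothetical blocking calls, each forked into its own virtual thread:

import java.util.concurrent.StructuredTaskScope;

public class StructuredConcurrencyDemo {
    public static void main(String[] args) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            // Each fork runs in its own virtual thread, scoped to this block.
            StructuredTaskScope.Subtask<String> user = scope.fork(StructuredConcurrencyDemo::fetchUser);
            StructuredTaskScope.Subtask<String> order = scope.fork(StructuredConcurrencyDemo::fetchOrder);

            scope.join()           // wait for both subtasks
                 .throwIfFailed(); // propagate the first failure and cancel the other subtask

            System.out.println(user.get() + " / " + order.get());
        } // leaving the block guarantees both subtasks have finished or been cancelled
    }

    // Hypothetical stand-ins for remote calls.
    private static String fetchUser() throws InterruptedException {
        Thread.sleep(100);
        return "user-42";
    }

    private static String fetchOrder() throws InterruptedException {
        Thread.sleep(100);
        return "order-7";
    }
}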

Project Loom’s innovations hold promise for various applications. The potential for vastly improved thread efficiency and reduced resource needs when handling multiple tasks translates to significantly higher throughput for servers. This translates to better response times and improved performance, ultimately benefiting a wide range of existing and future Java applications.
