Flavors of Concurrency in Java: Threads, Executors, ForkJoin, and Actors
December 10, 2014


What Is Java Concurrency?

Java concurrency means performing multiple computations during the same time frame. In Java, concurrent execution is centered around threads: independent paths of execution within a single process.

In this post, we examine code that implements a concurrent solution to a sample problem and talk about what’s good about each approach, what its potential drawbacks are, and what pitfalls may lie in wait for you.

We’ll go over the following methods and approaches to enable concurrent processing and asynchronous code:

  • Bare Threads
  • Executors & Services
  • ForkJoin framework and parallel streams
  • Actor model

We won't be looking at fibers, also known as lightweight threads, but you can find a great explanation of what fibers are and when to use them here.

To make it more interesting, I didn’t just provide any kind of code to illustrate each approach; I used a common task, so the code in every section is more or less equivalent. Oh, and don't take the code for anything more than an illustration: most of the initialization code shouldn't live in the same method, and in general these aren't production-level examples. If you're interested in the Java 8 vs Java 7 performance benchmark blog I wrote, you should read that as well!


Why is Java Concurrency Hard for Developers?

We live in a world where multiple things happen at the same time. Naturally, the Java programs we write reflect this trait and are capable of doing things concurrently. Except for Python code, of course, but even then you can use Jython to run your programs on the JVM and make use of the fabulous power of multiprocessor machines.

However, the complexity of concurrent programs does not sit well with the limited capacity of human brains. By comparison, we downright suck: we are not built to think about multithreaded programs, assess concurrent access to limited resources, or predict where errors or at least bottlenecks will occur.

As with many hard problems, humanity has come up with a number of solutions and models for concurrent computation that emphasize different parts of the problem and make different tradeoffs when it comes to achieving parallelism.

The task: implement a method that takes a query string and a list of strings corresponding to the query URLs of some search engines, issues HTTP requests for the query against each engine, and returns the first result available, preferably as soon as it is available.

In case everything goes wrong, it’s acceptable to throw an exception or return null. I just tried to avoid looping forever waiting for the result.

Quick note: I cannot go really deep into the details of how multiple threads communicate or into the Java Memory Model at this time, but if you have a strong thirst for such things, you can start with my previous post on the subject: testing concurrency with JCStress harness.

So here we go, let’s start with the most straightforward and hardcore way to do concurrency on the JVM: managing bare threads by hand.


Java Concurrency Method 1: Bare Threads

Unleash your inner code naturalist with bare threads! Threads are the most basic concurrency primitive there is. Java threads are mapped to operating system threads, and every Thread object represents one of these lower-level computation threads.

Naturally, the lifecycle of a thread is taken care of by the JVM and scheduling is not your concern as long as you don’t have to make Threads communicate with each other.

Every thread gets its own stack space, consuming a part of the designated JVM process memory.

The Thread API is pretty straightforward: you feed it a Runnable and call .start() to begin the computation. There’s no good API to stop a Thread; you have to implement that yourself, typically by communicating through some kind of boolean flag.
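
A minimal sketch of that boolean-flag pattern (the class and field names here are mine, purely for illustration):

// Cooperative cancellation via a volatile flag: the worker checks the flag on
// every iteration and exits its loop once someone flips it.
class PollingWorker implements Runnable {
  private volatile boolean running = true; // volatile: the change is visible to the worker thread

  @Override
  public void run() {
    while (running) {
      // do one unit of work here, e.g. poll a queue or check a resource
    }
  }

  void stop() {
    running = false; // the worker finishes its current iteration and stops
  }
}

You’d start it with new Thread(worker).start() and later call worker.stop(), followed by join() on the thread if you want to wait for it to wind down.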

In the following example, we create a new Thread per search engine to be queried. The result of the query is set into an AtomicReference, which doesn’t require any locking to ensure that only a single write happens. Here we go!


private static String getFirstResult(String question, List<String> engines) {
  AtomicReference<String> result = new AtomicReference<>();
  for (String base : engines) {
    String url = base + question;
    new Thread(() -> {
      result.compareAndSet(null, WS.url(url).get());
    }).start();
  }
  while (result.get() == null); // wait for some result to appear
  return result.get();
}
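
By the way, the while loop at the end spins and burns a CPU core while waiting. A sketch of one way around that, using a CountDownLatch to park the caller until the first result arrives (the method name is mine, and this is an illustration rather than code from the original example):

private static String getFirstResultLatch(String question, List<String> engines)
    throws InterruptedException {
  CountDownLatch firstResult = new CountDownLatch(1);
  AtomicReference<String> result = new AtomicReference<>();
  for (String base : engines) {
    String url = base + question;
    new Thread(() -> {
      if (result.compareAndSet(null, WS.url(url).get())) {
        firstResult.countDown(); // signal that a result is available
      }
    }).start();
  }
  firstResult.await(); // blocks instead of spinning
  return result.get();
}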


The main benefit of using bare threads is that you are closest to the operating system / hardware model of concurrent computations, and the best thing is that this model is quite simple: multiple threads run, communicate via shared memory, and that’s it.

The biggest disadvantage of managing threads yourself is that it’s so easy to go overboard with the number of threads you spawn. Threads are costly objects that take a decent amount of memory and time to create. Paradoxically, by having too few threads you’ll sacrifice potential parallelism, but having too many will probably lead to memory issues and more complex scheduling.

However, if you need a quick and simple solution, you can definitely use this approach without much hassle.


Java Concurrency Method 2: Executors and CompletionServices

Another option is to use an API that manages groups of threads behind the scenes. Luckily, the JDK offers us exactly that with the Executor interface, and its definition is quite simple:


public interface Executor {
  void execute(Runnable command);
}


It abstracts away the details about how the Runnable will be processed. It just says, “Simple developer! You’re nothing but a bag of meat, give me the task, I’ll handle it.”

And what’s even cooler is that the Executors class offers a bunch of factory methods for creating thread pools and executors with sane configurations. We’ll go with newFixedThreadPool(), which creates a predefined number of threads and doesn’t allow the pool to grow over time. This means that when all threads are in use, submitted commands have to wait in a queue, but that queue is also handled by the executor itself.

On top of that, there’s ExecutorService, which gives you control over the executor’s lifecycle, and CompletionService, which abstracts away even more details and acts like a queue of finished tasks. Thanks to that, we don’t have to worry about getting only the first result.

The call to service.take() below blocks until some task has finished and hands us its Future, one result at a time.


private static String getFirstResultExecutors(String question, List<String> engines) {
  ExecutorCompletionService<String> service =
      new ExecutorCompletionService<>(Executors.newFixedThreadPool(4));

  for (String base : engines) {
    String url = base + question;
    service.submit(() -> {
      return WS.url(url).get();
    });
  }
  try {
    return service.take().get();
  } catch (InterruptedException | ExecutionException e) {
    return null;
  }
}


Going with executors and executor services is the right way if you want precise control over how many threads your program creates and how exactly they behave. For example, one important question to ponder is: what happens to a task when all threads are busy doing other things? Do we spawn a new worker to handle it, up to some maximum number of threads, or without limit? Do we put the task into a queue? What if the queue is full? Do we grow the queue without bound?

Thanks to the JDK, many configurations that answer these questions are already available with sensible names for you, like the Executors.newFixedThreadPool(4) above.
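
When the preconfigured factories don’t fit, ThreadPoolExecutor lets you answer each of those questions explicitly. A minimal sketch, with the pool sizes, queue bound, and rejection policy picked arbitrarily for illustration (everything here lives in java.util.concurrent):

// 2 core threads, growing to at most 8; idle extra threads die after 60 seconds;
// at most 100 queued tasks; when the queue is full, the submitting thread runs
// the task itself instead of having it dropped.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    2, 8,
    60, TimeUnit.SECONDS,
    new ArrayBlockingQueue<>(100),
    new ThreadPoolExecutor.CallerRunsPolicy());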

The lifecycle of threads and services is also mostly handled for you, with options to shut things down appropriately. The only downside is that the configuration could be simpler and more intuitive for beginners. Then again, you’ll hardly find anything simple when talking about concurrent programming.
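
For example, shutting a pool down cleanly takes just a couple of calls; a sketch, with the timeout chosen arbitrarily:

ExecutorService pool = Executors.newFixedThreadPool(4);
// ... submit tasks ...
pool.shutdown();                       // stop accepting new tasks
try {
  if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
    pool.shutdownNow();                // interrupt whatever is still running
  }
} catch (InterruptedException e) {
  pool.shutdownNow();
  Thread.currentThread().interrupt();
}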

All in all, I personally think that for a larger system you want to use the executors approach.


Java Concurrency Method 3: Parallel Streams

Parallel streams were added to Java 8, and since then we have a straightforward way to achieve parallel processing of collections. Together with lambdas, they form a powerful tool for organising concurrent computation.

There are a couple of catches that can get you if you decide to go this way. First of all, you’ll have to grasp some functional programming concepts, which is actually more of a benefit than a downside. Next, it’s difficult to be sure that the parallel stream is actually using more than a single thread for the operations; that’s left for the stream implementation to decide. And if you don’t control the source of the stream, you can never be sure what it does.
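
One throwaway way to see what is actually going on is to log the thread names as elements flow through the pipeline; a quick sketch of mine, not from the original example:

engines.parallelStream()
       .peek(e -> System.out.println(Thread.currentThread().getName() + " handles " + e))
       .forEach(e -> { /* the actual work would go here */ });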

Additionally, you have to remember that, by default, parallelism is achieved by using the ForkJoinPool.commonPool(). The common pool is managed by the JVM and is shared across everything that runs inside the JVM process. This simplifies configuration to the point where you don’t have to worry about it at all.


private static String getFirstResult(String question, List<String> engines) {
  // get an element as soon as it is available
  Optional<String> result = engines.stream().parallel().map((base) -> {
    String url = base + question;
    return WS.url(url).get();
  }).findAny();
  return result.get();
}


Looking at the example above, we don’t really care where or by whom the individual tasks are completed. However, it also means that with one careless move you can find yourself with multiple stalled parts of your application without knowing it. In another post on the subject of parallel streams, I described the issue in more detail, and while there is a workaround, it’s not the most obvious solution in the world.
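
For reference, the trick that usually gets cited, which may or may not be exactly the workaround described in that post, is to run the whole pipeline inside a task submitted to a dedicated ForkJoinPool, so it doesn’t compete for the common pool. A sketch, reusing question and engines from the example above, with the pool size picked for illustration:

ForkJoinPool searchPool = new ForkJoinPool(4);
try {
  ForkJoinTask<Optional<String>> task = searchPool.submit(() ->
      engines.parallelStream()
             .map(base -> WS.url(base + question).get())
             .findAny());
  return task.get().orElse(null);
} catch (InterruptedException | ExecutionException e) {
  return null;
} finally {
  searchPool.shutdown();
}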

ForkJoin is a great framework, written and preconfigured by people much smarter than me. So that would be my first choice if I had to write a small program with some parallel processing.

The biggest downside is that you have to foresee the complications it might produce, which is not easy without a deep understanding of how the JVM works as a whole. And this most probably comes with experience only.


Java Concurrency Method 4: Actors

Actors represent a model that is a somewhat odd addition to the group of approaches we’re looking at in this post. There is no implementation of actors in the JDK, so you’ll have to include a library that implements them for you.

In short, in the actor model you think of everything as an actor. An actor is a computational entity, like a thread in the first example above, that can receive messages from other actors (naturally, because everything is an actor).

In response to a message it can send messages to other actors or create new ones and interact with them, or just change its own internal state.

Pretty simple, but it’s a very powerful concept. The lifecycle and the message passing are handled by the framework for you; you just specify what the units of work should be. Additionally, the actor model emphasizes avoiding global state, which comes with several benefits: you often get supervision strategies like retries for free, much simpler distributed system design, fault tolerance, and so forth.

Below is an example of the code using Akka actors, one of the most popular JVM actor libraries with a Java API. Actually, it has a Scala API too and, in fact, Akka is the default actor library for Scala, which once had its own internal implementation of actors. Several JVM languages, for instance Fantom if you’re into that kind of stuff, have implementations of actors too. This just shows that the actor model is broadly accepted and seen as a valuable addition to a language.


static class Message {
 String url;
 Message(String url) {this.url = url;}
}
static class Result {
 String html;
 Result(String html) {this.html = html;}
}

static class UrlFetcher extends UntypedActor {

 @Override
 public void onReceive(Object message) throws Exception {
   if (message instanceof Message) {
     Message work = (Message) message;
     String result = WS.url(work.url).get();
     getSender().tell(new Result(result), getSelf());
   } else {
     unhandled(message);
   }
 }
}

static class Querier extends UntypedActor {
 private String question;
 private List<String> engines;
 private AtomicReference<String> result;

 public Querier(String question, List<String> engines, AtomicReference<String> result) {
   this.question = question;
   this.engines = engines;
   this.result = result;
 }

 @Override public void onReceive(Object message) throws Exception {
   if(message instanceof Result) {
     result.compareAndSet(null, ((Result) message).html);
     getContext().stop(self());
   }
   else {
     for(String base: engines) {
       String url = base + question;
       ActorRef fetcher = this.getContext().actorOf(Props.create(UrlFetcher.class), "fetcher-"+base.hashCode());
       Message m = new Message(url);
       fetcher.tell(m, self());
     }
   }
 }
}

private static String getFirstResultActors(String question, List<String> engines) {
 ActorSystem system = ActorSystem.create("Search");
 AtomicReference<String> result = new AtomicReference<>();

 final ActorRef q = system.actorOf(
   Props.create((UntypedActorFactory) () -> new Querier(question, engines, result)), "master");
 q.tell(new Object(), ActorRef.noSender());

 while(result.get() == null);
 return result.get();
}


Akka actors use the ForkJoin framework to manage their internal workers, and the code here is quite verbose. Don’t worry, most of it is the definition of the message classes, Message and Result, and of two different actors: Querier, which organises the search across all search engines, and UrlFetcher, which fetches a given URL. If there are more lines of code here, it’s because I didn’t want to inline all the things. The power of the actor model comes from the API on the Props objects, where we can define specific routing patterns, a custom mailbox for the actor, and so on. The resulting system is extremely configurable and contains very few moving parts. Which is always a great sign!
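
To give a flavour of that, here’s a sketch of the kind of configuration Props allows in the Akka 2.x Java API; the dispatcher name and the pool size are made up for illustration, and the dispatcher itself would have to be defined in your application.conf:

// Five UrlFetcher instances behind a round-robin router, running on a dedicated dispatcher.
ActorRef fetchers = system.actorOf(
    new RoundRobinPool(5).props(
        Props.create(UrlFetcher.class).withDispatcher("fetcher-dispatcher")),
    "fetcher-pool");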

One disadvantage of using the actor model is that it really wants you to avoid global state, so you have to design your application a bit differently, which can complicate migrating an existing project. At the same time, it comes with a number of benefits, so getting acquainted with a new paradigm and learning to use a new library is totally worthwhile.


Understanding Java Concurrency

What is your default way to handle concurrency? Do you understand what model of computation lies behind it, or is it just a framework with some Job or background-task objects that automagically add async capabilities to your code? In order to gather more data and find out whether I should keep exploring different approaches to concurrency in more depth, for example by writing a detailed blog post about how Akka actors work and what’s good and bad in their Java API, I’ve created a simple single-question survey for you. Please, dear reader, if you got this far, answer it too. I appreciate your interactivity!


What Is Java Concurrency: Conclusion

In this post we answered the question "What is Java concurrency?" and looked at different ways to add parallelism to your Java application. Starting with managing Java threads ourselves, we gradually moved to more advanced solutions involving executor services, the ForkJoin framework, and the actor model of computation.

Wondering what to pick when you’re facing a real-world problem? They all have their own pros and cons, and they mostly make different tradeoffs between intuitiveness and ease of use on one hand, and configurability and raw power on the other.
