Spark Java Web Framework
March 23, 2016

Guide to the Spark Framework

Java Frameworks
Java Application Development

A few days ago, I took Spark out for a test drive. I created a small web application that does nothing really functional but explores the features offered by the Spark framework, so I could get more comfortable with its API and see if Spark fits my style.

In this tutorial, I want to share the takeaways from exploring the Spark framework for Java. I'll start with a brief description of the Spark Java framework and the initial setup, then cover how to specify routes and how to work with requests and responses, and end with examples of using static files, filters, and response transformers.

What Is the Spark Framework?

Spark Java is a free, open-source web application framework designed to help users quickly create web applications. It was created in 2011 by Per Wendel as a simple and expressive alternative to other popular frameworks like Spring, Play, and JAX-RS.

Spark Usage Statistics

According to our 2017 Java Developer Productivity Report, Spark Java (and other lightweight frameworks like Dropwizard and Ratpack) accounted for only 4% of overall framework usage. In a 2015 survey on the Spark website, 50% of users reported using Spark to create REST APIs, while another 25% used it to create websites.

Setting Up Spark Framework

I started off with a fresh Gradle Java project. It was really straightforward to get the project up to minimal Hello World functionality. If you're curious about the end result or want to clone the project and try Spark yourself, it's available on GitHub: spark-intro. All you need to do to get it up and running is clone the project, run the Gradle build, and then run the application using the built jar.


git clone https://github.com/shelajev/spark-intro
./gradlew build
java -jar ./build/libs/spark-intro-1.0-SNAPSHOT-all.jar


Now to use Spark, you just need to declare a couple of dependencies, shown below.


dependencies {
   compile 'com.sparkjava:spark-core:2.3'
   compile 'com.sparkjava:spark-template-thymeleaf:2.3'
}


In fact, if you just want a regular web app, you can make do with just the spark-core library. Thymeleaf is a template library for producing HTML output more easily than writing HTML by hand into the response object.

Turning Your Spark Java Application into a Spark Web Application

Now, to turn your Java application into a web application, you just need to register handlers on some URLs using the Spark Java methods.


import spark.Request;
import spark.Response;

import static spark.Spark.get;
import static spark.Spark.staticFileLocation;

public class SparkApplication {

  public static void main(String[] args) {
    staticFileLocation("/public");
    get("/hello", SparkApplication::helloWorld);
  }

  public static String helloWorld(Request req, Response res) {
    return "hello world!";
  }
}


The example above is a fully featured Spark application, which when run will start an embedded Jetty server. When you visit localhost:4567/hello, you'll see the hello world output.

Spark Framework or Spark Library?

This is actually pretty amazing, if you ask me. The best part about Spark is that it has a very consistent API, which consists of just calling static methods from within your code. Indeed, it is a framework, in the sense that you specify the code to run and it wraps it in its own functionality, but it really feels more like a library with a touch of magic happening behind the scenes. You control what route mappings you wish to declare, what code is responsible for handling the requests, and how you'd like to handle everything else.

So far, Spark seems excellent for tiny applications or API backends. Let's look at the other Spark features that are necessary to create a web application. Namely, a web framework should make it easy to specify the routes from URLs to the code, and offer a nice API for request and response handling, sessions, request filters, and output transformers. Also, a pluggable choice of templating library is a great bonus!

Specifying Routes in the Spark Java Framework

To specify a mapping between the URLs that your server is handling and the Java code that actually handles the request, you need to specify routes. A route consists of the following pieces:

  • A verb (get, post, put, delete, head, trace, connect, options)
  • A path, which can include parameter placeholders or wildcards: /hello, /users/:name, /say/*/to/*
  • A callback, which is just a Java function of the request and response pair: (request, response) -> { }

To specify a route, you call the static method whose name coincides with the HTTP verb you want to handle, for instance, spark.Spark.get("/hello", SparkApplication::helloWorld); in the example above. Routes are matched in the order in which they are specified, and the first route that matches the incoming request will be executed. All in all, route specification in Spark is simple and quite flexible.
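To get a feel for how placeholders and the first-match-wins rule behave, here is a small plain-Java sketch. This is an illustration of the matching rules only, not Spark's actual matcher (Spark's wildcard handling is more capable than this single-segment version):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RouteMatchSketch {
    // Returns the extracted path parameters if the pattern matches, or null otherwise.
    static Map<String, String> match(String pattern, String path) {
        String[] p = pattern.split("/");
        String[] u = path.split("/");
        if (p.length != u.length) return null;
        Map<String, String> params = new HashMap<>();
        for (int i = 0; i < p.length; i++) {
            if (p[i].startsWith(":")) params.put(p[i].substring(1), u[i]); // placeholder
            else if (p[i].equals("*")) continue;                           // wildcard segment
            else if (!p[i].equals(u[i])) return null;                      // literal must match
        }
        return params;
    }

    public static void main(String[] args) {
        // Routes are tried in declaration order; the first match wins.
        List<String> routes = Arrays.asList("/hello", "/users/:name", "/say/*/to/*");
        for (String route : routes) {
            Map<String, String> params = match(route, "/users/alice");
            if (params != null) {
                System.out.println(route + " matched with params " + params);
                break;
            }
        }
    }
}
```

Running this prints that /users/:name matched with the name parameter bound to alice, while the earlier /hello route was skipped because it doesn't match.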

Model Your Definitions for Long-Term Success

However, if you're not careful, you might lose yourself in these definitions as your application grows. I believe that an external file for the routes, the way the Play framework does it, is a cleaner way to define them. Or you could go full convention over configuration and put all the routes in annotations on the actual classes, like the Spring framework does.

Working With Request and Response Objects

Now we're mostly done with the web-server part. The application is up and running and we can redirect the execution flow to a particular class or method of our choosing. So here comes the most important part of any web framework: working with the request and response objects.

Setting the Response Object

Let's start with the response, because it's simpler. Naturally, the response allows you to set the status, the headers, and the content of the body, or to redirect the browser to another page. However, working with the response objects directly is not the most convenient way of serving content. That's why you most probably want either to provide a response transformer, say, to convert the data you want to send into a different format like JSON, or to render templates.
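As a quick illustration of those operations, here's a runnable sketch. The method names status, header, and redirect mirror Spark's Response API, but the tiny stand-in Response class below is mine, added only so the example runs without a server:

```java
import java.util.HashMap;
import java.util.Map;

public class ResponseSketch {
    // Minimal stand-in with the same method names as spark.Response (illustration only).
    static class Response {
        int status = 200;
        Map<String, String> headers = new HashMap<>();
        String redirectedTo;

        void status(int code) { this.status = code; }
        void header(String name, String value) { headers.put(name, value); }
        void redirect(String location) { this.redirectedTo = location; this.status = 302; }
    }

    // A handler might set a header, then either redirect anonymous users or serve content.
    static String handle(Response res, boolean loggedIn) {
        res.header("Cache-Control", "no-store");
        if (!loggedIn) {
            res.redirect("/login");
            return "";
        }
        res.status(200);
        return "welcome back!";
    }

    public static void main(String[] args) {
        Response res = new Response();
        System.out.println(handle(res, false) + " -> status " + res.status);
    }
}
```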

Using Thymeleaf Templates

In the sample application I used Thymeleaf templates, because Thymeleaf is amazing. To use templates, you need to provide Spark with a template engine; engine wrappers are available as libraries for almost any template library imaginable. You'll also need to rewrite your handlers to return ModelAndView objects. Here's a snippet from our sample application:


public static void main(String[] args) {
  get("/hello", SparkApplication::helloWorld, new ThymeleafTemplateEngine());
}

public static ModelAndView helloWorld(Request req, Response res) {
  Map<String, Object> params = new HashMap<>();
  params.put("name", req.queryParams("name"));
  return new ModelAndView(params, "hello");
}


The templates for the Thymeleaf template engine are located by default in the resources/templates directory, and the ModelAndView object references the template by its name relative to that directory. The template itself is just a simple example, but the engine supports all the glorious features that Thymeleaf offers.

Hello, [[${name}]]!
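For reference, a minimal resources/templates/hello.html for the "hello" view name used above could look like the following. This file is a hypothetical illustration, not the one from the sample project:

```html
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
  <body>
    <!-- th:text replaces the element's body with the "name" model attribute -->
    <p th:text="'Hello, ' + ${name} + '!'">Hello, placeholder!</p>
  </body>
</html>
```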

Most probably, if you intend to use Spark to serve an actual application rather than an API backend, you'll use some sort of template engine. The request object is not that interesting by itself; it's not like you can come up with a new version of the class to model the HTTP request.

Using querymaps for Handling Parameters

However, in addition to the normal API for accessing the query parameters, body, headers, and attributes that you can see below, Spark has a cool API called query maps.


request.body();
request.attribute("name"); 
request.headers("name");
request.params("name");


A query map takes a parameter name and gives you a collection of the parameters with that prefix. This lets you group related parameters, say user[name] and user[age], into a single map.


request.queryMap("user").get("age").integerValue();
request.queryMap("user").toMap();


This makes handling parameters much easier, since you can always treat them as maps. What I really like is that there are no implicit parameter conversions going on, so you're fully in charge of how to process the query. However, the downside is that you won't be able to add validation code as easily as you might with alternative approaches.
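Conceptually, the grouping that query maps perform can be sketched in plain Java. This is an illustration of the idea, not Spark's implementation: parameters named user[name] and user[age] collapse into one map keyed by the inner name.

```java
import java.util.HashMap;
import java.util.Map;

public class QueryMapSketch {
    // Collects parameters named "prefix[inner]" into a single map keyed by "inner".
    static Map<String, String> group(Map<String, String> query, String prefix) {
        Map<String, String> grouped = new HashMap<>();
        for (Map.Entry<String, String> e : query.entrySet()) {
            String key = e.getKey();
            if (key.startsWith(prefix + "[") && key.endsWith("]")) {
                String inner = key.substring(prefix.length() + 1, key.length() - 1);
                grouped.put(inner, e.getValue());
            }
        }
        return grouped;
    }

    public static void main(String[] args) {
        Map<String, String> query = new HashMap<>();
        query.put("user[name]", "alice");
        query.put("user[age]", "30");
        query.put("debug", "true");
        // Only the user[...] entries survive the grouping.
        System.out.println(group(query, "user"));
    }
}
```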

Static Files, Filters, and Response Transformers

First of all, let's talk about filters. More often than not, some functionality in a web application cuts across all entry points. For example, you want to check if the user is logged in, log the request, or set some data into ThreadLocal storage so that code further down the line has easy access to it. Or perhaps you just want to compress the results using gzip. All these cases require implementing horizontal functionality across the whole app. The Spark API for filters is really consistent with the rest of the framework.

Using before() and after() Methods

Spark offers the before() and after() methods where you can specify the logic for the requests as shown:


before((request, response) -> {
  log.trace("request: {}", request);
});

after((request, response) -> {
  response.header("Content-Encoding", "gzip");
});


The example above will make Spark log the requests and enable the gzip compression on the output.
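A common use of a before() filter is access control, where Spark's halt() call stops the request right there. The control flow can be sketched in plain Java; the classes below are a stand-in for the filter chain, not Spark's code:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;

public class FilterChainSketch {
    static class HaltException extends RuntimeException {
        final int status;
        HaltException(int status) { this.status = status; }
    }

    // Runs each before-filter, then the handler; any filter can halt the request early.
    static String run(List<Runnable> beforeFilters, Supplier<String> handler) {
        try {
            for (Runnable filter : beforeFilters) filter.run();
            return handler.get();
        } catch (HaltException halt) {
            return "halted with status " + halt.status;
        }
    }

    public static void main(String[] args) {
        boolean loggedIn = false;
        List<Runnable> filters = Arrays.asList(() -> {
            if (!loggedIn) throw new HaltException(401); // like spark.Spark.halt(401)
        });
        System.out.println(run(filters, () -> "secret page"));
        System.out.println(run(Collections.emptyList(), () -> "public page"));
    }
}
```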

Adding a ResponseTransformer Instance

In general, Spark's API for various things is pretty straightforward. For example, adding a ResponseTransformer instance to the route method will apply the transformation to the returned object. Here's an example of transforming the output into a JSON object.


Gson gson = new Gson();
get("/hello", (request, response) -> "Hello World", gson::toJson);


The ResponseTransformer interface has just one method, so you won't be muddling the code with complicated solutions.


public interface ResponseTransformer {
  String render(Object model);
}

 

Serving Static Files

Serving static files is even easier. Usually you'll have two sources of static files: internal -- packaged with the app, and external -- provisioned on the server.


staticFileLocation("/public");
externalStaticFileLocation("/path/to/static/files");


It feels amazing, because when working with Spark Java your code is really concise, flexible, and easy to understand even at first glance.

What if I Need More?

Spark is a tiny web framework, which is both its main strength and its main weakness. It does what it claims to do really well. Spark has an API which is consistent, simple, understandable, and flexible for handling requests, responses, filters, and so on. Spark is amazing for creating small web applications or API backends. It doesn't add much black magic to your code, so you always know what to expect from the application without any surprises. At the same time, it's extensible, and you can plug in any template engine of your liking.

Other Considerations

However, if you're writing a more substantial web application, you'll most probably want to consider other aspects, including the database, validation, web-service invocations, NoSQL databases, etc. In that case, I'd prefer something that comes with batteries included, such as the Play framework or the Spring framework. However, for a simple API endpoint, Spark really managed to surprise me with how awesome it is. No wonder the 2015 survey by Spark showed that 50% of Spark users use Spark to create REST APIs.

Additional Resources

Working in microservices? Finding the right framework is easier said than done. Our article explores a few of the most popular microservices frameworks and their relative strengths and weaknesses.

Read the Article