Architecting Spring services


Bruno Oliveira

Posted on October 7, 2020


[disclaimer: this post will go in depth into many aspects of one possible type of architecture for microservices, and will use Java and SpringBoot to showcase examples. It will not show a full and complete working SpringBoot setup and it will assume that basic principles can be applied to any technology stack. The foundational ideas are what matters most from an architecture point of view]

Introduction

Microservices are best described as an architectural style that structures an application as a collection of services that have the following characteristics:

  • Loosely coupled;
  • Highly maintainable and easy to test;
  • Owned by a small team;
  • Structured around business concepts;
  • Allow for structured and independent deployment;

It's an architectural style that has gained traction during recent years, with many companies sharing their experiences with this architectural pattern (see Uber for example) on their technical blogs, and many whitepapers praising the advantages related to scalability, ease of development, and adaptability to changing requirements when compared to the traditional monolith approach.

This post will attempt to describe one real-world example of how to architect microservices with Java and SpringBoot, while taking detours where needed, either to explain particular Spring concepts or abstractions, or to frame how the example architecture matches the ideas that compose the pattern itself. This is important because it gives readers a real frame of reference as to where a real-world architecture stands when compared to the idealized, theoretical foundations of the pattern.

The differences between theory and practice are where the fun is at and understanding how some basic principles present in frameworks like Spring can actually facilitate certain types of development can pay off when looking at things from a different lens. Before we begin, we can first look at why it can be a good idea to use microservices.

Framing microservices in the modern world

Everything is fast-paced and ever-evolving nowadays, and code is no exception. In fact, according to a post at ArsTechnica, developers claim to manage 100x more code today than they did in 2010. That's a huge increase. And the tendency is to keep growing!
With more and more companies jumping on the digital transformation bandwagon, we now see developers and developer job positions at companies whose primary focus of business is not even technical: retail, insurance and even food business are now contributing to the code footprint of today.

This means that the original approach of coding in a monolith no longer scales well enough to meet these demands. Speed and innovation have become key factors for success, and this is where a microservices architecture can provide that much-needed competitive edge. Let's now delve into the differences between microservices and a traditional monolith approach.

Differences between a monolith and a microservices approach

When identifying the differences between these two patterns, it's important to note that they appeared at different times and solved different problems for managing complexity.

Using a monolith-style approach means that the entire codebase exists, lives, and grows as a single, unified, deployable unit.
There can be a clean architecture internally, and each module can be well structured and written, but the whole application and codebase can only be deployed as a single unit.

Let's look at a shopping cart application, for example.

Monolith lens

There are obvious pieces of logic that can be seen as orthogonal in the application: user management, a payment system, displaying lists of products, updating product stocks, the status of the shopping cart, managing the UI, etc.

When developed as a monolith, we have a single system that is responsible for managing every single, orthogonal aspect of this application on its own. This implies that in a monolith scenario, if we would need to deploy a change concerning the payments system exclusively, the entire application would need to be deployed as a whole, because the several business concerns are part of a single, isolated context that can't exist as separate units.
This rapidly causes the codebase to grow into an unmaintainable state where adding new features becomes more and more costly over time, until a breaking point is reached where the value of shipping new features no longer outweighs the time it takes to develop and maintain them, effectively resulting in the death of the application.

Microservices lens

If we look at the same example as above from a new perspective, we see that there are many opportunities to structurally break the application apart into independent pieces that, when looked at as a whole, form the same application, but in a much more flexible way. As described above, these are some examples of the services we would have to deal with:

  • Payments service;
  • User management service;
  • Shopping Cart service;
  • Product listing service;
  • (...)

Since each of these services is concerned with a specific context of the business logic, this means that the composition of all these services together will form our application.

The advantage of breaking down the monolith is that we can see our application as a set of independent running services:

PaymentsService, UserManagementService, ShoppingCartService, ProductListingService, ...

When using this setup, if we decide to make changes to the way the application handles payments, we can simply adapt the code and internal logic in the PaymentsService, we can redeploy only that particular service, and things will be now running on the updated service. It's safer, faster and more easily managed, since we only needed to concern ourselves with the piece of logic handling exactly the business domain that needed changing.

Going on a side-note here, this style of thinking about microservices can be seen as a "domain-oriented microservice architecture", because each microservice will form a bounded context, which means that each service will handle a specific part of the business domain.

It's also important to note that microservices are very well suited to a CI/CD environment due to the fast feedback cycle and higher modularity and availability. Today, it's very easy to ship APIs as Docker services via CI/CD pipelines and deliver code much faster than with a monolith. This makes it almost a requirement nowadays to develop architectures using microservices where it is allowed and valid to do so.
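As a sketch of that workflow, a minimal Dockerfile for shipping one of these services as a container could look like the following (the jar name and base image here are assumptions for illustration, not part of the example project):

```dockerfile
# Hypothetical minimal image for a SpringBoot fat jar
FROM openjdk:11-jre-slim
COPY target/product-listing-service.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

A CI/CD pipeline can build and push one such image per service, which is what makes independent deployment of each microservice practical.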

Not every application can be architected as microservices, but, if you can do it, then it really is the best approach.

Main components of our microservices code architecture

Now that we have seen the main advantages of using a microservices oriented architecture, it's time to start looking at the main components and abstractions we will leverage when building our own implementation of the theoretical microservices architecture model.

When designing a microservices architecture with SpringBoot and Java, some basic building blocks need to be in place. The first thing to discuss is how to actually expose the microservice architecture to be consumed by clients:

This will be done using a REST API that exposes dedicated endpoints, delegates the logic to our services (hence the name microservices), and returns a JSON payload as the response. This way, any client, be it a web app, a mobile app, etc., can simply call the API we are exposing and retrieve responses in JSON format that can then be displayed in a web page, mobile application, etc.
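For illustration, a JSON response from the /list-products endpoint could look like this (the field names are an assumption here, chosen to match the ProductDTO shown later in this post):

```json
[
  { "name": "Laptop", "stock": 12, "supplierName": "Acme Corp" },
  { "name": "Monitor", "stock": 40, "supplierName": "ScreenCo" }
]
```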

The REST API is the surface layer that interfaces between the clients and our microservices architecture, so now we need to detail how the actual microservice code will be structured.

Detailing the microservice code architecture

The architecture we will follow makes use of SpringBoot abstractions that translate very well into working production code. Let's look at a schematic of it first, and we will go into more detail on each aspect as we go along:

[Architecture diagram: client requests hit the REST resource classes, which delegate to services, which in turn use repositories to access the database]

Let's now focus on detailing the components in the image above.

Processing requests with Spring's @RestController annotation

When a request comes in from a client, the entrypoint will be what is called a "resource class". In SpringBoot, the resource class is the class that defines the endpoint URLs that will be hit when a client requests a resource.

A blueprint of such a class can look like this:

import java.util.List;

import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class ShoppingCartResource {

    private final PaymentService paymentService;
    private final ProductListingService productListingService;

    public ShoppingCartResource(PaymentService paymentService,
        ProductListingService productListingService) {
        this.paymentService = paymentService;
        this.productListingService = productListingService;
    }

    // Delegates to the product listing service and wraps the result
    // in a 200 OK response with a JSON body
    @GetMapping(value = "/list-products", produces =
        MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<List<ProductDTO>> getProducts() {
        return ResponseEntity.ok(productListingService.listProducts());
    }

    // Delegates the checkout of a given cart to the payment service
    @PostMapping(value = "/checkout/{cartId}", produces =
        MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<String> processPayment(@PathVariable String cartId) {
        return ResponseEntity.ok(paymentService.performPayment(cartId));
    }
}

There's already a lot going on at the code architecture level, even in the resource class, so let's see how it's all wired.

The class is annotated with the @RestController annotation, which tells Spring that we will be defining endpoint methods that can be hit by clients from the outside to retrieve data using our API.

We can see that the endpoint methods are annotated with annotations like @GetMapping and @PostMapping. These map a specific URL to a particular HTTP verb. If we look at /list-products above, we see that it is defined as a GET mapping, which means that requests made to this URL will be of the format:

request:
GET https://<the production deployment URL>/list-products

and we also declare that the response will be in JSON format. No other HTTP methods are allowed on this specific endpoint.

The reasoning for the @PostMapping endpoint would be similar.

Autowiring and injection of services in the resource class

We see above that the resource class we created has the services injected in the constructor.

In recent Spring versions, this does the same as annotating the constructor with the @Autowired annotation: since we are passing these services as constructor parameters to our resource class, Spring will know which services to initialize and inject so that everything works.
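To see why this style of wiring matters, here is a framework-free sketch of constructor injection (all names here are hypothetical, and Spring is deliberately left out): because the dependency arrives through the constructor, a caller can hand the class a fake implementation by hand, which is exactly what Spring automates via its application context.

```java
// A minimal, framework-free sketch of constructor injection.
interface PaymentService {
    String performPayment(String cartId);
}

// A fake implementation, such as a test might provide
class FakePaymentService implements PaymentService {
    public String performPayment(String cartId) {
        return "paid:" + cartId;
    }
}

class CheckoutHandler {
    private final PaymentService paymentService;

    // The dependency arrives through the constructor...
    CheckoutHandler(PaymentService paymentService) {
        this.paymentService = paymentService;
    }

    String checkout(String cartId) {
        return paymentService.performPayment(cartId);
    }
}

public class InjectionDemo {
    public static void main(String[] args) {
        // ...so the caller decides which implementation to plug in
        CheckoutHandler handler = new CheckoutHandler(new FakePaymentService());
        System.out.println(handler.checkout("cart-42")); // prints "paid:cart-42"
    }
}
```

This is the same shape as the ShoppingCartResource above: swap FakePaymentService for a Spring-managed bean and the class itself does not change.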

We can see that the endpoint methods delegate all the internal logic required to the services, so they act essentially as a "routing" layer between the client request and the internal logic necessary to fetch the data we need to serve the client.

These are the microservices in the architectural name of "microservices": they are in essence, very small, dedicated services that do a single thing when it comes to handling a specific business domain context.

We can already see the first level of "indirection" in our architecture:

Resource classes should not know anything in terms of business logic, this should be always delegated to services, with the endpoint methods acting as "routers" between client requests and the actual data fetching/processing.

Let's now delve deeper into what constitutes a service in our architecture.

Structure of a service

As we saw before, services are the centerpiece of this architectural pattern, and they are themselves composed of additional abstractions, as they are the components responsible for dealing with the more complex logic that is the added value of our API.

Not only are services larger in terms of size and logic, they are usually also responsible for interacting with the persistence layer of our application. This means that services handle and manage access to the database to retrieve data, and implement the business logic we need on top of that data.

Services also have the characteristic of being easily composed: like pure, functional-style building blocks, we can rely on simpler services to build and extend more complex, higher-level services.
For example, suppose that in the service for listing products, we are only interested in listing the products that are effectively available in stock.

We could write all that logic as an integral part of the ProductListingService, but that would make this service more complex to maintain, and, more importantly, the service would have two distinct responsibilities: verifying the stock for a particular product, and preparing the data for the listing. Instead, we can extract the stock-checking logic into its own separate service, called, for example, StockAvailabilityService, which can then be reused in different contexts: to apply promotions, or when the business grows and stock availability needs to be checked by geographical area, etc. Let's look at an example of the structure of a service for listing products:

import java.util.ArrayList;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.stereotype.Service;

@Service
public class ProductListingService {

    private static final Log LOG = LogFactory.getLog(ProductListingService.class);

    private final ProductRepository productRepository;

    public ProductListingService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    // Fetch all Product entities and translate them into DTOs
    public List<ProductDTO> listProducts() {
        List<ProductDTO> products = new ArrayList<>();
        productRepository.findAll().forEach(product -> products.add(
            new ProductDTO(product.getName(), product.getStock(),
                product.getSupplier().getName())));
        return products;
    }
}

The basic idea is that, if we have a stand-alone, reusable service, then we can focus on what it offers us in terms of business capabilities and see where exactly in the landscape of the business it can be reused or composed into higher level services. This is very valuable and it's one of the main arguments in favor of microservices.
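As a sketch of that composition idea, here is how a hypothetical StockAvailabilityService could be reused by a higher-level listing service (all names and signatures here are illustrative assumptions, with Spring wiring left out to keep the example self-contained):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical lower-level service with a single responsibility:
// deciding whether a given stock level counts as "available".
class StockAvailabilityService {
    boolean isInStock(long stock) {
        return stock > 0;
    }
}

// Simplified stand-in for the DTO used in the article
class ProductDTO {
    final String name;
    final long stock;
    ProductDTO(String name, long stock) { this.name = name; this.stock = stock; }
}

// Higher-level service composed from the simpler one: it lists only
// the products that the stock service reports as available.
class AvailableProductListingService {
    private final StockAvailabilityService stockService;

    AvailableProductListingService(StockAvailabilityService stockService) {
        this.stockService = stockService;
    }

    List<ProductDTO> listAvailableProducts(List<ProductDTO> allProducts) {
        return allProducts.stream()
            .filter(p -> stockService.isInStock(p.stock))
            .collect(Collectors.toList());
    }
}
```

The lower-level service stays reusable on its own (promotions, per-region stock checks, etc.), while the higher-level one expresses a business capability in terms of it.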

So, we have seen that services have the responsibility of handling the persistence layer and managing data manipulation to solve specific business needs. However, services also have another very important role:

Services have the responsibility of translating the data formats back and forth between the application layer and the persistence layer

We will now look more in-depth into the actual abstraction that services use to communicate with the persistence layer, and, once that view is complete, we can zoom back out and look at the different data formats and why they are useful in our context.

Introducing the Repository pattern as an abstraction over the persistence layer

As we have seen in the example above, our service relied on interfaces that we call repositories to abstract over the persistence layer, and to give us, as programmers, a higher level abstraction over the data model being used for our application, as well as some useful methods implemented out of the box.

A repository is an interface at the code level that gives us a set of methods, provided out of the box by SpringBoot, to retrieve, update, create and delete entities, and additionally allows us to write our own custom queries for retrieving data in more complex ways.

An example of a repository class is the following:

@Repository
public interface ProductRepository extends CrudRepository<Product, Long> {
  // this already offers methods inherited from CrudRepository:
  // findAll(), findById(), save() and deleteById()
}

In addition to this, we can have custom methods driven by custom queries to retrieve data in more complex ways:

@Repository
public interface ProductRepository extends CrudRepository<Product, Long> {
      @Query("select p from Product p where p.name in (:productNames)")
      List<Product> retrieveProductsUsingTheirNames(@Param("productNames") List<String> productNames);
}

We see that with the @Query annotation above the method specifying the query, we can retrieve data with very fine-tuned queries that meet all possible business needs.
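Besides explicit @Query annotations, Spring Data can also derive simple queries from the method name itself; a hypothetical fragment on the same repository (the method is an illustrative assumption, not part of the example project):

```java
@Repository
public interface ProductRepository extends CrudRepository<Product, Long> {
    // Spring Data derives the query from the method name:
    // select p from Product p where p.stock > :stock
    List<Product> findByStockGreaterThan(long stock);
}
```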

What is also important to note, and this will make the bridge into the next section, is that the returned type on this repository, as well as the type parameter for the CrudRepository is of type Product.

This Product represents the entity from our data model, as it is stored in the database. Obviously, this is defined via Java, as we will see in the next section.

Different data formats between the persistence layer and application layer

As seen earlier, the data formats between the database and application layer can be different, and, even when they are fully identical, it is good practice to abstract away the database representation into a format that is more tightly coupled to the application code.

Often, in the database, we store additional fields that are required for the persistence to work: primary keys, details of relationships between entities, the format of specific fields, etc.
However, when working at the application layer, that information is usually not a concern of the application itself, and we may not even want to handle all the database attributes in the application code.

To separate those concerns, the service actually "translates" between the different data representations, by using what is known as "Data Transfer Objects", or DTOs for short.

The main idea is that, if we fetch a Product from the database, it's a good plan to have a ProductDTO in place, somewhere at the service level, that we will use downstream.

Another important aspect of this separation of concerns at data level, is that we are in full control of the DTOs, so we can unit test our services by mocking the repository layer and asserting on the maintenance and construction of our DTOs, which makes these services very easy to test.

Let's look in depth at the differences between the database entity and the DTOs. We start with the database entity:

import java.io.Serializable;

import javax.persistence.*;

@Entity
public class Product implements Serializable {
    private long id;
    private String name;
    private long stock;
    private Supplier supplier;

    public Product() {
    }

    public Product(String name, long stock, Supplier supplier) {
        this.name = name;
        this.stock = stock;
        this.supplier = supplier;
    }

    @Id
    @GeneratedValue(generator = "product_id_seq", strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(name = "product_id_seq", sequenceName = "product_id_seq", schema = "product", allocationSize = 1)
    @Column(name = "id", nullable = false, updatable = false)
    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "supplier_id", referencedColumnName = "id", nullable = false)
    public Supplier getSupplier() {
        return supplier;
    }

    public void setSupplier(Supplier supplier) {
        this.supplier = supplier;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    } 

    public long getStock() {
        return stock;
    }

    public void setStock(long stock) {
        this.stock = stock;
    } 

}

We can see that this class, which is annotated with the @Entity annotation, really represents a database table. We see references to ids, database table names and many other aspects that only concern the database layer. Once more, by using annotations, we can leverage SpringBoot's capability of performing a lot of work for us, which is what allows us to use the repository pattern described earlier.

Now, obviously, when working at the application layer, we will not be concerned with any database related entities directly.

We do need to worry in the sense that the entity-annotated class will be the interface between the repository and the database, but, that's where our concerns will end. We just know, that, given a particular repository query, we will receive back in the code an instance (or a list, or set, etc...) of this entity class.

So, in order to keep the database isolated from the rest of the code and also to make our lives easier, we can create what is called a DTO class.

We can see a DTO class as the lens through which our application "sees" the database. Application code concerns itself only with the exact format required by the clients of our API, and one approach to capture that requirement in our code is to use a representation that matches it exactly. That is a DTO. Here's how the DTO for a product in our example domain could look:

public class ProductDTO {

    private String name;
    private long stock;
    private String supplierName;

    public ProductDTO() {}

    public ProductDTO(String name, long stock, String supplierName) {
        this.name = name;
        this.stock = stock;
        this.supplierName = supplierName;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getSupplierName() {
        return supplierName;
    }

    public void setSupplierName(String supplierName) {
        this.supplierName = supplierName;
    }

    public long getStock() {
        return stock;
    }

    public void setStock(long stock) {
        this.stock = stock;
    }
}

This looks simpler already! And more importantly, it is also much closer to our domain and actually easy to work with, now that all the database baggage is out of the picture. This is crucial for flexible and maintainable code. By controlling the DTO representation entirely, we can now manage things at the application layer much more easily. We can compose and extend DTOs, we can write queries as complex as we want at the database level, and we know that our DTO will be able to adapt to any needs. It puts control back in our hands.

If you recall from the beginning, our service for listing products was returning a List<ProductDTO>. This is because services care about what clients want out of the API, while repositories worry about the database. The interface between the two worlds happens at the service level.
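The translation step itself can be as small as a single mapping method inside the service. Here is a framework-free sketch using simplified stand-ins for the entity and DTO (the class shapes are trimmed and the mapper name is an assumption for illustration):

```java
// Simplified stand-ins for the JPA entity and its DTO
class Supplier {
    private final String name;
    Supplier(String name) { this.name = name; }
    String getName() { return name; }
}

class Product {
    private final String name;
    private final long stock;
    private final Supplier supplier;
    Product(String name, long stock, Supplier supplier) {
        this.name = name; this.stock = stock; this.supplier = supplier;
    }
    String getName() { return name; }
    long getStock() { return stock; }
    Supplier getSupplier() { return supplier; }
}

class ProductDTO {
    final String name;
    final long stock;
    final String supplierName;
    ProductDTO(String name, long stock, String supplierName) {
        this.name = name; this.stock = stock; this.supplierName = supplierName;
    }
}

class ProductMapper {
    // Flatten the entity: the DTO keeps only what clients need,
    // replacing the Supplier relation with a plain supplier name
    static ProductDTO toDto(Product entity) {
        return new ProductDTO(entity.getName(), entity.getStock(),
            entity.getSupplier().getName());
    }
}
```

Because this mapping is plain code with no database involved, it is trivial to unit test, which is one of the arguments for DTOs made above.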

Let's now see exactly how to approach testing for our architecture, at a unit level and at integration level.

Considerations for testing a microservices-oriented architecture

Now that we have looked at the building blocks of our architecture in relative depth, we can address testing while following a similar approach.

Unit testing services

Let's start by looking at how we can unit test services in our setup.

We saw that the service returns a DTO after manipulating a DB entity through a repository, so an ideal way to unit test a service on its own is to mock a certain response from the repository and assert on the service's return value, to ensure that its internal logic is correctly implemented. So the plan is:

  • We will mock the repository classes, and place them under our control by mocking expected responses;

  • Using this controlled data, we can also assert that our service produces the correct DTOs by asserting on its contents;

  • Assert that interactions with the repository class are correct, by verifying that the repository method is called only once;

Let's see how this looks in practice:

@DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class)
@ExtendWith(MockitoExtension.class)
class ProductListingServiceTest {

    private ProductListingService productListingService;

    @Mock
    private ProductRepository productRepository;

    private Product product;
    private Product product2;

    @BeforeEach
    void setUp() {
        this.productListingService = new ProductListingService(productRepository);

        product = new Product("test", 100L, new Supplier("someSupplier"));
        product2 = new Product("test2", 101L, new Supplier("someSupplier2"));
    }

    @Test
    void listProducts_lists_correctly() {
        // Place the repository under our control: findAll() returns our two products
        doReturn(List.of(product, product2)).when(productRepository).findAll();

        List<ProductDTO> list = productListingService.listProducts();

        verify(productRepository, times(1)).findAll();
        assertEquals(2, list.size());
    }
}

We can see that by mocking the repository, we achieve the isolation level we need by testing the service internal logic exclusively.

The annotation @ExtendWith(MockitoExtension.class) is used to make sure that mocks can be configured in the test context by simply annotating them with @Mock as seen above.

Like this, we test exactly the method we need from our service and can be sure that its internal logic is well implemented.

Let's look at integration testing with Spring's MockMvc.

Using MockMvc to write integration tests

Now, we are interested in testing our API from an endpoint perspective, i.e., from the Resource layer.

In order to do that, we can use MockMvc.

MockMvc is a Spring class we can leverage to write integration tests for our endpoints. In essence, after some minimal wiring, we can simulate a request to our API endpoints, just like it would come from an external client, and assert on its return status, value and other things to test the API in a more "e2e" fashion.

To setup MockMvc we simply need some annotations in our test class, as follows:

@AutoConfigureMockMvc
@SpringBootTest
class ProductsResourceTest {

    @Autowired
    private MockMvc mockMvc;
    (...)

We add two annotations: @SpringBootTest, which takes care of wiring the application context, repositories and everything else necessary for the application to start from within a test context, and @AutoConfigureMockMvc, which configures the MockMvc instance so that it is wired correctly.

Once this is set up, we can see that the instance of mockMvc allows us to send http requests, and assert on the responses, while keeping certain external dependencies under our control.

An example of such a request could be:

@AutoConfigureMockMvc
@SpringBootTest
class ShoppingCartResourceTest {

    @Autowired
    private MockMvc mockMvc;

    // Replaces the real service bean in the application context with a mock;
    // by default the mock returns an empty list, so the endpoint serializes "[]"
    @MockBean
    private ProductListingService productListingService;

    @Test
    void listProducts_with_empty_repository_returns_OK_with_empty_list() throws Exception {

        ResultActions result = mockMvc.perform(get("/list-products")
                .contentType(MediaType.APPLICATION_JSON_VALUE))
            .andExpect(status().isOk());

        assertEquals("[]", result.andReturn().getResponse().getContentAsString());
    }
}

Here we can exercise the entire API from a client perspective. We can obviously have more complex testing setups that seed test datasources with dummy data, connecting the Spring application to that test-context data source to simulate requests with more real-world data, but this will be the subject of a future post.

Conclusion

Hopefully, after this description of a microservices architecture, you are better equipped to see when it is a suitable solution for your project and how to structure an application around microservices, and you have had a sneak peek at the different levels of testing a microservices architecture with Spring.
Suggestions/comments welcome!
