How to Set Up Distributed Tracing in Microservices with Spring Boot, Zipkin, and the ELK Stack
Djomkam Kevin
Posted on August 18, 2021
Prerequisites
- A basic understanding of how to set up a microservice using Spring Boot and Spring Cloud.
- The Zipkin server installed.
- Elasticsearch, Logstash, and Kibana installed.
Installing and running the Zipkin server
There are two ways to install and run the Zipkin server:
- If you have Java 8 or higher installed, the quickest way to get started is to fetch the latest release as a self-contained executable jar:
curl -sSL https://zipkin.io/quickstart.sh | bash -s
java -jar zipkin.jar
- If you have Docker installed, you can run the latest image directly (the Zipkin UI will then be available at http://localhost:9411):
docker run -d -p 9411:9411 openzipkin/zipkin
Install Elasticsearch, Logstash, Kibana
There are also two ways to install and use the ELK stack:
- The ELK stack can be run through Docker. First create a network so the containers can reach each other, then start Elasticsearch:
docker network create es
docker run -d --name elasticsearch --net es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.7.2
Create a file logstash.conf with the following content:
input {
  tcp {
    port => 5044
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "micro-%{appName}"
  }
}
Then run the following command:
docker run -d --name logstash --net es -p 5044:5044 -v ~/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:6.7.2
Finally, run Kibana (the UI will be available at http://localhost:5601):
docker run -d --name kibana --net es -e "ELASTICSEARCH_URL=http://elasticsearch:9200" -p 5601:5601 docker.elastic.co/kibana/kibana:6.7.2
- The ELK stack can also be installed by navigating to https://www.elastic.co/downloads/, downloading the Elasticsearch, Logstash, and Kibana archives, and unzipping them.
ElasticSearch
Unzip the archive
Run bin/elasticsearch (or bin\elasticsearch.bat on Windows)
Run curl http://localhost:9200/ (or Invoke-RestMethod http://localhost:9200 in PowerShell) to verify it is running
Kibana
Unzip the archive
Open config/kibana.yml in an editor
Set elasticsearch.hosts to point at your Elasticsearch instance
Run bin/kibana (or bin\kibana.bat on Windows)
Point your browser at http://localhost:5601
Logstash
Unzip the archive
Prepare a logstash.conf config file (such as the one shown above)
Run bin/logstash -f logstash.conf
Building the Microservice architecture and integrating tracing
STEP 1: Building the config server with Spring Cloud Config
To enable the Spring Cloud Config feature for an application, first add spring-cloud-config-server to your project dependencies.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>
Then enable the embedded configuration server during application boot by adding the @EnableConfigServer annotation.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
By default, Spring Cloud Config Server stores the configuration data inside a Git repository. This is a very good choice in production, but for the purposes of this tutorial a file system backend will be enough. It is really easy to start with the config server, because we can place all the properties on the classpath. Spring Cloud Config by default searches for property sources inside the following locations: classpath:/, classpath:/config, file:./, file:./config.
We place all the property sources inside src/main/resources/config. The YAML filename will be the same as the name of the service. For example, the YAML file for discovery-service will be located at src/main/resources/config/discovery-service.yml.
And two last important things. If you would like to start the config server with the file system backend, you have to activate the native profile. This may be achieved by passing --spring.profiles.active=native during application boot or by setting it in the properties file. The server port is set with the server.port property in the bootstrap.yml file; we will use 8888, as shown in the sketch below. Finally, all other applications, including discovery-service, need the spring-cloud-starter-config dependency in order to enable the config client.
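Putting these settings together, the config server's own configuration could look like this (a minimal sketch; the service name config-service and the explicit classpath:/config search location are assumptions based on the layout described above):
server:
  port: 8888
spring:
  application:
    name: config-service
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          search-locations: classpath:/config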
STEP 2: Building the discovery service with Spring Cloud Netflix Eureka
To set up the discovery service, we also have to add the spring-cloud-starter-netflix-eureka-server dependency.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
Then enable the embedded discovery server during application boot by adding the @EnableEurekaServer annotation to the main class.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServiceApplication.class, args);
    }
}
The application has to fetch its property sources from the configuration server. The minimal configuration required on the client side is the application name and the config server's connection settings.
spring:
  application:
    name: discovery-service
  cloud:
    config:
      uri: http://localhost:8888
The configuration file discovery-service.yml should contain the configuration below and should be placed inside the config-service module. For a standalone Eureka instance we have to disable registration and registry fetching.
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
STEP 3: Building a microservice using Spring Boot and Spring Cloud
Our microservice has to perform some operations during boot. It needs to fetch configuration from config-service, register itself in discovery-service, and expose an HTTP API. To enable all these mechanisms we need to include some dependencies in pom.xml. The config client is enabled by the spring-cloud-starter-config starter, and the discovery client is enabled by including spring-cloud-starter-netflix-eureka-client and annotating the main class with @EnableDiscoveryClient. Here is the list of dependencies required for the sample microservice:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
And here is the main class of the application, which enables the discovery client for the microservice.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class SiteServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(SiteServiceApplication.class, args);
    }
}
The application has to fetch its configuration from a remote server, so we only need to provide a bootstrap.yml file with the service name and the config server URL. In fact, this is an example of the Config First Bootstrap approach, where an application first connects to the config server and takes the discovery server address from a remote property source. There is also the Discovery First Bootstrap approach, where the config server address is fetched from the discovery server; a sketch of that variant follows the bootstrap.yml below.
bootstrap.yml
spring:
  application:
    name: site-service
  cloud:
    config:
      uri: http://localhost:8888
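For comparison, a Discovery First Bootstrap configuration could look roughly like this (a sketch only, not used in this tutorial; it assumes the config server registers itself in Eureka under the name config-service):
spring:
  application:
    name: site-service
  cloud:
    config:
      discovery:
        enabled: true
        service-id: config-service
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/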
There are not many configuration settings. Here's the application's configuration file (site-service.yml) stored on the config server. It contains only the HTTP port and the Eureka URL.
site-service.yml
server:
  port: 8090
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
Here's the implementation of the REST controller class.
import java.util.List;

import com.cinema.site.model.Site;
import com.cinema.site.repository.SiteRepository;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import lombok.AllArgsConstructor;
import lombok.NonNull;

@AllArgsConstructor
@RestController
@RefreshScope
@RequestMapping("/api")
public class SiteController {

    private final SiteRepository siteRepository;

    @GetMapping("/sites/{userId}")
    public ResponseEntity<List<Site>> getSitesByUser(@NonNull @PathVariable Long userId) {
        return new ResponseEntity<>(siteRepository.findByUserId(userId), HttpStatus.OK);
    }
}
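The controller above relies on a Site entity and a SiteRepository that are not shown in this article. A minimal sketch of what they might look like is given below, assuming site-service uses Spring Data JPA (spring-boot-starter-data-jpa, which is not listed among the dependencies above); the fields of Site are purely illustrative.
package com.cinema.site.model;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import lombok.Data;

// Hypothetical entity; only the fields needed for this tutorial are assumed.
@Data
@Entity
public class Site {

    @Id
    @GeneratedValue
    private Long id;
    private String name;
    private Long userId;
}

package com.cinema.site.repository;

import java.util.List;
import com.cinema.site.model.Site;
import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data derives the findByUserId query from the method name.
public interface SiteRepository extends JpaRepository<Site, Long> {

    List<Site> findByUserId(Long userId);
}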
STEP 4: Communication between microservices with Spring Cloud and OpenFeign
Now, we will add another microservice (user-service) that communicates with site-service. The user-service needs to get the list of sites for a given user ID. That's why we need to include an additional dependency in that module: spring-cloud-starter-openfeign. Spring Cloud OpenFeign is a declarative REST client that uses the Ribbon client-side load balancer to communicate with other microservices.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
An alternative to OpenFeign is Spring's RestTemplate annotated with @LoadBalanced (a sketch of this variant follows the main class below). However, Feign provides a more elegant way of defining a client, so I prefer it over RestTemplate. After including the required dependency we should also enable Feign clients using the @EnableFeignClients annotation.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class UserServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
    }
}
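For reference, here is what the load-balanced RestTemplate alternative mentioned above could look like (a sketch; the configuration class name and package are hypothetical):
package com.cinema.user.config;

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // A @LoadBalanced RestTemplate resolves service names such as
    // http://site-service/... through the Eureka registry.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

A call such as restTemplate.getForObject("http://site-service/api/sites/{userId}", List.class, userId) would then be load balanced across the registered site-service instances.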
Now, we need to define the client interface through which user-service communicates with site-service. Every client interface should be annotated with @FeignClient. One attribute of the annotation is required: name. This name should be the same as the name of the target service registered in the service discovery. Here's the interface of the client that calls the endpoint GET /api/sites/{userId} exposed by site-service.
import java.util.List;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@FeignClient(name = "site-service", fallbackFactory = SiteClientFallbackFactory.class)
public interface SiteClient {

    @GetMapping("/api/sites/{userId}")
    List findAllByUser(@PathVariable(value = "userId") Long userId);
}
Sometimes we want a fallback method to be executed if the Feign client is not able to reach the target service. SiteClientFallbackFactory helps in achieving that. Note that, depending on the Spring Cloud version, Hystrix fallbacks for Feign may also have to be enabled explicitly, as shown after the factory below.
import java.util.ArrayList;
import java.util.List;
import org.springframework.stereotype.Component;
import feign.hystrix.FallbackFactory;
import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class SiteClientFallbackFactory implements FallbackFactory<SiteClient> {

    @Override
    public SiteClient create(Throwable cause) {
        return new SiteClient() {
            @Override
            public List findAllByUser(Long id) {
                log.error(cause.getMessage(), cause);
                return new ArrayList<>();
            }
        };
    }
}
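On the older Spring Cloud Netflix versions this fallback factory targets (it implements feign.hystrix.FallbackFactory), Hystrix support for Feign is disabled by default and is switched on with a property along these lines in user-service's configuration (an assumption that depends on your Spring Cloud version):
feign:
  hystrix:
    enabled: true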
Finally, we inject the Feign client bean into a service, which is in turn used by the REST controller. Now, we may call the methods defined inside SiteClient, which is equivalent to calling the corresponding REST endpoints.
import java.util.List;
import java.util.Optional;
import com.cinema.user.client.SiteClient;
import com.cinema.user.model.User;
import com.cinema.user.repository.UserRepository;
import org.springframework.stereotype.Service;
import lombok.AllArgsConstructor;

@AllArgsConstructor
@Service
public class UserServiceImpl implements UserService {

    private final UserRepository userRepository;
    private final SiteClient siteClient;

    @Override
    public List<User> findAll() {
        return userRepository.findAll();
    }

    // Used by the controller below to check that the user exists.
    @Override
    public Optional<User> findOne(final Long userId) {
        return userRepository.findById(userId);
    }

    @Override
    public List findAllSitesByUser(final Long userId) {
        return siteClient.findAllByUser(userId);
    }
}
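The UserService interface implemented above is not shown in the article; inferred from the calls made in the controller, it could look like this (a sketch):
package com.cinema.user.service;

import java.util.List;
import java.util.Optional;
import com.cinema.user.model.User;

public interface UserService {

    List<User> findAll();

    Optional<User> findOne(Long userId);

    List findAllSitesByUser(Long userId);
}

And here is the REST controller that uses this service: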
import java.util.List;
import java.util.Optional;

import com.cinema.user.model.User;
import com.cinema.user.service.UserService;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import lombok.AllArgsConstructor;

@AllArgsConstructor
@RestController
@RefreshScope
@RequestMapping("/api")
public class UserController {

    private final UserService userService;

    @GetMapping("/users")
    public ResponseEntity<List<User>> getUsers() {
        return new ResponseEntity<>(userService.findAll(), HttpStatus.OK);
    }

    @GetMapping("/users/sites/{userId}")
    public ResponseEntity<List> getUserSites(@PathVariable("userId") Long id) {
        Optional<User> user = userService.findOne(id);
        if (user.isPresent()) {
            return new ResponseEntity<>(userService.findAllSitesByUser(id), HttpStatus.OK);
        } else {
            return new ResponseEntity<>(HttpStatus.BAD_REQUEST);
        }
    }
}
STEP 5: Building an API gateway using Spring Cloud Netflix Zuul (edge-service)
Spring Cloud Netflix Zuul is a Spring Cloud project providing an API gateway for microservices. The API gateway is implemented inside the edge-service module. First, we add the spring-cloud-starter-netflix-zuul starter to the project dependencies.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
We also need the discovery client enabled, because edge-service integrates with Eureka in order to route requests to the downstream services. Here's its bootstrap.yml:
spring:
  application:
    name: edge-service
  cloud:
    config:
      uri: http://localhost:8888
Here's the application's configuration file (edge-service.yml) stored on the config server. It contains only the HTTP port and the Eureka URL.
server:
  port: 8190
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
And here is the main class of edge-service, which enables the Zuul proxy:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class EdgeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EdgeServiceApplication.class, args);
    }
}
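With @EnableZuulProxy and Eureka in place, Zuul automatically creates a route for every registered service under /{serviceId}/** (for example /site-service/api/sites/1). If you prefer explicit mappings, routes could be declared in edge-service.yml along these lines (a sketch; the /site/** and /user/** paths are assumptions):
zuul:
  routes:
    site-service:
      path: /site/**
      serviceId: site-service
    user-service:
      path: /user/**
      serviceId: user-service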
STEP 6: Correlating logs between different microservices using Spring Cloud Sleuth and Zipkin
Correlating logs between different microservices using Spring Cloud Sleuth is very easy. In fact, the only thing you have to do is add the spring-cloud-starter-sleuth starter to the dependencies of every single microservice and the gateway. Sleuth then adds trace and span IDs to every log line and propagates them across service calls.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
To report traces to Zipkin, add the dependency below to every microservice's pom.xml file:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
Then add the following to the YAML file of each microservice in the config server:
spring:
  zipkin:
    baseUrl: http://localhost:9411/
  sleuth:
    sampler:
      probability: 1
This assumes the Zipkin server is responding on localhost at port 9411. With the sampler probability set to 1, every trace is exported to Zipkin.
STEP 7: Configuring microservices to send logs to Logstash
Sending microservice logs to Logstash requires the following dependencies to be added to every microservice:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.2.3</version>
</dependency>
Next, create a file called logback.xml in the resources folder of every microservice with the following contents (change the appName value to match each service's name, since it is used to build the Elasticsearch index):
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5044</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <mdc/>
                <context/>
                <version/>
                <logLevel/>
                <loggerName/>
                <message/>
                <pattern>
                    <pattern>
                        {
                        "appName": "site-service"
                        }
                    </pattern>
                </pattern>
                <threadName/>
                <stackTrace/>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="logstash"/>
    </root>
    <logger name="org.springframework" level="INFO"/>
    <logger name="com.cinema" level="INFO"/>
</configuration>
The steps outlined above, if followed diligently, will enable you to put distributed tracing in place in your microservices architecture, visualise your logs through Kibana, and search through them using Elasticsearch.