How to implement logging in your REST service by using Elasticsearch: PART 2.B

Peter Shekindo

Posted on August 8, 2022

This is the last section of this article series which explains how to implement logging in REST services using Elasticsearch.

Before continuing with this section, I advise you to go through the previous sections first if you have not already. Just click the links below:

  1. How to implement logging in your REST service by using Elasticsearch - Part 1
  2. How to implement logging in your REST service by using Elasticsearch - Part 2

In this last section ("I promise it is the last 😉"), we will finish up strong 💪 with the final three steps.

Step-4 Install and configure Kibana.

Step-5 Install and configure Logstash

Step-6 Exploring logs in Kibana dashboard

Step-4 Install and configure Kibana.

According to the official documentation, you should install Kibana only after installing Elasticsearch. Installing in this order ensures that the components each product depends on are already in place.

Because you’ve already added the Elastic package source in the previous step, you can just install the remaining components of the Elastic Stack using apt:

$ sudo apt install kibana

Then run the command below to enable and start the Kibana service:

$ sudo systemctl enable kibana
$ sudo systemctl start kibana
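
Optionally, you can confirm that the service came up before moving on. This is just a quick sanity check; the curl request assumes Kibana is listening on its default port 5601 and should return an HTTP response (often a redirect) rather than a connection error:

$ sudo systemctl status kibana
$ curl -I http://localhost:5601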

Because Kibana is configured to only listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose, which should already be installed on your server. You may use any web server of your choice.

In this article, we will focus on setting up Nginx as a reverse proxy only. For more on how to configure a reverse proxy, load balancing, buffering, and caching with Nginx, click the following link.

Nginx HTTP proxying.

First, use the openssl command to create an administrative Kibana user which you’ll use to access the Kibana web portal. As an example, we will name this account kibanaadmin, but for greater security we recommend choosing a non-standard username that would be difficult to guess.

The following command will create the administrative Kibana user and password, and store them in the htpasswd.users file. You will configure Nginx to require this username and password and read this file momentarily:

$ echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Enter and confirm a password at the prompt. Remember or take note of this login, as you will need it to access the Kibana web portal.
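
If you want to confirm that the entry was written, you can inspect the file. The line shown below only illustrates the format produced by the -apr1 option; your hash will differ and is truncated here:

$ sudo cat /etc/nginx/htpasswd.users
kibanaadmin:$apr1$...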

Next, we will create an Nginx server block file. As an example, we will refer to this file as example.com, although you may find it helpful to give yours a more descriptive name. For instance, if you have an FQDN and DNS records set up for this server, you could name this file after your FQDN:

$ sudo nano /etc/nginx/sites-available/example.com

Add the following code block into the file, being sure to update example.com to match your server’s FQDN or public IP address. This code configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. Additionally, it configures Nginx to read the htpasswd.users file and require basic authentication.

Delete everything in the file and paste in the content below. Note: only do this if this is a fresh configuration file; if you have already configured it, update it to match the content below instead.

server {
    listen 80;
    server_name example.com;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

When you’re finished, save and close the file.

Next, enable the new configuration by creating a symbolic link to the sites-enabled directory. If you already created a server block file with the same name in the Nginx prerequisite, you do not need to run this command:

$ sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com

Then check the configuration for syntax errors:

$ sudo nginx -t

If any errors are reported in your output, go back and double-check that the content you placed in your configuration file was added correctly. Once you see syntax is ok in the output, go ahead and restart the Nginx service:

$ sudo systemctl restart nginx

Also, you should have a UFW firewall enabled. To allow connections to Nginx, we can adjust the rules by typing:

$ sudo ufw allow 'Nginx Full' 

This will allow both HTTP and HTTPS traffic through the firewall. You may use 'Nginx HTTP' for HTTP only or 'Nginx HTTPS' for HTTPS only.
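
To confirm the rule is active, you can list the current firewall rules. The exact output depends on what else you have allowed; the lines below are just the expected entries for the Nginx profile:

$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
Nginx Full                 ALLOW       Anywhere
Nginx Full (v6)            ALLOW       Anywhere (v6)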

Kibana is now accessible via your FQDN or the public IP address of your Elastic Stack server. You can check the Kibana server’s status page by navigating to the following address and entering your login credentials when prompted:

http://your_server_ip/status

This status page displays information about the server’s resource usage and lists the installed plugins.

[Screenshot: Kibana server status page]

Now that the Kibana dashboard is configured, let’s install the next component: Logstash.

Step-5 Install and configure Logstash

Logstash is used to process our saved log files. It collects data from different sources, transforms it into a common format, and exports it to a destination such as Elasticsearch.

Install Logstash with this command:

$ sudo apt install logstash
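
If you want to confirm the installation before configuring anything, you can print the Logstash version. The path below is the default install location for the apt package, and the command may take several seconds because it starts a JVM:

$ /usr/share/logstash/bin/logstash --version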

Create a configuration file called input.conf, where you will set up your log source input:

$ sudo nano /etc/logstash/conf.d/input.conf

Insert the following input configuration. It points Logstash at the source file that holds the logs generated in the system: path specifies the location of your log file, start_position => "beginning" tells Logstash to read the file from the beginning, and sincedb_path => "/dev/null" stops Logstash from remembering its read position between runs, so the file is re-read each time.

input {
  file {
    path => "specify the path to where your logfile is located"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

Save and close the file.
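
As an illustration only, here is what input.conf might look like for a REST service that writes its logs to /var/log/myapp/rest-service.log; this path is a hypothetical example, so substitute the actual location of your log file:

input {
  file {
    # hypothetical example path - replace with your own log file
    path => "/var/log/myapp/rest-service.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}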

Next, create a configuration file called 30-elasticsearch-output.conf:

$ sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

Insert the following output configuration. Essentially, it configures Logstash to store the logged data in Elasticsearch, which is running at localhost:9200. Notice the index => "filebeat" setting: it names the Elasticsearch index the logs are written to, and we will use it later to create the matching index pattern in the Kibana dashboard.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat"
  }
}

Save and close the file.

Test your Logstash configuration with this command:

$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If there are no syntax errors, your output will display Config Validation Result: OK. Exiting Logstash after a few seconds. If you don’t see this in your output, check for any errors noted in your output and update your configuration to correct them. Note that you will receive warnings from OpenJDK, but they should not cause any problems and can be ignored.

If your configuration test is successful, start and enable Logstash to put the configuration changes into effect:

$ sudo systemctl start logstash
$ sudo systemctl enable logstash

Now that Logstash is running correctly and is fully configured, let’s start reviewing logs in the Kibana dashboard.

Step-6 Exploring logs in Kibana dashboard

Let’s return to the Kibana web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your Elastic Stack server. If your session has been interrupted, you will need to re-enter the credentials you defined in the Kibana configuration steps. Once you have logged in, you should see the Kibana homepage:

[Screenshot: Kibana homepage]

Click the Discover link in the left-hand navigation bar (you may have to click the Expand icon at the very bottom left to see the navigation menu items). On the Discover page, select the predefined filebeat-* index pattern to see logged data. By default, this will show you all of the log data over the last 15 minutes. You will see a histogram with log events, and some log messages below:

[Screenshot: Kibana Discover page showing data for the filebeat-* index pattern]
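
If no data appears in Discover, a quick way to check whether Logstash is actually writing to Elasticsearch is to list the indices directly on the Elastic Stack server; you should see an entry whose name starts with filebeat:

$ curl -XGET 'http://localhost:9200/_cat/indices?v'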

In this tutorial, you’ve learned how to install and configure the Elastic Stack to collect and analyze system logs. Remember that you can send just about any type of log or indexed data to Logstash, but the data becomes even more useful if it is parsed and structured with a Logstash filter, as this transforms the data into a consistent format that can be read easily by Elasticsearch.
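
As a rough sketch of such a filter, the block below parses lines written in the combined access-log format used by Nginx and Apache. The COMBINEDAPACHELOG pattern ships with Logstash's grok plugin; you would adapt the match to whatever format your REST service actually logs in, and place the file under /etc/logstash/conf.d/ alongside the input and output files created above:

filter {
  grok {
    # parse combined access-log lines into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # use the timestamp from the log line as the event time
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}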

To learn more about the ELK stack and other concepts, you may visit the DigitalOcean community.

This article has been prepared on behalf of ClickPesa. For more articles like this, check out ClickPesa on Hashnode, ClickPesa on dev.to, and ClickPesa on Medium.
