Elasticsearch 8.x: ELK Setup over TLS/SSL
Bhargavi Chiluka
Posted on February 6, 2024
Starting from Elasticsearch 8.x, security is configured out of the box using self-signed certificates that are auto-generated by the installation package itself. Unless there is a requirement for private certs or publicly signed certs from your organisation, using these self-signed certs is the recommended best practice.
Table of Contents:
1. Elastic Search
1.1. Cluster (Master Node) Installations
1.1.1. Import the Elasticsearch GPG Key
1.1.2. Installing from the RPM repository
1.1.3. Elasticsearch installation output with security enabled
1.1.4. Configuration for Master Node
1.1.5. Starting Elasticsearch
1.1.6. Reset the password for required users
1.2. Other (data) Node installation
1.2.1. Installation Process
1.2.2. Configuration and connection establishment with Cluster
1.2.3. Starting the data node and confirming the connection with the cluster
1.2.4. Common Errors in connection establishment between remote systems
2. Kibana
2.1. Import the Elasticsearch GPG Key
2.2. Installing from the RPM repository
2.3. Configuring Kibana with Cluster Node Connection
2.4. Starting Kibana and confirming connection with Cluster
2.5 Kibana SSL Certificates for Browser Traffic
3. Logstash
3.1. Import the Elasticsearch GPG Key
3.2. Installing from the RPM repository
3.3. Configuring Logstash with Cluster Node Connection
4. Filebeat Agent
4.1 Download Filebeat
- For the Kibana web application, a publicly signed certificate can be used for browser/HTTPS traffic, while self-signed certificates can still be used for the connection between Kibana and the Elasticsearch cluster.
- This document focuses on installation and configuration on RedHat-based Linux, which has the RPM package manager available. Installation on Ubuntu/Debian systems may differ slightly; follow the public documentation there. The configuration and connection-establishment steps remain the same.
1. Elastic Search:
1.1. Cluster(Master Node) Installations
1.1.1. Import the Elasticsearch GPG Key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
1.1.2.Installing from the RPM repository
- Create a file called elasticsearch.repo in the /etc/yum.repos.d/ directory for RedHat-based distributions:
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
- Add proxy=example.com to the repo file if you have any proxy setup.
- Install the package:
sudo yum install --enablerepo=elasticsearch elasticsearch
If there is any error in the above installation or trouble in downloading and installing automatically, please refer to this manual installation guide from the docs.
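If scripting the setup, the repo definition above can be written in one step. A minimal sketch, with the target path parameterised so it can be tried in a scratch directory before being written to /etc/yum.repos.d/ as root:

```shell
# Sketch: helper that writes the repo definition shown above to a given
# path; the file body is taken verbatim from this guide.
write_es_repo() {
  cat > "$1" <<'EOF'
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF
}
# usage (as root): write_es_repo /etc/yum.repos.d/elasticsearch.repo
```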
1.1.3. Elasticsearch installation output with security enabled
When you install Elasticsearch, security features are enabled and configured by default. The following security configuration occurs automatically:
Authentication and authorization are enabled, and a password is generated for the elastic built-in superuser.
Certificates and keys for TLS are generated for the transport and HTTP layer, and TLS is enabled and configured with these keys and certificates.
Please save the output generated here, as it includes the generated password and some important commands.
Ex:
-------Security autoconfiguration information-------
Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.
The generated password for the elastic built-in superuser is : <password>
If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.
You can complete the following actions at any time:
Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.
Generate an enrollment token for Kibana instances with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.
Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.
1.1.4. Configuration for Master Node
Enable the following on the master node, which also acts as the cluster's first node:
- Open port 9200 for HTTP communication from other data/master nodes.
- Open port 9300 for transport communication between nodes.
- Update the following parameters in /etc/elasticsearch/elasticsearch.yml:
cluster.name: <name-for-cluster>
node.name: <unique-name-for-node-in-cluster>
http.port: 9200
network.host: <DNS of system>
http.host: <DNS of system>
transport.host: <DNS of system>
cluster.initial_master_nodes: ["<node name given above>"]
By default, http.host and transport.host here are either commented out or set to 0.0.0.0 or 127.0.0.1; change them to the system's DNS name as explained above.
- transport.host is optional, but set it explicitly if any issues arise in connecting to the cluster.
- DO NOT modify any other parameters, including the security/certs settings. All are configured as required by default; any modification may lead to issues.
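As a concrete illustration, a filled-in master-node config might look like the following; the cluster, node, and host names here are hypothetical placeholders, not values from this guide:

```yaml
# Hypothetical example values -- replace with your own cluster/node/DNS names.
cluster.name: my-elk-cluster
node.name: es-master-1
http.port: 9200
network.host: es-master.example.com
http.host: es-master.example.com
transport.host: es-master.example.com
cluster.initial_master_nodes: ["es-master-1"]
```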
1.1.5. Starting Elasticsearch
- Running Elasticsearch as a service is the suggested approach; the RPM installation sets most of this up by default.
- Running the following will tie the service to systemctl:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
- Start / Stop the service
sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service
- Logs / systemctl logs
If the service fails to start, the initial logs for the service can be found with
sudo journalctl -u elasticsearch
- Elasticsearch Package Logs
less /var/log/elasticsearch/elasticsearch.log
Status Check with cURL
Elasticsearch runs with security enabled by default, and plaintext (non-TLS) calls are rejected; requests must be made with the CA certificate.
Do not hit localhost with curl, as the server(s) may be behind a proxy and the result will not come back correctly.
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://<DNS of System>:9200
The above command will prompt for the "elastic" user password. Either enter the password collected from the installation output above, or reset the password using the next section of the document.
Expected Sample Output from the cluster
{
"name" : "Cp8oag6",
"cluster_name" : "<cluster name>",
"cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
"version" : {
"number" : "8.8.2",
"build_type" : "tar",
"build_hash" : "f27399d",
"build_flavor" : "default",
"build_date" : "DATE",
"build_snapshot" : false,
"lucene_version" : "9.6.0",
"minimum_wire_compatibility_version" : "1.2.3",
"minimum_index_compatibility_version" : "1.2.3"
},
"tagline" : "You Know, for Search"
}
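To script the same check, the curl output can be piped into a small parser. A sketch (es_info is a hypothetical helper name, and python3 is assumed to be available):

```shell
# Sketch: parse the curl response and print the cluster name and version,
# failing if the JSON is not what a healthy node returns.
es_info() {
  python3 -c '
import json, sys
d = json.load(sys.stdin)
print(d["cluster_name"], d["version"]["number"])'
}
# usage:
# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
#   https://<DNS of System>:9200 | es_info
```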
1.1.6. Reset the password for required users
Different users are required for different purposes. The main ones are:
- "elastic": root user
- "kibana_system": for the Kibana UI
- "logstash_system": for Logstash metrics etc.
All of these users' passwords can be set or reset with:
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u <username>
- -i means it is an interactive terminal session.
- -u defines the username.
1.2. Other (data) Node installation:
1.2.1. Installation Process
Please follow the steps from "Import the Elasticsearch GPG Key" through "Elasticsearch installation output with security enabled" above.
1.2.2. Configuration and connection establishment with Cluster
This process involves generating a token on the cluster and configuring that token on the current node (the node that has to connect to the cluster).
- Generate the node enrollment token on the Elasticsearch cluster that we set up previously.
- Execute the following in the cluster system terminal.
Note: Make sure the elasticsearch service is running on the cluster while executing this:
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
- Copy the token generated above and run the following on the new node.
Note: Until the script below has been executed, DO NOT start the elasticsearch service on the new node. If it has been started, the script will fail and a fresh install of Elasticsearch on the new node is required.
/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token>
- The above script prompts warnings about deletion of the certs dir and a config override; approve them.
- If the script fails with any error, check "Common Errors in connection establishment between remote systems".
- If the script runs successfully, proceed to configure the following parameters on the new node, in /etc/elasticsearch/elasticsearch.yml:
cluster.name: <cluster name given in the cluster's elasticsearch.yml file>
node.name: <unique-name-for-node-in-cluster>
http.port: 9200
network.host: <DNS of system>
- Confirm the following:
http.host: <this parameter must be uncommented, but do not edit its value>
transport.host: <this parameter must be uncommented, but do not edit its value>
discovery.seed_hosts: <this value must be an array with the cluster DNS/IP, in [""] format>
1.2.3. Starting the data node and confirming the connection with the cluster
- Running Elasticsearch as a service is the suggested approach; the RPM installation sets most of this up by default.
- Running the following will tie the service to systemctl:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
- Start / Stop the service
sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service
- Logs / systemctl logs
If the service fails to start, the initial logs for the service can be found with
sudo journalctl -u elasticsearch
- Elasticsearch Package Logs
less /var/log/elasticsearch/elasticsearch.log
- Node connection status check with the cluster via cURL
- This command must be executed on the cluster.
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://<DNS of Cluster System>:9200/_cluster/health?pretty
- The above command will prompt for the "elastic" user password. Enter the password collected from the installation output above, or reset it as described in "Reset the password for required users".
- Expected sample output from the cluster<>node configuration:
- Check the number_of_nodes param; it should show the count of connected nodes (the cluster's first node also counts as a node).
- If anything is unexpected, check the new node's logs first and the cluster's logs next.
{
"cluster_name" : "cluster-name",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 2, <= Pay attention here; the count must increase.
"number_of_data_nodes" : 1,
"active_primary_shards" : 1,
"active_shards" : 1,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 1,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 50.0
}
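To verify the join from a script rather than by eye, the health JSON can be piped into a small helper. A sketch (node_count is a hypothetical name, python3 assumed available):

```shell
# Sketch: pull number_of_nodes out of the _cluster/health response so a
# join can be verified in a script; the count should grow by one per node.
node_count() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["number_of_nodes"])'
}
# usage:
# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
#   "https://<DNS of Cluster System>:9200/_cluster/health?pretty" | node_count
```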
1.2.4. Common Errors in connection establishment between remote systems
- The cluster's elasticsearch.yml has http.host not set to the DNS name, or it is commented out.
- Ports 9200 & 9300 of the cluster are not open; raise a firewall request.
- The cluster's /etc/elasticsearch/certs folder has been modified, or the security config in the yml has been modified.
- The cluster is not running as a service.
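Before debugging TLS or config issues, it can help to rule out the firewall first. A rough connectivity probe, assuming bash's /dev/tcp and the timeout utility are available (probe_ports is a hypothetical helper):

```shell
# Sketch: probe the cluster's HTTP (9200) and transport (9300) ports from
# the new node before digging into TLS errors.
probe_ports() {
  local host="$1" port
  for port in 9200 9300; do
    if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
      echo "port $port open"
    else
      echo "port $port closed"
    fi
  done
}
# usage: probe_ports <DNS of Cluster System>
```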
2. Kibana
2.1. Import the Elasticsearch GPG Key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
2.2. Installing from the RPM repository
- Create a file called kibana.repo in the /etc/yum.repos.d/ directory for RedHat-based distributions:
[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
- Add proxy=example.com to the repo file if you have any proxy setup.
- Install the package:
sudo yum install kibana
If there is any error in the above installation or trouble in downloading and installing automatically, please refer to this manual installation guide from the docs.
2.3. Configuring Kibana with Cluster Node Connection
As explained above, from 8.x every Elasticsearch installation comes with self-signed certificates and TLS/SSL configured by default. Hence, running the following scripts before starting Kibana will adopt most of the settings automatically.
- Open port 5601 in the FW for the Kibana UI to work.
- < FW Details to be updated.>
- Generate the Kibana enrollment token on the cluster system. Run the following in the cluster terminal:
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
- Copy the above token and execute the following on the Kibana system.
Note: Kibana must not be started before this command; otherwise we may need a fresh install.
/usr/share/kibana/bin/kibana-setup --enrollment-token <token>
The above command will configure and generate the security certs for Kibana<>cluster communication using an access token (not a username/password), and will update the token, the Elasticsearch host, and the certs config in the Kibana yml.
- If any errors are raised by the above command, rectify the issue using "Common Errors in connection establishment between remote systems"; if that doesn't solve it, go for a fresh install of Kibana.
- If the command ran successfully, proceed to configure the following parameters in the new Kibana install,
/etc/kibana/kibana.yml
server.port: 5601
server.name: "kibana-host-name-can-be-anything"
server.publicBaseUrl: "<DNS of kibana system with port, e.g. sample.com:5601>" <= This is optional
server.host: "<kibana system DNS>"
- Observe the following; they must have been auto-set by the above script:
elasticsearch.hosts: <auto-configured by the above script as ["cluster-ip/dns"]>
elasticsearch.serviceAccountToken: <auto-set by the above script with the token>
- If the Kibana web application has to serve over HTTPS and needs TLS certificates, store the .crt and .key files in a directory near Kibana and update:
server.ssl.enabled: true
server.ssl.certificate: /path/to/your/server.crt <= server certificate goes here
server.ssl.key: /path/to/your/server.key <= private key (PEM) goes here
2.4. Starting Kibana and confirming connection with Cluster
- Configure Kibana as a service using:
sudo systemctl daemon-reload
sudo systemctl enable kibana.service
- Start/stop the service using:
sudo systemctl start kibana.service
sudo systemctl stop kibana.service
- Logs / systemctl logs
If Kibana fails to start, the initial logs for the service can be found with
sudo journalctl -u kibana
- Kibana Package Logs
less /var/log/kibana/kibana.log
- If all FW connections are in proper order and Kibana is running without any errors, open https://<kibana-dns>:5601; the Kibana UI should be presented. Otherwise, check the logs for connection issues.
Note: We are establishing the connection between the Elasticsearch cluster and Kibana with a service token (done in "Configuring Kibana with Cluster Node Connection"). Hence, avoid adding a username and password in kibana.yml again, as it is not required.
2.5 Kibana SSL Certificates for Browser Traffic
Collect/purchase the SSL certificates for HTTPS browser traffic and update the following parameters in kibana.yml:
server.ssl.enabled: true
server.ssl.key: /etc/kibana/public_certs_kibana/<cert-name>.key # private key of the system from the SSL purchase
server.ssl.certificate: /etc/kibana/public_certs_kibana/<cert-name>.pem # server certificate (with chain)
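Before restarting Kibana, it is worth confirming that the certificate and key files actually pair up. A sketch using openssl, assuming an RSA key (cert_key_match is a hypothetical helper):

```shell
# Sketch: verify that the certificate and private key referenced in
# kibana.yml belong together by comparing their RSA moduli.
cert_key_match() {
  local crt_md key_md
  crt_md=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
  key_md=$(openssl rsa  -noout -modulus -in "$2" | openssl md5)
  if [ "$crt_md" = "$key_md" ]; then echo "match"; else echo "MISMATCH"; fi
}
# usage: cert_key_match /etc/kibana/public_certs_kibana/<cert-name>.pem \
#                       /etc/kibana/public_certs_kibana/<cert-name>.key
```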
3. Logstash
3.1. Import the Elasticsearch GPG Key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
3.2. Installing from the RPM repository
- Create a file called logstash.repo in the /etc/yum.repos.d/ directory for RedHat-based distributions:
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
- Add proxy=example.com to the repo file if you have any proxy setup.
- Install the package:
sudo yum install logstash
If there is any error in the above installation or trouble in downloading and installing automatically, please refer to this manual installation guide from the docs.
Note: If the RPM auto-installation steps above don't work, try downloading the required version from the website manually:
curl -O https://artifacts.elastic.co/downloads/logstash/logstash-8.8.2-x86_64.rpm
sudo rpm -i logstash-8.8.2-x86_64.rpm
3.3. Configuring Logstash with Cluster Node Connection
For a TLS/SSL secure connection between Logstash and the cluster, we need to copy the certs folder present in the cluster's /etc/elasticsearch to the Logstash installation location.
- Configure the following parameters in /etc/logstash/logstash.yml:
node.name: <logstash-node-name>
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/*.conf
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: <password-set-in-cluster-for-that-user>
xpack.monitoring.elasticsearch.hosts: ["https://<cluster DNS>:9200"]
xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/etc/logstash/certs_cluster/http_ca.crt" ]
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true
- Write one connection pipeline for Logstash in the conf.d dir, with the file name logstash.conf:
input {
  beats {
    client_inactivity_timeout => 1200
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    cacert => '/etc/logstash/certs_cluster/http_ca.crt'
    hosts => ["https://0.0.0.0:9200", "https://1.1.1.1:9200", "https://2.2.2.2:9200"]
    user => 'user_name'
    password => 'password'
    ilm_enabled => true
    ilm_rollover_alias => "logstash"
  }
}
Note: The cacert in the above logstash.conf and the certificate_authority value in logstash.yml have to be the same.
- For security reasons, it is recommended to add the CA cert to the Logstash JVM keystore so that the self-signed certs are accepted. Caution: otherwise the system cannot connect to the cluster at all, no matter the effort.
- Here we can use the CA cert file we copied from the Elasticsearch cluster,
- or we can generate one directly from the SSL connection.
- Generating it on the fly and passing the key to the keystore looks like this:
- Go to the JDK bin location, /usr/share/logstash/jdk/bin, as we need keytool.
echo -n | openssl s_client -connect <elastic-search-cluster>:9200 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > ./ca_logstash.cer
- The above will generate a ca_logstash.cer file containing just the certificate between the BEGIN/END markers.
- Now run the following to import that certificate into the Java keystore.
Note: The default password for the keystore is changeit, unless it was modified after installation.
<keytool-path-as-per-system>/keytool -import -alias saelk -file ca_logstash.cer -keystore /usr/share/logstash/jdk/lib/security/cacerts
- Testing the configuration
- Before running the service/process, it is important to confirm that logstash.yml is configured properly, hence use:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
- This should result in "Configuration OK"; otherwise fix the config until you get it:
Configuration OK
[INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
- Then test the *.conf file with Logstash run as a process, so that the server/service is not impacted.
Note: conf.d/ should be owned by the logstash user; fix any other path-ownership issues based on the errors/logs.
sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/<file-name>.conf
The above will start Logstash as a process; if you see no ERROR logs about connection issues, it is a success.
Start the Logstash service and wait for a while.
You can confirm the connection in the Kibana UI:
Left Menu => Stack Monitoring => Nodes (click "set up basic config" if you don't see any)
You will then see all ELK nodes, Kibana nodes, and Logstash nodes, if connected.
4. Filebeat Agent
To enable monitoring for any server, install Filebeat on the respective server.
4.1 Download Filebeat
- Lightweight Log Analysis | Elastic
- Extract the Filebeat archive.
- Create a user and group for Filebeat using the below commands.
- As the root user, do the following:
sudo groupadd filebeat
sudo useradd -m -g filebeat filebeat
sudo usermod -a -G filebeat filebeat
- Change the owner of the extracted folder from root to the filebeat user
chown -R filebeat:filebeat filebeat-7.8.1-linux-x86_64
- Add the following lines in the filebeat.yml file:
filebeat.inputs:
- type: log
  enabled: true
  encoding: iso8859-1
  paths:
    - /var/log/icinga2/icinga2.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  tags: ["icinga-log"]
output.logstash:
  hosts: ["dns_name:5044"]
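The multiline.pattern above can be sanity-checked against a sample line before deploying; grep -E approximates Filebeat's Go regex closely enough for this simple pattern (the sample log line is made up):

```shell
# Sketch: confirm the multiline.pattern matches the first line of a
# timestamped icinga2 entry; continuation lines should NOT match.
pat='^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
line='2024-02-06 10:15:30 information/ApiListener: New client connection'
if echo "$line" | grep -Eq "$pat"; then echo "pattern matches"; fi
```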
4.2 Start Filebeat
/etc/filebeat-7.8.1-linux-x86_64/filebeat &
- To send the logs from Filebeat to the Logstash server, raise a firewall request.
Note: If the systems are in the same network, there is no need for a firewall request; adding a firewall entry in the IP tables is enough.
- Destination: IP address
- Port: 5044 # default port for Logstash
- Create a pipeline configuration in Logstash:
- Go to the path /etc/logstash/conf.d on the Logstash server.
- Create a file using the command vi filename.conf and add the below content, adjusted to your system's applications:
filter {
  if [log][file][path] =~ "/path/to/log/filename.log" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:date} %{LOGLEVEL} \(%{DATA:worker_thread}\)\[%{SPACE}%{JAVACLASS:class}:%{GREEDYDATA:message_body}" }
    }
  }
}
output {
  if "grok-pattern" in [tags] {
    elasticsearch {
      cacert => '/etc/logstash/elasticsearch-ca.crt'
      hosts => ["https://0.0.0.0:9200", "https://1.1.1.1:9200", "https://2.2.2.2:9200"]
      user => 'user_name'
      password => 'password'
      ilm_enabled => true
      ilm_pattern => "{now/d}-000001"
      ilm_rollover_alias => "grok-pattern"
    }
  }
}
- Make sure the grok pattern matches the log pattern on your server, and that the tags are the same in both the filebeat.yml and filename.conf files.
- Now you can see the logs for your server in Kibana.
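Before relying on the grok filter, it can help to pre-check that your log lines start with an ISO8601 timestamp and a log level. A rough approximation with grep -E (grok's TIMESTAMP_ISO8601 and LOGLEVEL patterns are broader, and the sample line is hypothetical):

```shell
# Sketch: pre-check that a log line begins with an ISO8601 timestamp
# followed by a log level, roughly mirroring the grok pattern above.
pre='^[0-9]{4}-[0-9]{2}-[0-9]{2}[T ][0-9]{2}:[0-9]{2}:[0-9]{2}[^ ]* +(INFO|WARN|ERROR|DEBUG|TRACE)'
line='2024-02-06 10:15:30,123 INFO (worker-1)[ com.example.App: started OK'
if echo "$line" | grep -Eq "$pre"; then echo "log line looks grokkable"; fi
```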