Management GUI for Elasticsearch
Ash Wu
Posted on December 18, 2018
Preface
Elasticsearch is developer-friendly, requiring minimal configuration and manual management. But sometimes we still want to know more about our cluster.
You can fetch this information from the Elasticsearch APIs, so if you're an Elasticsearch API ninja you can skip this article.
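For example, cluster health, node load/heap/disk, and per-index stats are just a few curl calls away (against the port-forwarded endpoint described below):
$ curl 'localhost:9200/_cluster/health?pretty'
$ curl 'localhost:9200/_cat/nodes?v&h=name,heap.percent,disk.avail,load_1m'
$ curl 'localhost:9200/_cat/indices?v'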
Here are some tools I prefer that help me quickly get an overview of the cluster.
Prerequisite
Usually our cluster is located in a private VPC, so we have to port-forward it to our local machine.
What I usually do is forward the Kubernetes API to my local machine and use kubectl port-forward to handle the rest.
# I modified the kubeconfig to use port 16443 for remote environments
$ ssh -L 16443:127.0.0.1:6443 ssh-jumper
# port-forward elasticsearch to localhost
$ kubectl -n logging port-forward elasticsearch-data-0 9200:9200
# check if it's ready
$ curl localhost:9200
ElasticHQ
ElasticHQ is an open source application that offers a simplified interface for managing and monitoring Elasticsearch clusters.
$ git clone https://github.com/ElasticHQ/elasticsearch-HQ.git
$ cd elasticsearch-HQ
# Python 3 is required.
$ sudo pip3 install -r requirements.txt
$ python3 application.py
# Access HQ with: http://localhost:5000
# If you're using docker version, access ES via host.docker.internal:9200
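If you'd rather not install the Python dependencies locally, a minimal Docker sketch (assuming the elastichq/elasticsearch-hq image on Docker Hub):
$ docker run -d -p 5000:5000 elastichq/elasticsearch-hq
# Then open http://localhost:5000 and connect to http://host.docker.internal:9200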
On this page we can check the cluster load, free disk space, and heap usage, as well as each index's document count and total storage size.
Another goodie is the Diagnostics tab. HQ will highlight metrics with potential risk and give you some advice. This can be a reference when you're troubleshooting or doing performance tuning on your Elasticsearch cluster.
Cerebro
https://github.com/lmenezes/cerebro
# Java 1.8 or newer is required. brew cask install java
# Download the latest tarball from https://github.com/lmenezes/cerebro/releases/latest
$ wget https://github.com/lmenezes/cerebro/releases/download/v0.8.1/cerebro-0.8.1.tgz
$ tar zxvf cerebro-0.8.1.tgz
$ cd cerebro-0.8.1
$ ./bin/cerebro
# Open cerebro with http://localhost:9000
# If you're using docker version, access ES via host.docker.internal:9200
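Similarly, if you prefer Docker over a local Java install, a minimal sketch (assuming the lmenezes/cerebro image on Docker Hub):
$ docker run -d -p 9000:9000 lmenezes/cerebro
# Then open http://localhost:9000 and connect to http://host.docker.internal:9200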
I usually use Cerebro to observe the index shard allocation. It's clear and intuitive.
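The same shard allocation data is also available from the _cat API if you ever need it in a terminal:
$ curl 'localhost:9200/_cat/shards?v'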
And the cluster settings, aliases, and index templates under the "more" menu come in handy when you need them.
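Their raw API counterparts, for reference:
$ curl 'localhost:9200/_cluster/settings?pretty'
$ curl 'localhost:9200/_cat/aliases?v'
$ curl 'localhost:9200/_cat/templates?v'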