How To Install Elasticsearch on AlmaLinux

By Chandrashekhar Fakirpure

Updated on Feb 06, 2024

In this article, we'll explain how to install Elasticsearch on AlmaLinux 9 and perform basic CRUD operations using its RESTful API.

Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. Elasticsearch lets you perform and combine many types of searches — structured, unstructured, geo, metric — any way you want.

Install Elasticsearch on AlmaLinux 9

Prerequisites:

  • A KVM VPS or dedicated server with AlmaLinux 9 installed (you can confirm the release as shown below).
  • Root access, or a normal user with sudo privileges.
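
Before you start, you can quickly confirm which release the server is running:

cat /etc/os-release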

1. Keep the server updated:

dnf update -y

2. Install Elasticsearch

Elasticsearch is not available in the default package repositories, so we need to add the Elasticsearch repository and install it from there. Before adding the repository, we first need to import the Elasticsearch GPG key.

Download and install the public signing key using the following command:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch


Error:

If you get an error like "warning: Signature not supported. Hash algorithm SHA1 not available", it is because Red Hat Enterprise Linux 9 (and therefore AlmaLinux 9) has deprecated SHA-1 for package signing for security reasons, even though many packages are still signed with it.

To explicitly allow SHA-1, run the following command:

update-crypto-policies --set DEFAULT:SHA1


Don't forget to switch back to the default policy once the installation is complete:

update-crypto-policies --set DEFAULT
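
You can verify which system-wide crypto policy is currently active at any time:

update-crypto-policies --show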


For more information, see the official Red Hat documentation.
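
To confirm that the Elasticsearch signing key was imported successfully, you can list the GPG public keys known to RPM; an entry for Elasticsearch should appear in the output (the exact version and date strings may differ):

rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE} %{SUMMARY}\n'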

Next, create a file called elasticsearch.repo in the /etc/yum.repos.d/ directory. 

sudo vi /etc/yum.repos.d/elasticsearch.repo


Add the following lines:

[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md


Save and exit.
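
Alternatively, the same repository file can be created non-interactively with a heredoc:

sudo tee /etc/yum.repos.d/elasticsearch.repo > /dev/null << 'EOF'
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF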

Your repository is now ready for use. Install Elasticsearch with the following command:

sudo dnf install --enablerepo=elasticsearch elasticsearch -y


The Elasticsearch installation output includes a security autoconfiguration section, where you can find the auto-generated password for the elastic built-in superuser.

--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : Joc+V2*rX-4iG+11Ce4Z

---


Make a copy of the password for future reference.
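
If you misplace this password later, it can be regenerated with the reset tool that ships with Elasticsearch 8 (run it only after the service has been started, as described below):

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic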

3. Configure Elasticsearch

Elasticsearch has three configuration files:

  • elasticsearch.yml for configuring Elasticsearch
  • jvm.options for configuring Elasticsearch JVM settings (see the heap-size example after this list)
  • log4j2.properties for configuring Elasticsearch logging
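
JVM settings are usually not changed by editing jvm.options directly; overrides go into a drop-in file under /etc/elasticsearch/jvm.options.d/ instead. As a minimal sketch (the file name heap.options and the 4 GB value are only illustrative; size the heap to roughly half of the available RAM), a fixed heap size could be set like this:

# heap.options is just an example file name; Elasticsearch reads any *.options file in this directory
sudo tee /etc/elasticsearch/jvm.options.d/heap.options > /dev/null << 'EOF'
-Xms4g
-Xmx4g
EOF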

For this demonstration we are configuring the elasticsearch.yml file. It provides configuration options for your cluster, node, paths, memory, network, discovery, and gateway. These options come with sensible defaults, but you can modify them to match your own requirements.

Here we are modifying a single option, network.host, which is suitable for a single-server setup. Edit the file using the following command:

sudo vi /etc/elasticsearch/elasticsearch.yml


Find network.host, uncomment it (remove the leading "#"), and set its value to localhost, as shown below.

# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: localhost

---


Save and exit.

Start and enable the Elasticsearch service using the following command:

sudo systemctl start elasticsearch && sudo systemctl enable elasticsearch
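
Before moving on, you can confirm that the service started cleanly:

sudo systemctl status elasticsearch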


4. Test Elasticsearch

We can test that our Elasticsearch node is running by sending an HTTPS request to port 9200 on localhost:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200

Ensure that you use https in your call, or the request will fail.

--cacert - Path to the generated http_ca.crt certificate for the HTTP layer.

Enter the password for the elastic user that was generated during installation, which should return a response like this:

{
  "name" : "elasticsearch.hostnextra.com",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "6OL9JsTaRRWtTxK8B4_GcA",
  "version" : {
    "number" : "8.7.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "f229ed3f893a515d590d0f39b05f68913e2d9b53",
    "build_date" : "2023-04-27T04:33:42.127815583Z",
    "build_snapshot" : false,
    "lucene_version" : "9.5.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
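
Beyond this banner response, the cluster health endpoint is a quick way to check the state of the node; a fresh single-node installation typically reports a green or yellow status:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://localhost:9200/_cluster/health?pretty"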

5. Use Elasticsearch

Let's add our first document. Elasticsearch supports CRUD operations through its RESTful API. Use the following command to add the data:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X PUT "https://localhost:9200/node/_doc/my-index-000001?pretty" -H 'Content-Type: application/json' -d '{"counter" : 1, "tags" : ["Hello World"]}'


Here we use curl to send an HTTP PUT request to the Elasticsearch server. In the URI you can see node/_doc/my-index-000001?pretty.

  • node: the name of the index the document is stored in; it is created automatically if it does not already exist.
  • _doc: the document endpoint (mapping types were removed in Elasticsearch 8, so _doc is used in place of a type name).
  • my-index-000001: the ID of the document.
  • pretty: appending pretty to a request makes the returned JSON pretty-printed (use it for debugging only!). Another option is ?format=yaml, which returns the result in the (sometimes) more readable YAML format.

Enter the password for the elastic user that was generated during installation, which should return a response like this:

{
  "_index" : "node",
  "_id" : "my-index-000001",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 1
}


Next, we can retrieve the data using an HTTP GET request. Execute the following command:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X GET "https://localhost:9200/node/_doc/my-index-000001?pretty" -H 'Content-Type: application/json'


Enter the password for the elastic user that was generated during installation, which should return a response like this:

{
  "_index" : "node",
  "_id" : "my-index-000001",
  "_version" : 1,
  "_seq_no" : 1,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "counter" : 1,
    "tags" : [
      "Hello World"
    ]
  }
}
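
Besides fetching a single document by its ID, you can also search the index. As a simple illustrative example, a query-string search on the tags field looks like this:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X GET "https://localhost:9200/node/_search?q=tags:Hello&pretty"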


Now, update the data using an HTTP PUT request. Execute the following command:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X PUT "https://localhost:9200/node/_doc/my-index-000001?pretty" -H 'Content-Type: application/json' -d '{"counter" : 1, "tags" : ["Hello HostnExtra"]}'


Enter the password for the elastic user that was generated during installation, which should return a response like this:

{
  "_index" : "node",
  "_id" : "my-index-000001",
  "_version" : 2,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 2,
  "_primary_term" : 1
}


Once we update the record, Elasticsearch automatically increments the _version as well.
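
These sequence numbers are what Elasticsearch uses for optimistic concurrency control. As an optional illustration, an update can be made conditional on the _seq_no and _primary_term returned by the previous response, so it fails if the document was changed in the meantime (note that running it will increment _version and _seq_no, so the delete response below would show values one higher than printed here):

# if_seq_no and if_primary_term are taken from the previous update response
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X PUT "https://localhost:9200/node/_doc/my-index-000001?if_seq_no=2&if_primary_term=1&pretty" -H 'Content-Type: application/json' -d '{"counter" : 2, "tags" : ["Hello HostnExtra"]}'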

Finally, let's delete the data using an HTTP DELETE request. Execute the following command:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X DELETE "https://localhost:9200/node/_doc/my-index-000001?pretty" -H 'Content-Type: application/json'


Enter the password for the elastic user that was generated during installation, which should return a response like this:

{
  "_index" : "node",
  "_id" : "my-index-000001",
  "_version" : 3,
  "result" : "deleted",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 3,
  "_primary_term" : 1
}


The result shows deleted.
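
You can double-check that the document is gone by repeating the GET request and inspecting only the HTTP status code, which should now be 404:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -s -o /dev/null -w '%{http_code}\n' "https://localhost:9200/node/_doc/my-index-000001"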

That's it. We have seen how to install Elasticsearch on AlmaLinux 9 and how to run basic CRUD operations: we created, read, updated, and deleted data.