How to Set Up a Load-Balanced Elasticsearch Cluster on AWS with Auto-Scaling and Zone-Awareness

This guide will show you how to set up a three node Elasticsearch cluster on AWS.

The cluster will be:

  1. Load-balanced
  2. Able to auto scale (in case you need more nodes, or you need to recover a failed instance)
  3. Deployed in multiple Availability Zones (AZ) with zone-awareness. Note that Multi-Zone (AWS site) and Zone-Awareness (Elasticsearch concept) are two different things, although related in this case

For simplicity, all nodes are going to be both master nodes (master eligible) and data nodes.

Unlike the Elasticsearch Service offered by AWS, our cluster will reside in the VPC.

In other words, Amazon’s own offering is not integrated with the VPC: you can’t do simple things like create a security group for your cluster, put the cluster in a subnet of your choice, or directly access EBS snapshots. Compare this to a service that is integrated with the VPC, such as RDS (a database service).

Why Not AWS Elasticsearch Service?

Why should you even bother setting up your own cluster? More flexibility (i.e., you leave yourself with a wider range of options). By setting up your own cluster, you will have the option to do the following:

  • Choose newer EC2 instance types. For example, an M4 instead of an M3; newer generations are usually less expensive.
  • Choose the type of EBS volume. At the time of writing, AWS’s service limits you to instance storage, General Purpose SSD, or Magnetic. This is fine, but you may need some of the higher-class volumes.
  • Take EBS snapshots whenever you like, and access them directly. With AWS’s service, you can only choose when to take them (at most once a day), and if you want to restore a snapshot, you have to contact support!
  • Install and configure (only) the plugins you need. AWS’s service comes with a few plugins by default, but imposes a strong limitation on which plugins you can install. In particular, you can’t install any! If your life depends on a particular plugin (for example, we REALLY needed the Polish analysis-stempel plugin), this fact alone may determine your choice.
  • Install a newer version of Elasticsearch. AWS’s service limits you to 1.5 or 2.3. We use 2.4.2, but what if you wanted to deploy 5.0?
  • Place your cluster in an Auto Scaling group. AWS’s service does not do auto scaling.
  • Place your cluster inside the VPC. This also means finer-grained control over the security aspects of your cluster. AWS only lets you choose between: public (ehmm), a range of (public) IPs, or an IAM role (requires request signing).
  • Use the Elasticsearch native client. With AWS, you are limited to the HTTP protocol; this means you either have to write your own client or use a community-provided one. Neither option may look appealing, especially if you are migrating a cluster that already relies on the native binary protocol.

Despite its downsides, the AWS service has one notable feature that is not easy to implement on your own: whenever you change the cluster configuration (i.e., number of nodes, instance types, EBS volumes), the service replicates the cluster, applies the change, and then deletes the extra instances. In my experience, this behavior comes with more limitations than benefits, but in the abstract it’s a cool feature to have.

Before we start – AMI Preparation

We are going to create an AMI that has been configured to run Elasticsearch. This is a necessary step to ensure the ability to auto scale.

The AMI must primarily do two things:

  1. Auto-configure Elasticsearch (pre-installed).
  2. Mount an optimized EBS volume to hold Elasticsearch data (note that if you can afford to do everything on local instance storage, performance will improve).

This guide assumes you have Java 7 or 8 and the AWS CLI (AWS command line tools) installed on your machine, and you are following this guide on a Debian-like distro.

AMI – Install Elasticsearch

Download the Elasticsearch 2.4.2 Debian package (elasticsearch-2.4.2.deb) from the Elastic downloads page and install it as a service:

sudo dpkg -i elasticsearch-2.4.2.deb

Enable Elasticsearch as a service:

sudo update-rc.d elasticsearch defaults 95 10
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

Increase the number of files Elasticsearch can open to avoid “too many open files” errors, and make sure Elasticsearch gets all the memory it needs.

Add the following lines to /etc/security/limits.conf:

elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Modify /usr/lib/systemd/system/elasticsearch.service; find the line configuring LimitMEMLOCK (if it’s not there, add it) and change it to the following, uncommenting if necessary:

LimitMEMLOCK=infinity
Please note that memory lock will likely NOT work without the settings above.

Enable memory lock in Elasticsearch. Modify /etc/elasticsearch/elasticsearch.yml and add (or uncomment) the bootstrap.memory_lock property and set it to true:

bootstrap.memory_lock: true

Now it’s time to verify the settings. Reload the daemon and restart Elasticsearch:

sudo systemctl daemon-reload
sudo service elasticsearch restart

Now, if you call the _nodes/process API, you should get a JSON result with the “mlockall” field set to true.

If this is not the case, you have a problem. Make sure you didn’t skip any of the previous steps.

curl "localhost:9200/_nodes/process?pretty"

You should see something like this (note "mlockall": true):

{
  "cluster_name": "mon101",
  "nodes": {
    "FK5raQDlSOCUoMNqM9YmOw": {
      "name": "Yukon Jack",
      "transport_address": "",
      "host": "",
      "ip": "",
      "version": "2.4.2",
      "build": "161c65a",
      "http_address": "",
      "attributes": {
        "zone": "us-east-1d"
      },
      "process": {
        "refresh_interval_in_millis": 1000,
        "id": 777,
        "mlockall": true
      }
    }
    // other nodes here
  }
}
Install the Cloud-AWS plugin, which is required for node discovery to work. EC2 instances will not be able to “see” each other if you skip this step. In the Elasticsearch home directory, run:

sudo /usr/share/elasticsearch/bin/plugin install cloud-aws

Prepare an EBS Volume to hold Elasticsearch data

Attach an empty EBS volume (consider at least a General Purpose SSD) and format it.

Suppose said volume is at /dev/xvdb:

sudo mkfs -t ext4 /dev/xvdb

Create a mount point (/mnt/elasticsearch, for example) and mount the formatted volume:

sudo mount -t ext4 /dev/xvdb /mnt/elasticsearch

Add an entry to /etc/fstab:

/dev/xvdb /mnt/elasticsearch ext4 defaults,nofail 0 2

Make sure the user “elasticsearch” can read and write to the volume. Otherwise Elasticsearch will fail to start.

sudo chown -R elasticsearch:elasticsearch /mnt/elasticsearch

Tell Elasticsearch to write data to your volume. Set the path.data property in the elasticsearch.yml file:

# path to your volume; in this example, suppose you have mounted your EBS volume at /mnt/elasticsearch
# and have created a "data" directory where all your indices will be stored
path.data: /mnt/elasticsearch/data

Verify that Elasticsearch is still able to start after this change. Elasticsearch logs are found at /var/log/elasticsearch, by default.

Now that the basics are in place, we are ready to configure some other important properties in elasticsearch.yml.

Refining Elasticsearch Configuration

Create a copy of elasticsearch.yml to use as a template; it will be dynamically populated and will replace the original configuration at boot time.

Note: In the following steps, all names enclosed in % symbols are placeholders for values that will be replaced at boot time.

Give your cluster a name (if you chose not to create an AMI, remember this must be the same on ALL your nodes):

cluster.name: LinuxAcademy

Configure zone-awareness. This influences the way Elasticsearch distributes data across your nodes and will give you better availability.

# the value "zone" is arbitrary, but it makes sense to call it zone
cluster.routing.allocation.awareness.attributes: zone

# this will be replaced at boot time
cluster.routing.allocation.awareness.force.zone.values: %AVAILABLE_ZONES%

# this is one of the values from %AVAILABLE_ZONES%
node.zone: %ATTR_ZONE%

Configure the network interfaces on which Elasticsearch will listen:

# _local_ is important so that you can ssh to the node and curl localhost
# _site_ will correspond to the node ip; this is better than using _eth0_ (for example),
# as the name of network interfaces can change between EC2 instance types
network.host: [_site_, _local_]

Give Elasticsearch an initial list of master-eligible nodes. We set up a cluster of three nodes in this example, so we also require that at least two nodes are healthy. To prevent split-brain, the value of minimum_master_nodes should always be set to a majority of the master-eligible nodes:

discovery.zen.ping.unicast.hosts: [%MASTER_NODES%]
discovery.zen.minimum_master_nodes: 2
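Rather than hard-coding the quorum, you can compute it in the boot script; a minimal sketch (the variable names are my own):

```shell
# number of master-eligible nodes in the group (three in this guide)
master_count=3
# quorum that prevents split-brain: floor(N / 2) + 1
min_master=$(( master_count / 2 + 1 ))
echo "discovery.zen.minimum_master_nodes: ${min_master}"
```

With three nodes this prints a quorum of 2, matching the setting above.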

Remember to include in your template all the settings discussed above:

bootstrap.memory_lock: true
path.data: /mnt/elasticsearch/data
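Putting it all together, one way to produce the template is to write it from your provisioning script with a heredoc. This is a sketch: the file path is illustrative, and the exact Elasticsearch 2.x property names for forced awareness and unicast discovery are my assumptions based on the settings discussed above.

```shell
# Write the elasticsearch.yml template (path is illustrative);
# the %...% placeholders are substituted at boot time by the startup script.
cat > /tmp/elasticsearch.yml.template <<'EOF'
cluster.name: LinuxAcademy
bootstrap.memory_lock: true
path.data: /mnt/elasticsearch/data
network.host: [_site_, _local_]
cluster.routing.allocation.awareness.attributes: zone
cluster.routing.allocation.awareness.force.zone.values: %AVAILABLE_ZONES%
node.zone: %ATTR_ZONE%
discovery.zen.ping.unicast.hosts: [%MASTER_NODES%]
discovery.zen.minimum_master_nodes: 2
EOF
# quick sanity check: the three placeholders are present
grep -c '%' /tmp/elasticsearch.yml.template
```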

Auto Configure at Boot Time

Now we have a template that we can use whenever a node starts (for example, in response to an auto scaling event), so we can write a script that obtains the data needed and replaces the placeholders. You will need to create a cron job that runs it only once, at boot time:

crontab -e

@reboot <absolute_path>/<script_name>

Please note that this is just an example script and not robust enough for use in most production environments.

First, we need to get a list of Availability Zones available to us (they change between AWS accounts):

# obtain zones

zones=`aws ec2 describe-availability-zones --region us-east-1 --output text | awk '{print $4}'`

# transform to comma-separated list

zones=`echo $zones | sed -e 's/ /,/g'`

echo "Known zones ${zones}"
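To see what the transformation does, you can run it on a sample string (the zones below are made up):

```shell
# sample output of the describe-availability-zones query (made-up zones)
zones="us-east-1a us-east-1b us-east-1d"
# same sed as above: spaces become commas
zones=`echo $zones | sed -e 's/ /,/g'`
echo "Known zones ${zones}"   # Known zones us-east-1a,us-east-1b,us-east-1d
```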

The string obtained is going to replace the %AVAILABLE_ZONES% placeholder. For example:

# my_template is the template created before
# transformed_template is the output, after replacing the placeholder
# I won't repeat this step for similar replacements; you get the idea
sed -e "s/%AVAILABLE_ZONES%/${zones}/g" my_template > transformed_template.yml

Similarly, we will replace %ATTR_ZONE%; to get the zone in which the node is running, do:

zone=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`

You can obtain the list of master nodes (to replace %MASTER_NODES%) in several ways. One way is to tag your EC2 instances (you can have the Auto Scaling group tag them) and then get the IPs using the AWS CLI. For example, say all your master-eligible nodes are tagged with ClusterRole=master. In that case, you could do:

# filter "master" nodes
f_master="Name=tag:ClusterRole,Values=master"
# filter "running" nodes
f_running="Name=instance-state-name,Values=running"
# get private ip addresses
ip_addresses=`aws ec2 describe-instances --region us-east-1 --filters "$f_master" "$f_running" --output text --query "Reservations[].Instances[].PrivateIpAddress"`
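The query returns the addresses separated by whitespace, so before substituting %MASTER_NODES% you still need to turn them into a comma-separated list (the IPs below are made up):

```shell
# sample output of the describe-instances query (made-up private IPs)
ip_addresses="10.0.1.5 10.0.2.7 10.0.3.9"
# join with commas so the list can replace %MASTER_NODES% in the template
master_nodes=`echo $ip_addresses | sed -e 's/ /,/g'`
echo "Master-eligible nodes: ${master_nodes}"
```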

Alternatively, to provide a list of master-eligible node IPs, you might find it easier to use the DNS name of your Elastic Load Balancer.

I actually recommend this alternative in a production setting, so you don’t depend on a list of IPs that may become invalid over time (and fail).

We also want to make sure Elasticsearch gets half of the total available RAM, so we need to set ES_HEAP_SIZE in /etc/default/elasticsearch.

I prefer to do it dynamically, since the available RAM changes depending on the EC2 instance type. For example, you could do:

# obtain total ram available (in GB)
total_ram=`free -g | awk 'FNR > 1 && FNR < 3 {print $2}'`
# if it's an odd value, add one
if [ $((total_ram % 2)) -ne 0 ]; then
    total_ram=$((total_ram + 1))
fi
# divide by two
half_ram=$(($total_ram / 2))

You can replace this value in /etc/default/elasticsearch using sed (or whatever suits you).
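For example, assuming the stock /etc/default/elasticsearch ships with a commented-out #ES_HEAP_SIZE line (an assumption; adjust the pattern to your file), the sed replacement could look like this. The sketch works on a toy copy of the file:

```shell
# half_ram is computed earlier in the script; hard-coded here for the example
half_ram=4
# work on a toy copy of /etc/default/elasticsearch; the commented-out
# #ES_HEAP_SIZE line is an assumption about the stock file
printf '#ES_HEAP_SIZE=2g\n' > /tmp/elasticsearch.default
# uncomment the line (if commented) and set the heap size to half the RAM
sed -i -e "s/^#\{0,1\}ES_HEAP_SIZE=.*/ES_HEAP_SIZE=${half_ram}g/" /tmp/elasticsearch.default
cat /tmp/elasticsearch.default   # ES_HEAP_SIZE=4g
```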

You also want to be sure some other memory settings in /etc/default/elasticsearch are set appropriately, in particular:

MAX_LOCKED_MEMORY=unlimited
Now add a line to restart Elasticsearch to your script, and you are done. You can stop the EC2 instance you have been using so far and create an image from it (AMI).

NOTE: Make sure that the EBS volume is empty (no indices) when you create the AMI, as having it prepopulated may cause the cluster to fail to allocate shards when it starts again.

AWS – Final Plumbing

With a working AMI, you can now create a launch configuration and an Auto Scaling group in AWS. You need a launch configuration to tell AWS which AMI and EC2 instance type to use.

I recommend you put your EC2 instances in private subnets, so make sure you have at least two private subnets in two different AZs. You’ll also need to attach a NAT Gateway to the subnets, since our EC2 instances need to be able to reach the internet (they call the AWS APIs through the AWS CLI).

If you followed the trick above (tagging EC2 instances), remember to also tell your Auto Scaling group to tag your instances. You may configure a scaling policy or leave the group at a fixed size of three (sort of self-healing). This is up to you.

Now create an internal load balancer (ELB) and have it serve traffic to the Auto Scaling group.

Every node in the cluster is a data node, so you can effectively load balance the cluster this way.

You can also create a security group that only allows access to your cluster from the ELB, and a second group that only allows access to the ELB from other services in your cloud. In any case, you definitely don’t want to leave your cluster public.


Monitoring is beyond the scope of this guide, but I recommend you install the Marvel plugin (with Kibana). You need a license to run it, but you can simply request a basic (free) license if you don’t need the full package. The basic plugin gives you fundamental metrics that are enough for most use cases.

By default, AWS’s service gives you CloudWatch metrics on cluster state (red, yellow, green).

This is definitely something you need if you want to receive alerts. You’ll find it easy enough to reproduce this behaviour by creating a cron job that polls the cluster state (_cat/health).
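A sketch of such a poller; the parsing assumes the default _cat/health column order (epoch, timestamp, cluster, status, ...), the sample line is made up, and the alerting action is left as a stub:

```shell
# in the real cron job you would fetch the line with:
#   health_line=`curl -s "localhost:9200/_cat/health"`
# here we use a made-up sample line to show the parsing
health_line="1482000000 19:20:00 mon101 yellow 3 3 12 6 0 0 0 0 - 50.0%"
# status is the fourth column of _cat/health
status=`echo $health_line | awk '{print $4}'`
if [ "$status" != "green" ]; then
    echo "ALERT: cluster status is ${status}"   # plug in your alerting (email, SNS, ...)
fi
```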

A Word of Warning on Auto Scaling

You may expect Auto Scaling to do some magic and infinitely scale your cluster so you never have to think about it. Unfortunately, Elasticsearch doesn’t work like that. There is a hard limit to scalability, determined by the number of primary shards. As a rule of thumb, you can scale writes by adding more nodes as long as there are shards in excess of the number of nodes.

So, if you have four shards and three nodes, you can scale writes up to four nodes (a fifth node would not add write capacity). The number of primary shards cannot be changed after index creation, so this is a value to choose carefully. Reads, on the other hand, can always be scaled: no matter how many primary shards you have, you can add more replicas (read-only copies) as you increase the number of nodes.


You should be convinced by now that setting up your own cluster is not overly difficult, and that it brings advantages that may justify the choice over the “safer” click-a-button-start-the-cluster alternative. There are other important choices to make that are independent of whether you use the AWS Elasticsearch service or do it yourself, and that can affect cluster performance, scalability, and reliability: how many shards, how many replicas, how many nodes, settings fine-tuning, choosing a recovery strategy, dedicated master nodes, and so on. These haven’t been covered in this guide.
