Before diving into a new technology, you probably want to try it out on your local setup first. But installation can be time-consuming and tedious enough to keep you from ever getting started, and the ELK Stack is a frequent offender. Typical experiments include checking the look and feel of ELK, seeing how it reports stats about your system, watching how well it parses your system logs into its dashboards, and testing your application’s performance locally before taking it to production.

In this how-to guide, we’ll show you just how quickly you can start running an ELK Stack on your system using Docker Compose, which allows you to deploy the whole stack with a single command. We’ll also explore how to secure your ELK Stack with the X-Pack and ReadonlyREST plugins, giving you an idea of how to take the ELK Stack to your production environment.

A Quick Introduction to ELK

ELK consists of three open-source products: Elasticsearch, Logstash, and Kibana. If you think this combination is easy and quick to set up, think again. On paper it looks simple, but in practice the installation process holds plenty of potential pitfalls.

First off, it’s recommended to use the same version of Elasticsearch, Logstash, Kibana, and Beats to avoid running into compatibility issues. Second, most configuration changes and service restarts require root access, which you may not have. Third, even if you clear these two hurdles and get the ELK Stack installed, you might still struggle with the query language used in ELK’s configuration files, especially in Logstash, and will need to educate yourself on it before beginning.

We wrote this guide to help you with such challenges. So, let’s begin.

Deploying ELK Stack with Docker Compose

To install Docker on your system, follow the official Docker installation guide.

As mentioned earlier, we’re using Docker Compose to install the ELK Stack, so it’s a good idea to review the Docker Compose prerequisites, which depend on your operating system.

We’re also using a custom docker-compose_1.yml file.
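If you don’t have such a file handy, here is a minimal sketch of a single-node Compose file for this purpose. The service names, the network name, and the 7.5.2 image tags mirror the names used throughout this guide; the exact layout is an assumption, so adapt it to your setup:

```yaml
version: "3.5"

services:
  ror_elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    container_name: ror_elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false   # security is enabled later in this guide
    ports:
      - "9200:9200"
    networks:
      - ror_elk_Network

  ror_logstash:
    image: docker.elastic.co/logstash/logstash:7.5.2
    container_name: ror_logstash
    networks:
      - ror_elk_Network

  ror_kibana:
    image: docker.elastic.co/kibana/kibana:7.5.2
    container_name: ror_kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://ror_elasticsearch:9200
    ports:
      - "5601:5601"
    networks:
      - ror_elk_Network

networks:
  ror_elk_Network:
    name: ror_elk_Network   # fixed name so other containers can join it later
    driver: bridge
```

The explicit network name matters: without it, Compose prefixes the network with the project name, and the docker run commands later in this guide would not find it.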

To start the ELK Stack, run the following command. In the command, “-f” specifies an alternate Compose file, and “-d” runs the containers in the background (detached mode).

docker-compose -f docker-compose_1.yml up -d
Screenshot A


Note: We haven’t used the official ELK Docker Compose file because, for this example, we don’t need replicas (multiple instances of a container), volumes (a shared directory on the host), and so on in our setup.

How the Nginx Container Interacts with ELK Stack Containers

If you want to forward logs to ELK Stack containers on a host from a Docker container, the containers need to be linked. Here’s how to create a communication channel between the Nginx container and ELK Stack containers.

We have created a common network (ror_elk_Network) for all Docker containers in our Docker Compose file. This is because we tested the whole setup on a Mac, and the host networking driver is not supported on Docker Desktop for Mac. Check out Docker’s “Use host networking” page to learn more.

Back to our example. Because all the containers are launched inside this common network, they receive IP addresses from the same subnet. So, if you want your Nginx container to interact with your ELK Stack, you have to start it in the same network (ror_elk_Network). To do this, first pull an Nginx image from Docker Hub, as shown below:

docker pull nginx

Then run the following command to start the Nginx container and join it to the specified network (ror_elk_Network):

docker run -p 82:82 -h ror_nginx -it -d --network=ror_elk_Network --name ror_nginx nginx

Next, go inside the Nginx container and try pinging the Elasticsearch container. The ping should succeed in both directions, and you can also confirm that both containers sit in the same subnet.
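If you’re unsure of the exact commands, a check along these lines (run from the host) should do; container names follow this guide’s setup, and we assume ping is available in the images (install iputils-ping inside a container if it isn’t):

```shell
# Open a shell inside the Nginx container and ping Elasticsearch by name
# (user-defined bridge networks provide DNS between containers):
docker exec -it ror_nginx ping -c 2 ror_elasticsearch

# Ping back from the Elasticsearch container:
docker exec -it ror_elasticsearch ping -c 2 ror_nginx

# Confirm both containers got addresses from the same subnet:
docker network inspect ror_elk_Network
```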

Screenshot B


Screenshot C


Log Forwarding Using Filebeat

To forward the Nginx logs to the ELK Stack, we will use Filebeat, a log-shipping agent that collects logs from the host where it is installed. We chose it because it can send the parsed logs directly to Elasticsearch for indexing.

First, install Filebeat on the Nginx container. Then configure it to ship the logs to Elasticsearch. To do this, follow these steps:

Screenshot D


  • Change the Nginx port in /etc/nginx/conf.d/default.conf to 82, as we have exposed port 82 for our Nginx container.
  • Edit the Filebeat configuration file according to your needs. Within the Nginx container, open the file /etc/filebeat/filebeat.yml, and change the below entries:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log

setup.kibana:
  host: "ror_kibana:5601"

output.elasticsearch:
  hosts: ["ror_elasticsearch:9200"]


Note: You can change the Kibana and Elasticsearch hostname/IPs according to your setup.

  • Now, load the Kibana dashboards and start Filebeat. To do this, run the following commands:
    • filebeat setup
    • service filebeat start
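Filebeat also ships with built-in self-checks. Assuming you’re inside the Nginx container where Filebeat is installed, these are handy to run before and after starting the service:

```shell
# Validate the syntax of /etc/filebeat/filebeat.yml
filebeat test config

# Verify Filebeat can reach the configured Elasticsearch output
filebeat test output
```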

Screenshot E shows that Filebeat has been started successfully.

Screenshot E


Viewing the Parsed Logs in Kibana Dashboard

Now that your Filebeat is up and running, check if you can view the parsed logs in your Kibana dashboard. To do this, follow these steps:

  • Open your browser, type in localhost:5601, and hit “enter.” You will see something like Screenshot F, the homepage of the Kibana UI.
Screenshot F


  • Click on the Discover tab, located on the left side of the menu. You will see the Nginx logs collected by your setup.
Screenshot G


  • You can also apply filters to the datasets to view a specific log type. In our case, we added a filter to see only Nginx’s error.log.
Screenshot H


Secure Your ELK Stack with Either X-Pack or ReadonlyREST

Protection against hacks and phishing attempts, as well as data encryption, is a key consideration for any organization running its services on the internet. You can restrict and secure your ELK Stack in two ways: Elasticsearch’s X-Pack and the ReadonlyREST Free plugin.


X-Pack

The X-Pack security module is already included in recent versions of Elasticsearch and Kibana. But if you are using ELK’s basic license, you must enable it in the configuration files of Elasticsearch and Kibana.
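The change itself is small. As an illustrative sketch (service name as used elsewhere in this guide), the relevant lines in the updated Compose file look something like:

```yaml
services:
  ror_elasticsearch:
    environment:
      # Enable the X-Pack security module (it was disabled before)
      - xpack.security.enabled=true
```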

  • Since we disabled X-Pack in our Docker Compose file, we need to enable it for Elasticsearch and add it in Kibana. Here is an updated Docker Compose file.

  • Stop your existing running setup with this command:
docker-compose -f docker-compose_1.yml down

Now, using the following command, create a new ELK setup with the updated Docker Compose file:


docker-compose -f docker-compose_2.yml up -d
Screenshot I


  • Now that your ELK Stack is up and running, create a user (RoRUser) and password (readonlyrest) for Elasticsearch. Run the below command from the Elasticsearch home directory:

bin/elasticsearch-users useradd RoRUser -r superuser -p readonlyrest
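As a quick, optional check, you can confirm the new user works before touching Kibana by authenticating against Elasticsearch’s security API with curl; hostname and port follow this guide’s setup:

```shell
# Should return a JSON document describing the RoRUser account
curl -u RoRUser:readonlyrest http://localhost:9200/_security/_authenticate
```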

  • Add the login details we created above to kibana.yml, and restart the Kibana container:
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://ror_elasticsearch:9200" ]
elasticsearch.username: "RoRUser"
elasticsearch.password: "readonlyrest"
xpack.monitoring.ui.container.elasticsearch.enabled: true
  • Once your Kibana is up and running, open your browser and go to localhost:5601. The Kibana UI will prompt you for login details.
Screenshot J


Screenshot K


ReadonlyREST Free Plugin

ReadonlyREST is the only other security plugin listed on Elastic’s official website, and in this section you’ll learn how to get started with it.

First, download the ReadonlyREST Free plugin. In the Select Product drop-down menu, select “Free Elasticsearch plugin,” then select your Elasticsearch version and enter your email address. A download link containing additional installation instructions will be sent to you.

Screenshot L


  • As we are running our ELK Stack in Docker containers, we need to copy the downloaded ZIP file (Step 1 in Screenshot L) into the Elasticsearch container. To do this, run the following command:
docker cp /Users/put/Downloads/ ror_elasticsearch:/tmp/
  • Now, navigate to the Elasticsearch home directory, and install the ReadonlyREST Free plugin (Step 3 in Screenshot L). See Screenshot M (below) for the resulting output.
Screenshot M
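The exact install command is shown in your emailed instructions (Step 3 in Screenshot L); it generally takes the following shape, where the ZIP file name is a placeholder for whatever you downloaded:

```shell
# From the Elasticsearch home directory (/usr/share/elasticsearch in the
# official image); the file:// path points at the ZIP copied into /tmp
bin/elasticsearch-plugin install file:///tmp/<readonlyrest-zip-name>.zip
```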


  • Now that the ReadonlyREST plugin is installed, create a readonlyrest.yml file in /usr/share/elasticsearch/config so that all your YAML files are in a single place. Add a basic level of authentication to this file, as shown below:
readonlyrest:
  enable: true
  access_control_rules:
  - name: "Basic Authentication to get started with"
    indices: ["*"]
    type: allow
    auth_key: RoRUser:readonlyrest
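For context on what auth_key does: ReadonlyREST matches it against the standard HTTP Basic Authorization header, i.e. “Basic ” followed by base64("user:password"). You can reproduce the encoded form of our credentials like this:

```shell
# Encode the user:password pair the way an HTTP client would
printf 'RoRUser:readonlyrest' | base64
```

A client such as curl -u RoRUser:readonlyrest sends exactly this encoded value in its Authorization header.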

Note: Make sure you disable the X-Pack security flag in your elasticsearch.yml (xpack.security.enabled: false), as the ReadonlyREST plugin and the X-Pack module can’t run simultaneously. We already disabled it in our Docker Compose file.

  • Don’t forget to add the auth_key credentials to filebeat.yml, as well as to kibana.yml. Because we set authentication at the Elasticsearch level, Filebeat needs the login details to ship the logs to Elasticsearch for indexing, and Kibana needs them to read the indexed logs and display them in the UI. See the below configurations for both yml files.


Filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log

setup.kibana:
  host: "ror_kibana:5601"

output.elasticsearch:
  hosts: ["ror_elasticsearch:9200"]
  username: "RoRUser"
  password: "readonlyrest"

Kibana.yml:
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://ror_elasticsearch:9200" ]
elasticsearch.username: "RoRUser"
elasticsearch.password: "readonlyrest"
  • Lastly, restart Elasticsearch and open the Kibana UI in your browser. You will be prompted for authentication.
Screenshot N


Once you enter your username and password, you will be able to log in to the Kibana UI.

Screenshot O


Now, if you want a nice Kibana UI like the one in X-Pack, you have to use the ReadonlyREST Free Kibana plugin.

So, again, visit the ReadonlyREST download page. In the Select Product drop-down menu, select “Free Kibana plugin,” then select your Elasticsearch version (7.5.2 in our case), and enter your email address.

A download link will be sent to you with further installation instructions, as shown below in Screenshot P.

Screenshot P


  • Go to the Kibana home directory inside your ror_kibana container, and install the Kibana plugin. See the corresponding output in Screenshot Q.
Screenshot Q
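As with the Elasticsearch plugin, the emailed instructions give the exact command; it generally looks like this, with the ZIP name a placeholder for your download:

```shell
# Run from the Kibana home directory (/usr/share/kibana in the official image)
bin/kibana-plugin install file:///tmp/<readonlyrest-kibana-zip-name>.zip
```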


  • Finally, restart your Kibana service, and open the URL localhost:5601 in your browser. There, you will see a nice Kibana UI asking for authentication (Screenshots R and S).
Screenshot R


Screenshot S


Choosing the Tool That’s Best for You

While both ReadonlyREST’s Free plugin and X-Pack will help you secure your ELK Stack, deciding which one is best for you depends on your specific use cases.

ReadonlyREST is best when:

  • You’re looking for a free security plugin that has features like IP filtering, LDAP, and Active Directory authentication.
  • Your indexes require field-level and document-level security.
  • You’re running a production ELK setup with multiple clusters of nodes and want a pricing model that won’t change, regardless of how many nodes you’re running.

X-Pack is better when:

  • You want to stick to software coming from Elastic.
  • You are more comfortable using role-based access control (RBAC).
  • You need to forward logs to another Elastic Stack for auditing purposes.

Here’s a quick comparison of the two options.

|                        | ReadonlyREST                                                          | X-Pack                                                       |
|------------------------|-----------------------------------------------------------------------|--------------------------------------------------------------|
| Pricing model          | Consistent, regardless of the number of nodes and clusters             | Changes with an increase in nodes and clusters               |
| FLS/DLS                | Free plugin supports field/document-level index security               | Platinum subscription required for field/document-level security |
| IP filtering           | Free version supports IP filtering                                     | Gold subscription required for IP filtering                  |
| AD/LDAP authentication | AD/LDAP authentication supported in the Free plugin                    | AD/LDAP authentication not included in the Free version      |
| Audit logging          | Audit logging supported in the Free plugin                             | Gold subscription required for audit logging                 |


While there are many ways to set up an ELK Stack, Docker Compose lets you do it quickly. However, once the ELK Stack is up and running, you still need to secure it before it can be used in production.

If you’re looking for the quickest way to download, deploy, and secure your ELK Stack, check out the demo from ReadonlyREST, which lets you easily deploy an ELK Stack on a single Docker container and get a taste of the ReadonlyREST security plugin, all in just a few minutes. It’s the quickest demo you’ll ever run on a laptop.