ReadonlyREST is a lightweight Elasticsearch plugin that adds encryption, authentication, authorization and access control capabilities to Elasticsearch's embedded REST API.
The core of this plugin is an ACL engine that checks each incoming request through a sequence of rules, a bit like a firewall.
There are a dozen rules that can be grouped in sequences of blocks to form a powerful representation of a logic chain.



readonlyrest:

    access_control_rules:

    - name: "Block 1 - Allowing anything from localhost"
      type: allow
      hosts: [127.0.0.1]

    - name: "Block 2 - Other hosts can only read certain indices"
      type: allow
      actions: ["indices:data/read/*"]
      indices: ["logstash-*"] # aliases are taken into account!

An example of an Access Control List (ACL) made of two blocks.


An SSL encrypted connection is a prerequisite for secure exchange of credentials and data over the network.
ReadonlyREST can be configured to require that all REST requests come through HTTPS.

Let's Encrypt certificates work just fine, once you create a JKS keystore.

# Uncomment if you also installed the X-Pack plugin
#xpack.security.enabled: false

http.type: ssl_netty4

readonlyrest:
    ssl:
      enable: true
      keystore_file: "/elasticsearch/plugins/readonlyrest/keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest

Blocks of rules

Every block must have at least the name field.

  • name will appear in logs, so keep it short and distinctive.
  • type can be either allow or forbid, and defaults to allow.
    - name: "Block 1 - Allowing anything from localhost"
      type: allow
      # In real life now you should increase the specificity by adding rules here (otherwise this block will allow all requests!)

Example: the simplest possible allow block.

IMPORTANT: If no blocks are configured, ReadonlyREST rejects all requests.


ReadonlyREST access control rules let you make decisions at three levels:

  • Network level
  • HTTP level
  • Elasticsearch level

Please refrain from using HTTP-level rules to protect certain indices or to limit what people can do to an index.
The control you get at this level is very coarse, especially because the Elasticsearch REST API does not always respect RESTful principles.
This makes HTTP a poor abstraction level for writing Elasticsearch ACLs altogether.

The only clean and exhaustive way to implement access control is to reason about requests AFTER Elasticsearch has parsed them.
Only then are the affected indices and the action known for sure. See Elasticsearch level rules.

Network level

These are the most basic rules. It is possible to allow/forbid requests originating from a
list of IP addresses, host names or IP networks (in slash notation).

| Rule and example argument | Description |
|---|---|
| `hosts: [localhost]` | A list of origin IP addresses or subnets |

HTTP Level

| Rule and example argument | Description |
|---|---|
| `accept_x-forwarded-for_header: false` | ⚠️ DEPRECATED modifier for the `hosts` rule: if the origin IP won't match, check the `X-Forwarded-For` header |
| `x_forwarded_for: ["*"]` | Exactly like `hosts`, but looks inside the `X-Forwarded-For` header only (useful when requests come through a load balancer like AWS ELB) |
| `methods: [GET, DELETE]` | Match the HTTP method |
| `uri_re: ^/secret-index/.*` | ☠️ HACKY: a regular expression to match the request URI. Hint: superseded by `indices`! |
| `maxBodyLength: 0` | ⚠️ DEPRECATED identify a maximum length for the HTTP request body |

The x_forwarded_for rule is the only HTTP-level rule that is still recommendable in production. This is because when Elasticsearch is behind a load balancer or reverse proxy, it provides an equivalent of the hosts rule (restricting access by origin IP or network).

Match requests from LB

This is a nice tip if your Elasticsearch is behind a load balancer. If you want to match all the requests that come through the load balancer, use `x_forwarded_for: ["*"]`.
This will match all requests carrying a valid IP address as the value of the X-Forwarded-For header.
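As a sketch (the block name and rule combination here are illustrative, not taken from the original examples), such a setup could look like:

```yaml
    access_control_rules:

    - name: "Only requests coming through the load balancer"
      type: allow
      # "*" matches any valid IP found in the X-Forwarded-For header
      x_forwarded_for: ["*"]
```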

Elasticsearch level

| Rule and example argument | Description |
|---|---|
| `indices: ["sales", "logstash-*"]` | Match if the request involves indices whose name is "sales", starts with "logstash-", or both. |
| `actions: ["indices:data/read/*"]` | Match if the request action starts with "indices:data/read/" |
| `kibana_access: ro` | Enables the minimum set of actions necessary for browsers to use Kibana. If configured as `ro`, the browser has a read-only view on Kibana dashboards and settings and all other indices. If configured as `rw`, some more actions are allowed towards the `.kibana` index only, so Kibana dashboards and settings can be modified. This rule is often used with the `indices` rule, to limit the data a user is able to see represented on the dashboards. |
| `kibana_index: .kibana-user1` | OPTIONAL, defaults to `.kibana`. Specify in what index we expect Kibana to attempt to read/write its settings (use this together with the `kibana.index` setting in kibana.yml). |
| `indices_rewrite: ["^logstash.*", "$1_@{user}"]` | (experimental) If one or more indices in the request match the regex in the first element of the array, replace each matching index name with the given replacement. The `@{user}` string is a placeholder for whatever appears as the user part of the request's credentials. Likewise, `$1` is a placeholder for the matched index name. I.e. if I logged in as `sales` and requested the `logstash-2017.05.02` index, I'd be given `logstash-2017.05.02_sales`. |

Indices rule

If a read request asks for some indices the user has permission for and some indices that they do NOT have permission for, the request is rewritten to involve only the permitted subset of indices.
This behaviour is very useful in Kibana: different users can see the same dashboards with data from only their own indices.

If a write request wants to write to indices the user doesn't have permission for, the write request is rejected.
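For instance (a hypothetical block; the index names are illustrative), with a rule like the following, a read request for `logstash-2017.05.02,secret-index` would be narrowed down to `logstash-2017.05.02` only, while a write request touching `secret-index` would be rejected:

```yaml
    access_control_rules:

    - name: "Read-only access to logstash data"
      type: allow
      actions: ["indices:data/read/*"]
      indices: ["logstash-*"]
```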

Action rule

Each request carries exactly one action; the set of valid action strings is fixed for a given Elasticsearch version (5.1.2 at the time of writing).

Local ReadonlyREST users are authenticated via HTTP Basic Auth. This authentication method is secure only if SSL is enabled.

| Rule and example argument | Description |
|---|---|
| `auth_key: sales:p455wd` | Accepts HTTP Basic Auth. Configure this value in clear text. Clients need to provide a header like `Authorization: Basic c2FsZXM6cDQ1NXdk`, where "c2FsZXM6cDQ1NXdk" is Base64 for "sales:p455wd". |
| `auth_key_sha1: 6cef1...9145a0` | ⚠️ DEPRECATED Accepts HTTP Basic Auth. The value is a string like `username:password` hashed in SHA1. Clients need to provide the usual Authorization header. |
| `auth_key_sha256: 280ac6f...94bf9` | Accepts HTTP Basic Auth. The value is a string like `username:password` hashed in SHA256. Clients need to provide the usual Authorization header. |
| `proxy_auth: "*"` | Delegated auth. Trust that a reverse proxy has taken care of authenticating the request and has written the resolved user name into the `X-Forwarded-User` header. The value "*" in the example lets this rule match any value of the `X-Forwarded-User` header. If you are using this technique for authentication in Kibana, don't forget to add this snippet to conf/kibana.yml: `elasticsearch.requestHeadersWhitelist: ['authorization', 'x-forwarded-user']`, so Kibana forwards the right header to Elasticsearch. |
| `api_keys: [123456, abcdefg]` | A list of API keys expected in the header `X-Api-Key` |
| `groups: ["group1", "group2"]` | Limit access to members of specific user groups. See User management. |
| `session_max_idle: 1h` | Browser session timeout (via cookie). Example values: 1w (one week), 10s (ten seconds), 7d (seven days), etc. NB: not available for Elasticsearch 2.x |
| `ldap_auth` | See the LDAP section below |
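As a quick sketch of what clients must send for auth_key-style rules, the Basic Auth header value is just the Base64 encoding of `user:password` (values taken from the example above):

```shell
# Encode the credentials; -n prevents a trailing newline from being encoded
echo -n 'sales:p455wd' | base64
# → c2FsZXM6cDQ1NXdk
# Then send it along, e.g.:
# curl -H 'Authorization: Basic c2FsZXM6cDQ1NXdk' https://your-es-host:9200/
```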

Ancillary rules

| Rule and example argument | Description |
|---|---|
| `verbosity: error` | Don't spam the Elasticsearch log file with successful requests (that match this block). Defaults to `info`. |
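For example (a hypothetical block with made-up credentials), a monitoring agent that polls the cluster constantly can be kept out of the logs like this:

```yaml
    access_control_rules:

    - name: "::MONITORING:: too chatty to log"
      auth_key: monitor:m0n1t0r        # hypothetical credentials
      actions: ["cluster:monitor/*"]
      verbosity: error                 # successful matches won't be logged
```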

Audit & Troubleshooting

If in doubt about why some requests get forbidden, set the Elasticsearch log level to DEBUG and grep the logs for ACT: .
This will show you the whole request context (including the action and indices fields) of the blocked requests. You can then tweak your ACL blocks to include that action.

Logs are good for auditing the activity on the REST API. You can use the $ES_HOME/config/logging.yml (Elasticsearch 2.x) or $ES_HOME/config/log4j2.properties (Elasticsearch 5.x) file.

Here is a log4j2.properties snippet for ES 5.x that logs all the received requests as a new line in a separate file:

# ReadonlyREST plugin: separate access log file definition
appender.access_log_rolling.type = RollingFile
appender.access_log_rolling.name = access_log_rolling
appender.access_log_rolling.fileName = ${sys:es.logs}_access.log
appender.access_log_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.access_log_rolling.layout.type = PatternLayout
appender.access_log_rolling.filePattern = ${sys:es.logs}_access-%d{yyyy-MM-dd}.log
appender.access_log_rolling.policies.type = Policies
appender.access_log_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.access_log_rolling.policies.time.interval = 1
appender.access_log_rolling.policies.time.modulate = true

logger.access_log_rolling.name = org.elasticsearch.plugin.readonlyrest.acl
logger.access_log_rolling.level = info
logger.access_log_rolling.appenderRef.access_log_rolling.ref = access_log_rolling
logger.access_log_rolling.additivity = false

# exclude kibana, beat and logstash users as they generate too much noise
logger.access_log_rolling.filter.regex.type = RegexFilter
logger.access_log_rolling.filter.regex.regex = .*USR:(kibana|beat|logstash),.*
logger.access_log_rolling.filter.regex.onMatch = DENY
logger.access_log_rolling.filter.regex.onMisMatch = ACCEPT

Users and Groups

Sometimes we want to make allow/forbid decisions according to the username associated with an HTTP request. The user identity (username) can be extracted via HTTP Basic Auth (Authorization header) or
delegated to a reverse proxy (see the proxy_auth rule).

The validation of said credentials can be carried out locally with hard-coded credential hashes (see the auth_key_sha256 rule),
via one or more LDAP servers, or we can forward the Authorization header to an external web server and examine the HTTP status code (see external_authentication).

Optionally, we can introduce the notion of groups (think of them as bags of users).
The aim of groups is to write a very specific block once and have it match multiple usernames that satisfy it.

Groups can be declared and associated with users statically in the elasticsearch.yml file.
Alternatively, the groups for a given username can be retrieved from an LDAP server or a custom JSON/XML service.

You can mix and match the techniques to satisfy your requirements. For example, you can configure ReadonlyREST to:

  • Extract the username from X-Forwarded-User
  • Resolve groups associated to said user through a JSON microservice

Another example:

  • Extract the username from Authorization header (HTTP Basic Auth)
  • Validate said username's password via LDAP server
  • Resolve the groups associated with the user from the groups defined in elasticsearch.yml

More examples are shown below together with a sample configuration.

Local users and groups

The groups rule accepts a list of group names.
The rule matches if the resolved username (i.e. via auth_key) is associated with one of the given groups.
In this example, usernames are statically associated with group names.


    access_control_rules:

    - name: Accept requests from users in group team1 on index1
      type: allow  # Optional, defaults to "allow"; omitted from now on.
      groups: ["team1"]
      indices: ["index1"]

    - name: Accept requests from users in group team2 on index2
      groups: ["team2"]
      indices: ["index2"]

    - name: Accept requests from users in groups team1 or team2 on index3
      groups: ["team1", "team2"]
      indices: ["index3"]


    users:

    - username: alice
      auth_key: alice:p455phrase
      groups: ["team1"]

    - username: bob
      auth_key: bob:s3cr37
      groups: ["team2", "team4"]

    - username: claire
      auth_key_sha256: e0bba5fda92dbb0570fd2e729a3c8ed6b1d52b380581f32427a38e396ba28ec6 #claire:p455key
      groups: ["team1", "team5"]

Example: rules are associated with groups (instead of users), and the user-group association is declared separately under users:

Dynamic variables in rules

One of the neatest features of ReadonlyREST is that you can use dynamic variables inside most rule values.
The variables you can currently interpolate into rule values are:

  • @{user} gets replaced with the username of the successfully authenticated user
  • @{xyz} gets replaced with any xyz HTTP header included in the incoming request (useful when reverse proxies handle authentication)

Indices from user name

You can let users authenticate externally, e.g. via LDAP, and use their username string inside the indices rule.


    access_control_rules:

    - name: "Users can see only their logstash indices, i.e. alice can see alice_logstash-20170922"
      ldap_authentication:
        name: "myLDAP"
      indices: ["@{user}_logstash-*"]

    # LDAP connector settings omitted, see the LDAP section below.

Kibana index from headers

Imagine that we delegate authentication to a reverse proxy, so we know that only authenticated users will ever reach Elasticsearch.
We can tell the reverse proxy (i.e. Nginx) to inject a header called x-nginx-user containing the username.


    access_control_rules:

    - name: "Identify a personal kibana index where each user is supposed to save their dashboards"
      kibana_access: rw
      kibana_index: ".kibana_@{x-nginx-user}"

LDAP connector

In this example, users' credentials are validated via LDAP. The groups associated with each validated user are resolved using the same LDAP server.

Simpler: authentication and authorization in one rule


    access_control_rules:

    - name: Accept requests from users in group team1 on index1
      type: allow                                           # Optional, defaults to "allow"; omitted from now on.
      ldap_auth:
        name: "ldap1"                                       # ldap name from the 'ldaps' section below
        groups: ["g1", "g2"]                                # group within 'ou=Groups,dc=example,dc=com'
      indices: ["index1"]

    - name: Accept requests from users in group team2 on index2
      ldap_auth:
        name: "ldap2"
        groups: ["g3"]
        cache_ttl_in_sec: 60
      indices: ["index2"]

    ldaps:

    - name: ldap1
      host: "ldap1.example.com"
      port: 389                                                 # optional, default 389
      ssl_enabled: false                                        # optional, default true
      ssl_trust_all_certs: true                                 # optional, default false
      bind_dn: "cn=admin,dc=example,dc=com"                     # optional, skip for anonymous bind
      bind_password: "password"                                 # optional, skip for anonymous bind
      search_user_base_DN: "ou=People,dc=example,dc=com"
      user_id_attribute: "uid"                                  # optional, default "uid"
      search_groups_base_DN: "ou=Groups,dc=example,dc=com"
      unique_member_attribute: "uniqueMember"                   # optional, default "uniqueMember"
      connection_pool_size: 10                                  # optional, default 30
      connection_timeout_in_sec: 10                             # optional, default 1
      request_timeout_in_sec: 10                                # optional, default 1
      cache_ttl_in_sec: 60                                      # optional, default 0 - cache disabled
    - name: ldap2
      host: "ldap2.example2.com"
      port: 636
      search_user_base_DN: "ou=People,dc=example2,dc=com"
      search_groups_base_DN: "ou=Groups,dc=example2,dc=com"

Advanced: authentication and authorization in separate rules

readonlyrest:
    enable: true
    response_if_req_forbidden: Forbidden by ReadonlyREST ES plugin

    access_control_rules:

    - name: Accept requests to index1 from users with valid LDAP credentials, belonging to LDAP group 'team1'
      ldap_authentication: "ldap1"
      ldap_authorization:
        name: "ldap1"                                       # ldap name from the 'ldaps' section
        groups: ["g1", "g2"]                                # group within 'ou=Groups,dc=example,dc=com'
      indices: ["index1"]
    - name: Accept requests to index2 from users with valid LDAP credentials, belonging to LDAP group 'team2'
      ldap_authentication:
        name: "ldap2"
        cache_ttl_in_sec: 60
      ldap_authorization:
        name: "ldap2"
        groups: ["g3"]
        cache_ttl_in_sec: 60
      indices: ["index2"]

    ldaps:

    - name: ldap1
      host: "ldap1.example.com"
      port: 389                                                 # default 389
      ssl_enabled: false                                        # default true
      ssl_trust_all_certs: true                                 # default false
      bind_dn: "cn=admin,dc=example,dc=com"                     # skip for anonymous bind
      bind_password: "password"                                 # skip for anonymous bind
      search_user_base_DN: "ou=People,dc=example,dc=com"
      user_id_attribute: "uid"                                  # default "uid"
      search_groups_base_DN: "ou=Groups,dc=example,dc=com"
      unique_member_attribute: "uniqueMember"                   # default "uniqueMember"
      connection_pool_size: 10                                  # default 30
      connection_timeout_in_sec: 10                             # default 1
      request_timeout_in_sec: 10                                # default 1
      cache_ttl_in_sec: 60                                      # default 0 - cache disabled
    - name: ldap2
      host: "ldap2.example2.com"
      port: 636
      search_user_base_DN: "ou=People,dc=example2,dc=com"
      search_groups_base_DN: "ou=Groups,dc=example2,dc=com"

LDAP configuration requirements:

  • search_user_base_DN should refer to the base Distinguished Name of the users to be authenticated.
  • search_groups_base_DN should refer to the base Distinguished Name of the groups to which these users may belong.
  • By default, users in search_user_base_DN should contain a uid LDAP attribute referring to a unique ID for the user within the base DN. An alternative attribute name can be specified via the optional user_id_attribute configuration item.
  • By default, groups in search_groups_base_DN should contain a uniqueMember LDAP attribute referring to the full DNs of the users that belong to the group. (There may be any number of occurrences of this attribute within a particular group, as any number of users may belong to the group.) An alternative attribute name can be specified via the optional unique_member_attribute configuration item.

(An example OpenLDAP configuration file can be found in our tests: /src/test/resources/test_example.ldif)
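Concretely, entries satisfying these defaults might look like the following LDIF sketch (the DNs match the example configuration above; attribute values are illustrative):

```ldif
# A user under search_user_base_DN, identified by the default "uid" attribute
dn: uid=alice,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
cn: Alice
sn: Example
uid: alice

# A group under search_groups_base_DN, listing its members via "uniqueMember"
dn: cn=g1,ou=Groups,dc=example,dc=com
objectClass: groupOfUniqueNames
cn: g1
uniqueMember: uid=alice,ou=People,dc=example,dc=com
```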

Caching can be configured per LDAP client (see ldap1) or per rule (see the "Accept requests from users in group team2 on index2" rule).

External Basic Auth

ReadonlyREST will forward the received Authorization header to a web server of choice and evaluate the returned HTTP status code to verify the provided credentials.
This is useful if you already have a web server with all the credentials configured, and the credentials are passed over the Authorization header.

    access_control_rules:

    - name: "::Tweets::"
      methods: GET
      indices: ["twitter"]
      external_authentication: "ext1"

    - name: "::Facebook posts::"
      methods: GET
      indices: ["facebook"]
      external_authentication:
        service: "ext2"
        cache_ttl_in_sec: 60


    external_authentication_service_configs:

    - name: "ext1"
      authentication_endpoint: "http://external-website1:8080/auth1"
      success_status_code: 200
      cache_ttl_in_sec: 60

    - name: "ext2"
      authentication_endpoint: "http://external-website2:8080/auth2"
      success_status_code: 204
      cache_ttl_in_sec: 60

To define an external authentication service, you should specify:

  • a name for the service (this name is then used as the id in the service attribute of the external_authentication rule)
  • authentication_endpoint (GET request)
  • success_status_code - the HTTP status code denoting successful authentication

A cache TTL can be defined at the service level and/or at the rule level. The example shows both, but you may opt for either.

Custom groups providers

This external authorization connector makes it possible to resolve which groups a user belongs to, using an external JSON or XML service.


    access_control_rules:

    - name: "::Tweets::"
      methods: GET
      indices: ["twitter"]
      proxy_auth:
        proxy_auth_config: "proxy1"
        users: ["*"]
      groups_provider_authorization:
        user_groups_provider: "GroupsService"
        groups: ["group3"]

    - name: "::Facebook posts::"
      methods: GET
      indices: ["facebook"]
      proxy_auth:
        proxy_auth_config: "proxy1"
        users: ["*"]
      groups_provider_authorization:
        user_groups_provider: "GroupsService"
        groups: ["group1"]
        cache_ttl_in_sec: 60


    proxy_auth_configs:

    - name: "proxy1"
      user_id_header: "X-Auth-Token"                           # default X-Forwarded-User


    user_groups_providers:

    - name: GroupsService
      groups_endpoint: "http://localhost:8080/groups"
      auth_token_name: "token"
      auth_token_passed_as: QUERY_PARAM                        # HEADER OR QUERY_PARAM
      response_groups_json_path: "$..groups[?(@.name)].name"   # see: https://github.com/json-path/JsonPath
      cache_ttl_in_sec: 60

In the example above, a user is authenticated by the reverse proxy, and the external service is then asked for that user's groups.
If the groups returned by the service contain any group declared in the groups list, the user is authorized and the rule matches.

To define a user groups provider, you should specify:

  • a name for the service (this name is then used as the id in the user_groups_provider attribute of the groups_provider_authorization rule)
  • groups_endpoint - the URL of the groups endpoint (GET request)
  • auth_token_name - the user identifier will be passed under this name
  • auth_token_passed_as - whether the user identifier is sent as a HEADER or a QUERY_PARAM
  • response_groups_json_path - the response can have any shape, but you have to specify the JSON Path to the list of group names (see the example in the tests)
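For instance, with the JSON Path from the example above ($..groups[?(@.name)].name), a response shaped like this (an illustrative payload, not a mandated schema) resolves to the group names group1 and group3:

```json
{
  "id": "user1",
  "groups": [
    { "name": "group1", "id": 1 },
    { "name": "group3", "id": 3 }
  ]
}
```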

As usual, the cache behaviour can be defined at the service level and/or at the rule level.

GPLv3 License

ReadonlyREST Free (the Elasticsearch plugin) is released under the GPLv3 license. For practical purposes, this kind of software is treated the same as under GPLv2; that is, you can treat ReadonlyREST as you would treat Linux code.
The big difference from Linux is that here you can ask for a commercial license and stop worrying about legal implications.

Here is a practical summary of what dealing with GPLv3 means:


You CAN:

  • Sell or give away an unmodified version of this code (along with its license and attributions).
  • Use a modified version internally at your company without making your changes available under the GPLv3 license.
  • Distribute, for free or commercially, a modified version (partial or total) of this software, provided that the source is contributed back as a pull request to
    the original project or publicly made available under the GPLv3 or a compatible license.


You CANNOT:

  • Sell or give away a modified version of the plugin (or parts of it, or any derived work) without publishing the modified source under a GPLv3-compatible license.
  • Modify the code for a paying client without immediately contributing your changes back to this project's GitHub as a pull request, or alternatively publicly releasing said fork under the GPLv3 or a compatible license.

GPLv3 license FAQ

1. Q: I sell a proprietary software solution that already includes many other OSS components (e.g. Elasticsearch). Can I also bundle ReadonlyREST into it?

A: No, GPLv3 does not allow it. But hey, no problem, just go for the Enterprise subscription.

2. Q: I have a SaaS and we want to use a version of ReadonlyREST (as is, or modified); do I need a commercial license?

A: No, you don't. Go for it! However, if you are using Kibana, I'd have a look at the PRO offer.

3. Q: I'm a consultant and I will charge my customer for modifying this software; they will not sell it as a product or part of their product. Is this allowed?

A: This is fine with GPLv3.

Please don't hesitate to contact me for a re-licensed copy of this source. Your success is what makes this project worthwhile;
don't let legal issues slow you down.

Double-licensing (Commercial)

See commercial license FAQ page for more information.


Installation

  1. Download the binary release of the latest version of ReadonlyREST from the download page
  2. cd to the Elasticsearch home
  3. Install the plugin

Elasticsearch 5.x

 bin/elasticsearch-plugin install file:///download-folder/readonlyrest-1.13.2_es5.1.2.zip

Elasticsearch 2.x

bin/plugin install file:///download-folder/readonlyrest-1.13.2_es5.1.2.zip

(be sure to use the zip file built for your Elasticsearch 2.x version; the file name above is the ES 5.x one)

  4. Edit config/elasticsearch.yml and add your configuration as seen in the examples.

Build from Source

You need to have installed: git, maven, Java 8 JDK, zip. So use apt-get or brew to get them.

  1. Clone the repo:
git clone https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin
  2. cd elasticsearch-readonlyrest-plugin
  3. Launch the build script: bin/build.sh

You should find the plugin's zip files under target/ (Elasticsearch 2.x) or build/releases/ (Elasticsearch 5.x).


A small library of typical use cases.

Secure Logstash

We have a Logstash agent installed somewhere and we want to ship the logs to our Elasticsearch cluster securely.

Elasticsearch side

Step 1
Obtain a private key and a certificate chain and generate a Java keystore in JKS format. You can either use real certificates or, if you're testing locally, create a self-signed certificate.

For convenience, we'll go with a self-signed certificate. But if you deploy this to a real server, use a real one!

keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass readonlyrest -validity 360 -keysize 2048

Step 2
Now copy the keystore.jks inside the plugin directory inside the Elasticsearch home.

cp keystore.jks /elasticsearch/plugins/readonlyrest/

Step 3
Now we need to create some credentials for Logstash to log in with, let's say:

  • user = logstash
  • password = logstash

Step 4
Hash the credentials string logstash:logstash using SHA256.
The simplest way is to paste the string into a SHA256 tool.
You should obtain "280ac6f756a64a80143447c980289e7e4c6918b92588c8095c7c3f049a13fbf9".
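Instead of an online tool, you can compute the hash locally on most systems (sha256sum ships with GNU coreutils; on macOS use shasum -a 256):

```shell
# Hash the credentials; -n prevents a trailing newline from being hashed
echo -n 'logstash:logstash' | sha256sum | cut -d ' ' -f 1
```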

Step 5
Let's add some configuration to our Elasticsearch: edit conf/elasticsearch.yml and append the following lines:

readonlyrest:
    ssl:
      enable: true
      keystore_file: "/elasticsearch/plugins/readonlyrest/keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest

    response_if_req_forbidden: Forbidden by ReadonlyREST ES plugin

    access_control_rules:

    - name: "::LOGSTASH::"
      auth_key_sha256: "280ac6f756a64a80143447c980289e7e4c6918b92588c8095c7c3f049a13fbf9" #logstash:logstash
      actions: ["cluster:monitor/main","indices:admin/types/exists","indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
      indices: ["logstash-*"]

Logstash side

Edit the logstash configuration file and fix the output block as follows:

output {
  elasticsearch {
    ssl => true
    ssl_certificate_verification => false
    hosts => ["YOUR_ELASTICSEARCH_HOST:9200"]
    user => logstash
    password => logstash
  }
}
The ssl_certificate_verification bit is necessary for accepting self-signed SSL certificates.

Secure Metricbeat

Very similarly to Logstash, here is a configuration snippet for the Metricbeat shipping agent (the elasticsearch output section of its configuration).

On the Metricbeat side

output.elasticsearch:
  username: metricbeat
  password: hereyourpasswordformetricbeat
  protocol: https
  hosts: ["xx.xx.xx.xx:9x00"]
  worker: 1
  index: "log_metricbeat-%{+yyyy.MM}"
  template.enabled: false
  template.versions.2x.enabled: false
  ssl.enabled: true
  ssl.certificate_authorities: ["./certs/your-rootca_cert.pem"]
  ssl.certificate: "./certs/your_srv_cert.pem"
  ssl.key: "./certs/your_srv_key.pem"

Of course, if you do not use SSL, disable it.

On the Elasticsearch side

readonlyrest:
    ssl:
      enable: true
      keystore_file: "/elasticsearch/plugins/readonlyrest/keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest

    access_control_rules:

    - name: "metricbeat can write and create its own indices"
      auth_key_sha1: fd2e44724a234234454324253094080986e8fda
      actions: ["indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
      indices: ["metricbeat-*", "log_metricbeat*"]