An Interview with CERN

CERN has been a ReadonlyREST power user since autumn 2016. Ulrich Schwickerath talks about the why and how of their choice.

Hi Ulrich! Who are you and what's your role at CERN?

My name is Ulrich Schwickerath. Although I'm an experimental physicist who did his PhD on Higgs boson searches at the LEP collider, I now work in the CERN IT department and am little involved in physics data analysis these days. One of my various tasks is to provide an easy-to-use, managed Elasticsearch service to interested user communities at CERN.

What does your department do and for whom?

The CERN IT department provides IT infrastructure services for all CERN experiments and users. This includes the accelerators (like the Large Hadron Collider, or LHC), LHC experiments (like ALICE, ATLAS, CMS or LHCb), and non-LHC experiments (like ALPHA, COMPASS or ISOLDE).

Ulrich Schwickerath at HEPiX Spring Meeting, Budapest 2017

When did you start using ReadonlyREST?

We started evaluating ReadonlyREST in autumn 2016. At that point we had an initial prototype of the new service in place, and were looking for a lightweight solution to provide read-only access for Kibana, with the potential to be extended to provide general index-level security. This was a requirement from some of our users, and a prerequisite for managing the storm of different use cases and requirements we received when we started the project.
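A setup like the one Ulrich describes is configured as an ACL in ReadonlyREST's YAML settings. The sketch below is illustrative only: `access_control_rules`, `kibana_access` and `indices` are standard ReadonlyREST settings, but the rule names, credentials and index patterns are hypothetical, not CERN's actual configuration.

```yaml
readonlyrest:
  access_control_rules:
    # Read-only access for Kibana: users can browse dashboards
    # but not modify documents or cluster settings.
    - name: "Kibana read-only"
      type: allow
      auth_key: kibana_ro:changeme        # hypothetical credentials
      kibana_access: ro

    # Index-level isolation on a shared cluster: this user
    # only sees indices matching its own prefix.
    - name: "Team A indices only"
      type: allow
      auth_key: team_a:changeme           # hypothetical credentials
      indices: ["team_a-*"]
```

Rules are evaluated top to bottom, so several small use cases can share one cluster while each is confined to its own index pattern.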

What are your top three reasons why you chose ReadonlyREST?

ReadonlyREST is easy to configure, open source and fulfils all our requirements.

— All our Elasticsearch clusters are using ReadonlyREST. - Ulrich Schwickerath (CERN)

Give us some numbers! How much data flows through ReadonlyREST every day at CERN?

Nowadays, we use ReadonlyREST to provide index-level security on shared Elasticsearch clusters. We have many small use cases which can share the same hardware resources this way. Our largest shared cluster at this time (Aug. 2017) consolidates about 12 different use cases on the same resources at low cost, and we see HTTP access rates of up to 90 Hz across all clusters.

What is the origin of this data?

This depends on the use case and the users. A large fraction comes from log files, like service or system logs. On top of that, another source is monitoring data, like service status or usage.

— Our largest shared cluster at this time (Nov 2017) consolidates about 17 different use cases on the same hardware, lowering the total cost

Can you give us a list of the departments/experiments whose data flows through ReadonlyREST?

All our Elasticsearch clusters are using ReadonlyREST. Customers include the four biggest LHC experiments, the IT department and the accelerators and technology sector.

What do you expect from the future of the ReadonlyREST project?

ReadonlyREST provides the functionality we need. We're very happy with the support given by the community, and with the response time to our feedback and input. ReadonlyREST has become a critical component of our centralised Elasticsearch installation, and thus we rely on continued good collaboration with the ReadonlyREST community.

— ReadonlyREST has become a critical component of our centralised Elasticsearch installation.

Related content

How CERN saves money with ReadonlyREST

This year, CERN (The European Organization for Nuclear Research) optimised the usage of computing resources by consolidating 30+ Elasticsearch clusters into a handful of multi-user clusters.

Watch the presentation CERN organised to understand the guiding principles behind ReadonlyREST.