Cluster stuck when using a large heap

I have an Elasticsearch 2.2.0 cluster with 1 node, a 4 GB heap, 7 GB of RAM, 2 processor cores, 401 indices, 1,873 shards, 107,780,287 documents, and a total data size of 70.19 GB.

I have also set index.fielddata.cache.size: 40%.
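For what it is worth, I believe the node-level form of this setting in elasticsearch.yml for 2.x is spelled with the plural prefix, i.e.:

    indices.fielddata.cache.size: 40%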

The problem: I use Kibana to run some very simple queries. A single query works fine, but if I keep querying a bit more, Elasticsearch becomes very slow and eventually gets stuck, because JVM heap usage (as reported by Marvel) reaches 87-95%. The same thing happens when I try to load a Kibana dashboard, and the only way out of this situation is to restart the Elasticsearch service or clear the entire cache.
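By "clear the entire cache" I mean the clear cache API, more or less like this (assuming the default HTTP port on localhost):

    curl -XPOST 'http://localhost:9200/_cache/clear'

which drops the caches (fielddata, query cache, etc.) for all indices.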

Why does the heap fill up and get the node stuck?

EDIT:

_nodes/stats output when the heap is stuck

_nodes/stats output when the cluster is in a normal state
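Both outputs were collected with the node stats API, roughly like this (assuming the default HTTP port):

    curl -XGET 'http://localhost:9200/_nodes/stats?pretty'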

EDIT 2:

To understand the problem better, I analyzed a heap dump. The analysis was performed after the cluster had gotten stuck on some Kibana queries:

[heap dump analysis screenshot]

Problem Suspect 1: [screenshot]

Problem Suspect 2: [screenshot]

Problem Suspect 3: [screenshot]
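For reference, a dump like this can be taken with jmap and then opened in a heap analyzer such as Eclipse MAT, whose leak-suspects report produces the "Problem Suspect" sections above; a rough sketch (the PID and file name are placeholders):

    # dump the heap of the running Elasticsearch JVM
    jmap -dump:format=b,file=es-heap.hprof <ES_PID>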

All of the problem suspects point to _ttl (the _ttl field shows up in several of them), even though I do not use "ttl" anywhere.

Any idea what is causing this?

1 Answer

A few things stand out in the stats of this node.

First, the segments on this node take up a lot of heap on their own (767 MB), most of it for the terms index:

    "segments": {
      "count": 14228,
      "memory_in_bytes": 804235553,
      "terms_memory_in_bytes": 747176621,
      "stored_fields_memory_in_bytes": 31606496,
      "term_vectors_memory_in_bytes": 0,
      "norms_memory_in_bytes": 694880,
      "doc_values_memory_in_bytes": 24757556,
      "index_writer_memory_in_bytes": 0,
      "index_writer_max_memory_in_bytes": 1381097464,
      "version_map_memory_in_bytes": 39362,
      "fixed_bit_set_memory_in_bytes": 0
    }
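If you want to see which indices are responsible for that memory, the cat segments API breaks it down per segment; something like this (the column list is just a suggestion):

    curl 'http://localhost:9200/_cat/segments?v&h=index,shard,segment,size,size.memory'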

Since this is ES 2.x, doc_values are enabled by default, so fielddata usage stays small (11.8 MB):

    "fielddata": {
      "memory_size_in_bytes": 12301920,
      "evictions": 0
    }
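If fielddata ever does grow, the cat fielddata API shows which fields are using it, e.g.:

    curl 'http://localhost:9200/_cat/fielddata?v'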

The query cache (which caches frequently used filters) is negligible as well:

    "query_cache": {
      "memory_size_in_bytes": 302888,

Taken individually none of these is dramatic, but add them up (segments, fielddata, query cache) together with the heap needed for indexing buffers and for the queries themselves, and the node ends up around 2.88 GB of heap used (72%), which is right below the 75% occupancy at which the JVM starts collecting aggressively. That is why the node gets stuck as soon as you put a bit more query load on it.
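That 75% figure is the CMS occupancy threshold Elasticsearch ships with by default. For 2.x the stock JVM options (set in bin/elasticsearch.in.sh, if I remember the file name correctly) include roughly:

    JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"
    JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
    JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

so once heap occupancy crosses roughly 75%, the collector runs more and more often and the node spends its time in GC instead of answering queries.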

One more thing worth noting is the number of open file descriptors:

    "open_file_descriptors": 29461,
    "max_file_descriptors": 65535,

Almost 30,000 open files on a single node is another sign of how many shards and segments this node is carrying for its size.
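If you want to put numbers on that, counting the shards and segments is a one-liner each (assuming curl against the default port on localhost):

    # total number of shards and of Lucene segments on the cluster
    curl -s 'http://localhost:9200/_cat/shards' | wc -l
    curl -s 'http://localhost:9200/_cat/segments' | wc -l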
