Cassandra Reading Performance with Collection

I have the following column family defined in Cassandra:

CREATE TABLE metric (
  period int,
  rollup int,
  tenant text,
  path text,
  time bigint,
  data list<double>,
  PRIMARY KEY ((tenant, period, rollup, path), time)
) WITH
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.000000 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.100000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='NONE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};

Does the size of the data list affect read performance in Cassandra? If so, how can we measure it?

The problem is that querying Data-Set1 from Cassandra (8640 rows, where each row's data list holds 90 elements) for a given tenant / period / rollup / path combination takes longer than the equivalent query for Data-Set2, which also returns 8640 rows but whose data lists hold only 10 elements each.

Both result sets contain the same number of rows; the only difference is that the lists in Data-Set1 hold 90 elements while those in Data-Set2 hold 10, yet the Data-Set1 read from Cassandra is consistently slower.

Why is that, and how can this be measured in Cassandra?

Yes, a row whose data list holds 90 elements simply contains more data than a row whose list holds 10. Each collection element is stored as its own cell inside the row, so Cassandra has to read every one of those cells off disk (and deserialize them) to return the row. It is therefore expected that 8640 rows with 90-element lists take longer to read than 8640 rows with 10-element lists.
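As a rough sanity check, the raw payload difference alone is substantial. A minimal sketch (plain Python; it ignores Cassandra's per-cell and per-row storage overhead such as timestamps and clustering keys, which only widen the real gap) comparing the two result sets, assuming 8-byte doubles:

```python
# Rough estimate of the raw list payload each query must read.
# Ignores Cassandra's per-cell/per-row overhead, which makes the
# real difference even larger in practice.

ROWS = 8640          # rows returned by each query
DOUBLE_SIZE = 8      # bytes per double element

def payload_bytes(elements_per_list: int) -> int:
    """Raw bytes of list data across all returned rows."""
    return ROWS * elements_per_list * DOUBLE_SIZE

set1 = payload_bytes(90)   # Data-Set1: 90 elements per row
set2 = payload_bytes(10)   # Data-Set2: 10 elements per row

print(set1, set2, set1 / set2)  # → 6220800 691200 9.0
```

So before any storage overhead is counted, the Data-Set1 query has to move nine times as much list data as the Data-Set2 query.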

To measure where the time goes, start with query tracing. Enable it in cqlsh, run your query, and Cassandra will report each step of the read along with its timing:

aploetz@cqlsh:stackoverflow> tracing on;

The trace output breaks the request down into its individual stages (coordination, reading SSTables and memtables, merging results) with the time spent in each, so you can see exactly where the slower query loses its time.
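For example, tracing one of the queries from the question could look like the session below (the table and column names come from the schema above; the WHERE values are placeholders, not values from the question):

```
aploetz@cqlsh:stackoverflow> tracing on;
aploetz@cqlsh:stackoverflow> SELECT time, data FROM metric
                         ... WHERE tenant = 'tenant1'
                         ...   AND period = 60
                         ...   AND rollup = 300
                         ...   AND path = 'some.metric.path';
```

Running the same traced query against both data sets and comparing the stage timings will show whether the extra time is spent reading cells from disk or elsewhere.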

Another thing to check: how is your JVM tuned, and how much RAM does each node have? Reads that pull many cells generate garbage, so garbage collection (GC) pauses can inflate read latency, and GC behavior depends on your JVM heap settings. DataStax recommends sizing the Java heap according to the node's system memory:

System Memory       Heap Size

Less than 2GB       1/2 of system memory
2GB to 4GB          1GB
Greater than 4GB    1/4 system memory, but not more than 8GB
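The rule in that table is easy to encode. A small sketch (plain Python, sizes in gigabytes; the function name is hypothetical) for checking what heap a given node should get:

```python
def recommended_heap_gb(system_memory_gb: float) -> float:
    """Heap size per the DataStax guideline table above (GB in, GB out)."""
    if system_memory_gb < 2:
        return system_memory_gb / 2           # < 2GB: half of system memory
    if system_memory_gb <= 4:
        return 1.0                            # 2GB-4GB: fixed 1GB heap
    return min(system_memory_gb / 4, 8.0)     # > 4GB: one quarter, capped at 8GB

for mem in (1, 4, 16, 64):
    print(f"{mem:>3} GB RAM -> {recommended_heap_gb(mem)} GB heap")
```

For instance, a 64 GB node still gets an 8 GB heap because of the cap; the remaining memory is left to the OS page cache, which Cassandra reads benefit from.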