Story:
I am new to Cassandra and still trying to wrap my head around its inner workings.
I am thinking of using Cassandra in an application that will have only a small number of nodes (fewer than 10, most often 3). Ideally, each node in my cluster would hold a full copy of all the application data, so I am considering setting the replication factor equal to the cluster size. When additional nodes are added, I would alter the keyspace to increase the replication factor, then run nodetool repair to make sure each node receives the data it needs.
I would use NetworkTopologyStrategy for replication, to take advantage of its data-center awareness.
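To make the scale-out step concrete, here is a minimal sketch of the keyspace change described above, assuming a keyspace named app and a single data center named dc1 (both names are hypothetical):

```sql
-- Hypothetical names: keyspace "app", data center "dc1".
-- Going from 3 nodes to 4, raise the replication factor to match:
ALTER KEYSPACE app
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 4};
```

After the ALTER, running nodetool repair app on each node would stream the existing data to the new replica, since changing the replication settings does not move data by itself.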
In this situation, how does partitioning work? I have read that nodes and token ranges together form a ring in Cassandra. If all of my nodes are "responsible" for every piece of data, regardless of the hash value computed by the partitioner, do I effectively have a ring with just one token range?
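As a rough illustration of how replica placement interacts with the ring, here is a toy model (not Cassandra's actual implementation; it ignores vnodes, data centers, and Murmur3 details) in which a key's replicas are the next RF nodes clockwise from the key's token. With RF equal to the cluster size, every key maps to every node, so the ring structure still exists but stops mattering for placement:

```python
from bisect import bisect_right

def replicas_for(key_token, ring, rf):
    """ring: sorted list of (token, node) pairs.
    Returns the rf nodes that own key_token, walking clockwise
    from the first node whose token is >= key_token."""
    tokens = [t for t, _ in ring]
    start = bisect_right(tokens, key_token) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(rf)]

# A 3-node cluster, one token per node (no vnodes in this sketch).
ring = [(0, "A"), (100, "B"), (200, "C")]

# With RF == cluster size, every key lands on all three nodes:
print(replicas_for(42, ring, rf=3))   # all of A, B, C
print(replicas_for(250, ring, rf=3))  # all of A, B, C

# With RF < cluster size, the token ranges matter again:
print(replicas_for(42, ring, rf=2))   # only two of the nodes
```

So the ring still has one token range per node; it is only the replica sets that become identical.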
Are there major downsides to this kind of Cassandra deployment? I expect there would be a lot of asynchronous replication in the background as data propagates to every node, but that is one of the design goals, so I am fine with it.
The consistency level for reads would probably be "ONE" or "LOCAL_ONE".
The consistency level for writes would usually be "TWO".
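One consequence of those choices is worth noting: Cassandra only guarantees that a read sees the most recent write when the read and write replica counts overlap, i.e. when R + W > RF. A quick check of the combination above (plain arithmetic, not a Cassandra API):

```python
def is_strongly_consistent(rf: int, r: int, w: int) -> bool:
    """Reads are guaranteed to overlap the latest write iff R + W > RF."""
    return r + w > rf

# With RF = 3, writes at TWO and reads at ONE: 1 + 2 = 3, which is
# not > 3, so a read may hit the one replica the write has not
# reached yet and return stale data.
print(is_strongly_consistent(rf=3, r=1, w=2))  # False
print(is_strongly_consistent(rf=3, r=2, w=2))  # True (QUORUM-style overlap)
```

That may be perfectly acceptable for this design, but it means reads at ONE can be briefly stale until the background replication catches up.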
Actual questions to answer:
- Is replication factor == cluster size a common (or even a reasonable) deployment strategy, aside from the obvious case of a cluster of one?
- Do I actually have a ring of one token range, where every value the partitioner can produce maps to the same replica set?
- Is each node considered "responsible" for every row of data?
- If I use a write consistency of "one", does Cassandra always write the data to the node contacted by the client?
- Are there other downsides to this strategy that I don't know about?