I'm in a good mood today, so rather than refuting all the other posters who say it's not even remotely possible to create a lock with just a Cassandra cluster, I'll simply say this: I just implemented Lamport's bakery algorithm and it works just fine. No other strange things are needed, such as ZooKeeper, Cages, memtables, etc.
You can implement that poor man's multi-process / multi-computer locking mechanism as long as you can perform a read and a write with at least QUORUM consistency. That is really all you need to implement this algorithm correctly. (The QUORUM level may vary depending on the scope of the lock you need: local, rack, or the full network.)
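To make the "all you really need" claim concrete, here is a minimal sketch of the only storage primitive the algorithm relies on: writing and reading back a single cell at a chosen consistency level. The CellStore interface and its function names are placeholders I made up for illustration; they are not the libQtCassandra API or any real driver API.

#include <cstdint>
#include <string>

// a few of the consistency levels Cassandra offers; the lock only needs one
// of the QUORUM variants, picked to match the scope of the lock (local data
// center, rack, whole cluster)
enum class Consistency { ONE, LOCAL_QUORUM, EACH_QUORUM, QUORUM, ALL };

// hypothetical storage interface: anything that can do an acknowledged write
// and a consistent read of a single cell is enough to run the bakery algorithm
class CellStore
{
public:
    virtual ~CellStore() = default;

    // write one cell and do not return before the requested number of
    // replicas acknowledged the write
    virtual void writeCell(std::string const & row,
                           std::string const & column,
                           int64_t value,
                           Consistency level) = 0;

    // read one cell back at the same (or a stronger) consistency level;
    // returns false when the cell does not exist
    virtual bool readCell(std::string const & row,
                          std::string const & column,
                          int64_t & value,
                          Consistency level) = 0;
};

The reason QUORUM is enough: a QUORUM write and a later QUORUM read overlap in at least one replica, so a process is guaranteed to see the tickets written by the other participants.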
My implementation will appear in version 0.4.7 of libQtCassandra (in C++). I already tested it and it locks perfectly. There are a few more things I want to check, and I want to let you define a set of parameters that are currently hard-coded, but the mechanism works fine.
When I found this thread, I thought something was wrong. I searched a bit more and found a page on the Apache wiki, which I link below. The page is not very advanced, but their MoinMoin does not offer a discussion page... Anyway, I think it was worth mentioning. Hopefully people will start implementing this locking mechanism in all sorts of languages such as PHP, Ruby, Java, etc., so that it gets used and it becomes known that it works.
Source: http://wiki.apache.org/cassandra/Locking
And http://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm
The following is more or less the way I implemented my version. This is just a simplified synopsis; I may have to update it a bit more because I made some enhancements while testing the resulting code (the real code also uses RAII and includes a timeout capability on top of the TTL). The final version can be found in the libQtCassandra library.
// lock "object_name" void lock(QString object_name) { QString locks = context->lockTableName(); QString hosts_key = context->lockHostsKey(); QString host_name = context->lockHostName(); int host = table[locks][hosts_key][host_name]; pid_t pid = getpid(); // get the next available ticket table[locks]["entering::" + object_name][host + "/" + pid] = true; int my_ticket(0); QCassandraCells tickets(table[locks]["tickets::" + object_name]); foreach(tickets as t) { // we assume that t.name is the column name // and t.value is its value if(t.value > my_ticket) { my_ticket = t.value; } } ++my_ticket; // add 1, since we want the next ticket table[locks]["tickets::" + object_name][my_ticket + "/" + host + "/" + pid] = 1; // not entering anymore, by deleting the cell we also release the row // once all the processes are done with that object_name table[locks]["entering::" + object_name].dropCell(host + "/" + pid); // here we wait on all the other processes still entering at this // point; if entering more or less at the same time we cannot // guarantee that their ticket number will be larger, it may instead // be equal; however, anyone entering later will always have a larger // ticket number so we won't have to wait for them they will have to wait // on us instead; note that we load the list of "entering" once; // then we just check whether the column still exists; it is enough QCassandraCells entering(table[locks]["entering::" + object_name]); foreach(entering as e) { while(table[locks]["entering::" + object_name].exists(e)) { sleep(); } } // now check whether any other process was there before us, if // so sleep a bit and try again; in our case we only need to check // for the processes registered for that one lock and not all the // processes (which could be 1 million on a large system!); // like with the entering vector we really only need to read the // list of tickets once and then check when they get deleted // (unfortunately we can only do a poll on this one too...); // we exit the foreach() loop once our ticket is proved to be the // smallest or no more tickets needs to be checked; when ticket // numbers are equal, then we use our host numbers, the smaller // is picked; when host numbers are equal (two processes on the // same host fighting for the lock), then we use the processes // pid since these are unique on a system, again the smallest wins. tickets = table[locks]["tickets::" + object_name]; foreach(tickets as t) { // do we have a smaller ticket? // note: the t.host and t.pid come from the column key if(t.value > my_ticket || (t.value == my_ticket && t.host > host) || (t.value == my_ticket && t.host == host && t.pid >= pid)) { // do not wait on larger tickets, just ignore them continue; } // not smaller, wait for the ticket to go away while(table[locks]["tickets::" + object_name].exists(t.name)) { sleep(); } // that ticket was released, we may have priority now // check the next ticket } } // unlock "object_name" void unlock(QString object_name) { // release our ticket QString locks = context->lockTableName(); QString hosts_key = context->lockHostsKey(); QString host_name = context->lockHostName(); int host = table[locks][hosts_key][host_name]; pid_t pid = getpid(); table[locks]["tickets::" + object_name].dropCell(host + "/" + pid); } // sample process using the lock/unlock void SomeProcess(QString object_name) { while(true) { [...] // non-critical section... lock(object_name); // The critical section code goes here... unlock(object_name); // non-critical section... [...] } }
IMPORTANT NOTE (2019/05/05): Although implementing the Lamport bakery algorithm on top of Cassandra was a great exercise, it is an anti-pattern for the Cassandra database, which means it may not work well under heavy load. Since then, I have created a new locking system, still based on the Lamport algorithm, but keeping all the data in memory (it is very small) while still allowing several computers to participate in the locking, so that if one of them crashes, the locking system keeps working as expected (many other locking systems do not offer that capability: when the master goes down, you lose the ability to lock until another computer decides to become the new master itself...)
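For reference, this is not that new multi-computer service; it is only the textbook shared-memory form of the bakery algorithm (see the Wikipedia link above), shown here as a sketch to illustrate that the algorithm itself needs nothing more than plain reads and writes of a small amount of shared state, whether that state lives in a database, in memory, or is replicated across machines.

#include <atomic>

constexpr int N = 4;              // number of participating threads (assumed fixed)

std::atomic<bool> entering[N];    // "I am currently picking a ticket"
std::atomic<int>  number[N];      // my ticket number; 0 means "not interested"
                                  // (globals, so both arrays start zeroed)

void bakery_lock(int i)
{
    entering[i] = true;
    int max = 0;
    for(int j = 0; j < N; ++j)    // pick a ticket larger than all visible ones
    {
        int n = number[j];
        if(n > max)
        {
            max = n;
        }
    }
    number[i] = max + 1;
    entering[i] = false;

    for(int j = 0; j < N; ++j)
    {
        if(j == i)
        {
            continue;
        }
        while(entering[j])
        {
            // busy-wait: j is still picking its ticket number
        }
        while(number[j] != 0
           && (number[j] < number[i]
            || (number[j] == number[i] && j < i)))
        {
            // busy-wait: j goes first (smaller ticket, or equal ticket
            // and smaller index)
        }
    }
}

void bakery_unlock(int i)
{
    number[i] = 0;                // release our ticket
}

In the Cassandra version above, entering[] and number[] correspond to the "entering::" and "tickets::" rows, and the busy-waits become the exists() polls.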