On this website, http://nosql-database.org, you can find a list of many NoSQL databases sorted by type of data store; you should check the document stores there.
I will not name any particular database to avoid an opinion-based answer, but if you are interested in a database that scales like Cassandra, you probably want to check out those that use a master-master / multi-master / masterless (call it what you want, the idea is the same) architecture, where writes and reads can be served by any node in the cluster.
I know that Cassandra is optimized for writes rather than reads, but without further details in the question I cannot clarify the answer with more specific information.
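To make the masterless idea a bit more concrete: there is no dedicated primary, so a client can send a read or write to any node, usually picked by hashing the key onto the set of nodes. Here is a rough conceptual sketch of that routing idea (not any particular database's client API; node addresses and the key are made up):

// Conceptual sketch of masterless routing: every node can accept reads and writes.
// This is not a real client library, just an illustration of the idea.
import { createHash } from "node:crypto";

const nodes = ["http://node-a:5984", "http://node-b:5984", "http://node-c:5984"];

// Pick a coordinator node for a key; every node is equally able to take that role.
function coordinatorFor(key: string): string {
  const digest = createHash("md5").update(key).digest();
  return nodes[digest.readUInt32BE(0) % nodes.length];
}

console.log(coordinatorFor("user:42")); // a write for this key is sent here
console.log(coordinatorFor("user:42")); // a read is routed the same way; in a real
                                        // masterless store any replica could also serve it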
Update:
Disclaimer: I have not used CouchDB at all and have not tested its performance.
Since you mentioned CouchDB, I will add what I found in its official documentation, in the section on the distributed database and replication.
CouchDB is a peer-to-peer distributed database system. It allows users and servers to access and update the same shared data while disconnected. Those changes can then be replicated bi-directionally later.
The CouchDB document storage, view, and security models are designed to work together to make bi-directional replication efficient and reliable. Both documents and designs can be replicated, allowing full database applications (including application design, logic, and data) to be replicated to laptops for offline use, or replicated to servers in remote offices where slow or unreliable connections make sharing data difficult.
The replication process is incremental. At the database level, replication only examines documents updated since the last replication. Then, for each updated document, only the fields and blobs that have changed are replicated across the network. If replication fails at any step, due to network problems or a crash for example, the next replication restarts at the document where it left off.
Partial replicas can be created and maintained. Replication can be filtered by a JavaScript function, so that only particular documents or those meeting specific criteria are replicated. This can allow users to take subsets of a large shared database application offline for their own use, while maintaining normal interaction with the application and that subset of data.
That looks pretty scalable to me, since it seems you can add new nodes to the cluster and the data will be replicated to them.
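As a sketch of what that might look like, CouchDB exposes replication through its _replicate endpoint. The node names, database name, and credentials below are my own assumptions for illustration, not something from the question:

// Sketch: ask node-a to replicate the database "mydb" to a newly added node-b.
// Host names, ports and credentials are assumptions for illustration only.
const auth = "Basic " + Buffer.from("admin:password").toString("base64");

const response = await fetch("http://node-a:5984/_replicate", {
  method: "POST",
  headers: { "Content-Type": "application/json", Authorization: auth },
  body: JSON.stringify({
    source: "http://node-a:5984/mydb",
    target: "http://node-b:5984/mydb",
    create_target: true, // create the database on the new node if it does not exist
    continuous: true,    // keep replicating new changes as they arrive
  }),
});

console.log(await response.json()); // replication status returned by CouchDB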
Partial replicas also seem like an interesting option for really large data sets, though I would configure them very carefully to avoid situations where a query may not produce acceptable results, for example during a network partition when only a partial subset of the data is reachable.
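To illustrate the filtered (partial) replication mentioned in the documentation: the filter is a JavaScript function stored in a design document, and the replication request then references it by name. The database, field names, and credentials here are hypothetical, just to show the shape of it:

// Sketch of filtered replication (partial replicas). All names are illustrative.
const headers = {
  "Content-Type": "application/json",
  Authorization: "Basic " + Buffer.from("admin:password").toString("base64"),
};

// 1. Store a design document with a filter function (written in JavaScript,
//    shipped as a string) that only passes documents belonging to one region.
await fetch("http://node-a:5984/mydb/_design/repl", {
  method: "PUT",
  headers,
  body: JSON.stringify({
    filters: {
      by_region: `function (doc, req) {
        return doc.region === req.query.region;
      }`,
    },
  }),
});

// 2. Start a replication that only copies documents matching the filter.
await fetch("http://node-a:5984/_replicate", {
  method: "POST",
  headers,
  body: JSON.stringify({
    source: "http://node-a:5984/mydb",
    target: "http://laptop:5984/mydb_eu",
    create_target: true,
    filter: "repl/by_region",       // design doc name / filter name
    query_params: { region: "eu" }, // available to the filter as req.query
  }),
});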