You may be able to get what you need using a secondary index. Take the classic RDB example of customers and orders: you have one table for customers and one for orders, where the Orders table has a composite key of Customer (HASH) and Order (RANGE). With that key, if you wanted the last 10 orders across all customers, there would be no way to get them without a scan.
But if you create a global secondary index with some constant value as the HASH key and Date as the RANGE key, and query against that index, DynamoDB will do what you want, and you pay only for the RCUs of the returned records. No expensive scan required. Note that writes become more expensive (each write also updates the index), but most workloads read far more than they write.
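As a rough sketch, a query against such an index might look like the following. The table name, index name, and attribute names (`Orders`, `GSI_ByDate`, `GSIPK`, `OrderDate`) are hypothetical; the helper only builds the Query parameters, so it can be inspected without a live AWS connection.

```python
def last_orders_query(limit=10):
    """Build DynamoDB Query parameters for the most recent orders via a GSI
    whose HASH key is a constant and whose RANGE key is the order date."""
    return {
        "TableName": "Orders",        # hypothetical table name
        "IndexName": "GSI_ByDate",    # hypothetical GSI name
        "KeyConditionExpression": "GSIPK = :const",
        "ExpressionAttributeValues": {":const": {"S": "ORDERS"}},
        "ScanIndexForward": False,    # sort descending: newest first
        "Limit": limit,               # stop after this many items
    }

# With boto3 this would be executed roughly as:
#   import boto3
#   client = boto3.client("dynamodb")
#   resp = client.query(**last_orders_query())
```

Because `ScanIndexForward` is `False` and `Limit` is 10, DynamoDB reads only the 10 newest index entries, which is why you pay only for the returned records.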
Now you have the original problem again if you want, say, the last 10 orders over $1,000. The query will return the last 10 orders and only then filter out the ones under $1,000, because filter expressions are applied after the limit, so you can end up with fewer than 10 results.
In that case, you can create a computed Date-OrderAmount sort key, and queries against that index will return what you want.
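One common way to build such a computed key (this encoding is my assumption, not something prescribed by DynamoDB) is to concatenate the components into a single string, zero-padding the numeric part so that lexicographic order matches numeric order. Which component comes first depends on the access pattern you need to serve.

```python
def composite_sort_key(order_date, amount_cents):
    """Build a computed Date-OrderAmount sort key.

    The amount is stored in cents and zero-padded to 12 digits so that
    string comparison of the key sorts amounts numerically within a date.
    """
    return f"{order_date}#{amount_cents:012d}"
```

You would write this attribute on every insert (e.g. from your application code or a Streams consumer) and use it as the RANGE key of the index, so a key condition expression can express both parts of the predicate.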
It is not as simple as SQL, but you have to think about access patterns in SQL too. If you have a lot of data, you need to create indexes in SQL as well, or the database will happily scan tables on your behalf, which degrades performance and increases your costs.
Note that everything I am proposing is normalized in the sense that there is only one source of truth. You are not duplicating the data; you are just providing different views of it to get what you need out of DynamoDB.
Keep in mind that a constant HASH key is subject to the 10 GB per-partition limit, so you will need to design around it if you have a lot of active data. For example, depending on your expected access patterns, you might use Customer as the HASH key rather than a constant. Or use DynamoDB Streams to organize the data (or subsets of it) in other ways.
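One standard way to design around that limit is write sharding: instead of a single constant, spread items across a handful of partition key values. A minimal sketch, assuming a shard count of 8 and a `ORDERS#<n>` key format (both my own illustrative choices):

```python
import hashlib

NUM_SHARDS = 8  # hypothetical shard count; pick based on expected volume

def shard_key(order_id, num_shards=NUM_SHARDS):
    """Derive a sharded partition key from the order id.

    Hashing the id spreads writes evenly across num_shards partition
    key values, avoiding a single 10 GB / hot partition.
    """
    h = int(hashlib.md5(order_id.encode("utf-8")).hexdigest(), 16)
    return f"ORDERS#{h % num_shards}"
```

The trade-off is on the read side: a "last 10 orders" query now has to query all `num_shards` key values and merge the results by date in the application, so keep the shard count as small as your write volume allows.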