Okay, so this is a bit away from the other answers, but... it seems to me that if you have the data in a file system (one stock per file, perhaps) with a fixed record size, you can get at the data really easily: given a query for a particular stock and time range, you can seek to the right place, fetch all the data you need (you'll know exactly how many bytes), transform the data into the format you need (which could be very quick depending on your storage format), and you're away.
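Just to make that concrete, here's a minimal sketch of the seek-and-read step. The 16-byte record size and the assumption that the caller has already turned the time range into record indices are mine, not anything from your question:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class TickFileReader {
    // Hypothetical layout: each record is 16 bytes (e.g. two packed longs).
    private static final int RECORD_SIZE = 16;

    /**
     * Reads the raw bytes for records [firstIndex, lastIndex] from one
     * stock's file. The caller computes the indices from the time range.
     */
    public static byte[] readRange(String path, long firstIndex, long lastIndex)
            throws IOException {
        int count = (int) (lastIndex - firstIndex + 1);
        byte[] buffer = new byte[count * RECORD_SIZE];
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            file.seek(firstIndex * RECORD_SIZE); // jump straight to the data
            file.readFully(buffer);              // read exactly the bytes we need
        }
        return buffer;
    }
}
```

Everything else (file naming, turning a timestamp into an index) lives outside this method, which is what keeps the read path so simple.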
I don't know anything about Amazon's storage offerings, but if you don't have anything like direct file access, you could basically use blobs - you'd need to balance large blobs (fewer records, but probably reading more data than you need each time) against small blobs (more records giving more overhead and probably more requests to get at them, but less useless data returned each time).
Then you add caching - I'd suggest giving different servers different stocks to handle, for example - and you can pretty much just serve from memory. If you can afford enough memory on enough servers, bypass the load-on-demand part and just load all the files at startup. That would simplify things, at the cost of slower startup (which obviously impacts failover, unless you can afford to always have two servers for any particular stock, which would be helpful).
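A rough sketch of the load-everything-at-startup idea, assuming one file per ticker in a single directory (the directory layout and the choice of a plain `HashMap` are my assumptions):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class TickCache {
    // Ticker -> raw file contents; each server would load only its own stocks.
    private final Map<String, byte[]> filesByTicker = new HashMap<>();

    /** Loads every per-stock file under dataDir into memory at startup. */
    public void loadAll(String dataDir) throws IOException {
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(dataDir))) {
            for (Path file : stream) {
                String ticker = file.getFileName().toString();
                filesByTicker.put(ticker, Files.readAllBytes(file));
            }
        }
    }

    public byte[] get(String ticker) {
        return filesByTicker.get(ticker);
    }
}
```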
Note that you don't need to store the stock symbol, date or minute for each record - they're implicit in the file you're loading and the position within the file. You should also consider what accuracy you need for each value, and how to store that efficiently - you've given 6 significant figures in your question, which you could store in 20 bits. You could potentially store three 20-bit integers in 64 bits of storage: read it as a long (or whatever your 64-bit integer type is) and use masking/shifting to get back the three integers. You'll need to know what scale to use, of course - which you could probably encode in the spare 4 bits, if you can't make it constant.
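A sketch of that masking/shifting, assuming a layout of three 20-bit fields plus a 4-bit scale in one long (the field order is my own choice):

```java
public class PackedRecord {
    private static final long MASK_20 = (1L << 20) - 1; // 0xFFFFF
    private static final long MASK_4  = (1L << 4) - 1;

    /** Packs three 20-bit values and a 4-bit scale into one long. */
    public static long pack(int a, int b, int c, int scale) {
        return ((long) a & MASK_20)
             | (((long) b & MASK_20) << 20)
             | (((long) c & MASK_20) << 40)
             | (((long) scale & MASK_4) << 60); // 20 + 20 + 20 + 4 = 64 bits
    }

    public static int fieldA(long packed) { return (int) (packed & MASK_20); }
    public static int fieldB(long packed) { return (int) ((packed >>> 20) & MASK_20); }
    public static int fieldC(long packed) { return (int) ((packed >>> 40) & MASK_20); }
    public static int scale(long packed)  { return (int) ((packed >>> 60) & MASK_4); }
}
```

A second long packed the same way would cover the other three columns, giving the 16-byte record discussed next.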
You haven't said what the other three integer columns are like, but if you could get away with 64 bits for those three as well, you could store a whole record in 16 bytes. That's only ~110 GB for the whole database, which isn't really very much...
EDIT: Another thing to consider is that presumably stock prices don't change over the weekend - or indeed overnight. If the stock market is only open 8 hours a day, 5 days a week, then you only need 40 values per week instead of 168 (24 × 7). At that point you could end up with only about 28 GB of data in your files... which sounds a lot smaller than you were probably originally thinking. Having that much data in memory is very reasonable.
EDIT: I think I missed out the explanation of why this approach is a good fit here: you have a very predictable aspect to a significant part of your data - the stock ticker, date and time. By expressing the ticker once (as the filename) and leaving the date/time entirely implicit in the position of the data, you remove a whole bunch of work. It's a bit like the difference between a String[] and a Map<Integer, String> - knowing that your array index always starts at 0 and goes up in increments of 1 up to the length of the array allows for quicker access and more efficient storage. The sketch below shows one way to compute that implicit index.
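Here's a sketch of turning a timestamp into the implicit record index, using the 8-hour, 5-day week from the earlier edit. The opening time and epoch date are placeholders, and the day-by-day loop is just for clarity (a closed-form weekday count would avoid it):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.temporal.ChronoUnit;

public class RecordIndex {
    // Placeholder trading calendar: 09:00-17:00, Monday-Friday.
    private static final LocalTime OPEN = LocalTime.of(9, 0);
    private static final int MINUTES_PER_DAY = 8 * 60;
    private static final LocalDate EPOCH = LocalDate.of(2000, 1, 3); // a Monday

    /** Record index = trading days since the epoch * 480 + minutes since open. */
    public static long indexOf(LocalDateTime timestamp) {
        long tradingDays = 0;
        for (LocalDate d = EPOCH; d.isBefore(timestamp.toLocalDate()); d = d.plusDays(1)) {
            if (d.getDayOfWeek() != DayOfWeek.SATURDAY
                    && d.getDayOfWeek() != DayOfWeek.SUNDAY) {
                tradingDays++;
            }
        }
        long minuteOfDay = ChronoUnit.MINUTES.between(OPEN, timestamp.toLocalTime());
        return tradingDays * MINUTES_PER_DAY + minuteOfDay;
    }
}
```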