Are you creating a database engine?
Edit: I built a disk-based database system back in the mid-90s.
Fixed-size records are easiest to work with, since the file offset of any record can be calculated as a simple multiple of the record size. I also had some variable-sized records.
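A minimal sketch of that offset arithmetic (the 32-byte record size and the field names are illustrative assumptions, not from the original system):

```python
import io

RECORD_SIZE = 32  # hypothetical fixed record size in bytes


def record_offset(record_number):
    # With fixed-size records, the offset is just a multiple of the size.
    return record_number * RECORD_SIZE


def read_record(f, record_number):
    # One seek, one read -- no scanning needed.
    f.seek(record_offset(record_number))
    return f.read(RECORD_SIZE)


# Write three fixed-size records, each padded out to RECORD_SIZE bytes.
f = io.BytesIO()
for name in (b"alpha", b"beta", b"gamma"):
    f.write(name.ljust(RECORD_SIZE, b"\x00"))

print(read_record(f, 1).rstrip(b"\x00"))  # b'beta'
```

The same seek works against a real file opened in binary mode; `BytesIO` just keeps the example self-contained.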
My system needed to be optimized for reading. The data was shipped on a CD-ROM, so it was read-only. I built a binary search tree index for each column I wanted to search on: I took an open-source binary search tree implementation and converted it to use random access into a disk file. Sorted reads from each index file were easy, and reading each data record from the main data file in the indexed order was also easy. I never needed to sort in memory, and the system was faster than any RDBMS available on the client machines at the time.
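The idea of converting an in-memory tree to random file access can be sketched like this: each node lives at a file offset, and child pointers become child offsets. The node layout, -1 sentinel, and integer keys below are all assumptions for illustration, not the original code:

```python
import io
import struct

# Hypothetical on-disk node layout: key, left-child offset, right-child
# offset, record number in the main data file. Offset -1 means "no child".
NODE_FMT = "qqqq"  # four signed 64-bit integers
NODE_SIZE = struct.calcsize(NODE_FMT)


def write_node(f, offset, key, left, right, recno):
    f.seek(offset)
    f.write(struct.pack(NODE_FMT, key, left, right, recno))


def search(f, root_offset, key):
    # Walk the tree on disk: one seek and one small read per node,
    # no in-memory tree at all.
    offset = root_offset
    while offset != -1:
        node_key, left, right, recno = struct.unpack(
            NODE_FMT, (f.seek(offset), f.read(NODE_SIZE))[1]
        )
        if key == node_key:
            return recno
        offset = left if key < node_key else right
    return None


# Tiny hand-built tree:   20 (root, node 0)
#                        /  \
#                      10    30
idx = io.BytesIO()
write_node(idx, 0 * NODE_SIZE, 20, 1 * NODE_SIZE, 2 * NODE_SIZE, 0)
write_node(idx, 1 * NODE_SIZE, 10, -1, -1, 1)
write_node(idx, 2 * NODE_SIZE, 30, -1, -1, 2)
print(search(idx, 0, 30))  # 2
```

The returned record number (or, for variable-length data, a byte offset) is then used to seek into the main data file. An in-order traversal of such an index file yields the records in sorted order without any in-memory sort.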
For fixed-size records, an index only needs to store the record number. For variable-length records, the index instead stores the byte offset in the file where the record begins, and each record must start with a header that gives its length.
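A sketch of the variable-length case, assuming a 4-byte length prefix as the header (the header format is an assumption; any self-describing length field works the same way):

```python
import io
import struct

LEN_FMT = "I"  # hypothetical header: 4-byte unsigned length prefix
LEN_SIZE = struct.calcsize(LEN_FMT)


def append_record(f, payload):
    # Append a length-prefixed record; return its start offset,
    # which is what the index stores.
    f.seek(0, io.SEEK_END)
    offset = f.tell()
    f.write(struct.pack(LEN_FMT, len(payload)))
    f.write(payload)
    return offset


def read_record(f, offset):
    # Read the header first to learn how many payload bytes follow.
    f.seek(offset)
    (length,) = struct.unpack(LEN_FMT, f.read(LEN_SIZE))
    return f.read(length)


data = io.BytesIO()
index = [append_record(data, p) for p in (b"short", b"a longer record")]
print(read_record(data, index[1]))  # b'a longer record'
```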