If you can set limits on string length, as Cornell says, that really matters. An AnsiString carries internal bookkeeping (length, reference count) on top of the heap allocation itself. A ShortString, on the other hand, always occupies its full declared size, even when most of it is unused.
If you are really tight on memory, doing your own string allocation can be worthwhile, especially if the data is relatively immutable. Just grab one large block and pack all the strings into it, each with a 16-bit length prefix or the like.
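A minimal sketch of that idea in C (a stand-in for the Delphi context; all names are hypothetical): every string lives in one large block, stored as a 16-bit length prefix followed by the bytes, so per-string heap headers and allocator slack disappear.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    uint8_t *block;   /* one large backing block */
    size_t   used;    /* bytes consumed so far   */
    size_t   cap;     /* total capacity          */
} StrPool;

static int pool_init(StrPool *p, size_t cap) {
    p->block = malloc(cap);
    p->used  = 0;
    p->cap   = cap;
    return p->block != NULL;
}

/* Store s in the pool; returns its offset, or (size_t)-1 if it
   does not fit. Offsets stay valid because the block never moves. */
static size_t pool_add(StrPool *p, const char *s) {
    size_t len = strlen(s);
    if (len > UINT16_MAX || p->used + 2 + len > p->cap) return (size_t)-1;
    size_t off = p->used;
    uint16_t l16 = (uint16_t)len;
    memcpy(p->block + off, &l16, 2);        /* 16-bit length prefix */
    memcpy(p->block + off + 2, s, len);     /* payload, no NUL needed */
    p->used += 2 + len;
    return off;
}

static uint16_t pool_len(const StrPool *p, size_t off) {
    uint16_t l;
    memcpy(&l, p->block + off, 2);
    return l;
}

static const char *pool_bytes(const StrPool *p, size_t off) {
    return (const char *)p->block + off + 2;
}
```

Because callers hold offsets rather than pointers, the whole data set can also be written out or freed in one operation, which suits relatively immutable data.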
Less low-level tricks, such as simply deduplicating (some of) the strings, can also save a lot of memory.
Please note that the record-vs-class discussion in Rob's answer only applies if you manage to instantiate the class objects inside a block of memory you allocate very cheaply yourself, which you probably don't, whereas with records you can simply use an array of records. Otherwise, the fact that a class is always a reference type causes heap overhead and slack per instance (FastMM works with 16-byte granularity).
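The arithmetic behind that point can be sketched in C (figures are illustrative assumptions, not measured Delphi numbers): an array of records stores N payloads contiguously, while one heap object per element adds a hidden header, rounding to the allocator's granularity, and the referencing pointer on top.

```c
#include <stddef.h>

typedef struct { int id; short flags; } Rec;  /* small payload record */

/* Round a block size up to the allocator's granularity. */
static size_t round_up(size_t n, size_t gran) {
    return (n + gran - 1) / gran * gran;
}

/* Array of records: contiguous, no per-item slack. */
static size_t array_of_records_bytes(size_t n) {
    return n * sizeof(Rec);
}

/* One heap object per element: payload + hidden header, rounded up
   to the granularity, plus the pointer that references it. */
static size_t heap_objects_bytes(size_t n, size_t header, size_t gran) {
    return n * (round_up(sizeof(Rec) + header, gran) + sizeof(void *));
}
```

With an 8-byte payload, an assumed 8-byte object header and 16-byte granularity, each heap instance costs roughly three times what the same record costs inside an array; over millions of elements the difference is substantial.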
I would recommend against TStringList / TList / TObjectList, because working with very large lists (millions of elements) can be painful: delete/insert is O(n), and inserting in the middle means shifting half the data. It gets painful somewhere between 20-100k and 1M elements, depending on your access pattern.
Using a TList of TLists, and not letting any single inner TList get too big, is already a good workaround.
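The reason the TList-of-TLists trick works can be shown with a chunked list, sketched here in C (hypothetical names; splitting full chunks is elided for brevity): each chunk holds at most CHUNK_CAP items, so a middle delete shifts at most one chunk's tail instead of half the whole list.

```c
#include <stddef.h>
#include <string.h>

#define CHUNK_CAP 4          /* tiny for illustration; use e.g. 4096 */

typedef struct Chunk {
    int count;
    int items[CHUNK_CAP];
    struct Chunk *next;
} Chunk;

/* Delete the element at global index idx; only one chunk's tail moves. */
static void chunked_delete(Chunk *head, size_t idx) {
    Chunk *c = head;
    while (c && idx >= (size_t)c->count) { idx -= c->count; c = c->next; }
    if (!c) return;
    memmove(&c->items[idx], &c->items[idx + 1],
            (c->count - idx - 1) * sizeof(int));
    c->count--;
}

static int chunked_get(Chunk *head, size_t idx) {
    Chunk *c = head;
    while (idx >= (size_t)c->count) { idx -= c->count; c = c->next; }
    return c->items[idx];
}
```

Lookup by index becomes a short walk over the chunk headers, which is still cheap compared to shifting megabytes on every delete.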
When I did this (for an OLAP cluster, back when 2 GB of server memory still cost $2000), at some point I even used the alignment bits of pointers to store the size class of the allocations. I would not recommend this :-)
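For the curious, the trick looks roughly like this in C (do as I say, not as I do): allocations aligned to 8 bytes leave the low 3 bits of a pointer zero, so a small tag such as a size class can be smuggled into them, as long as it is stripped before every dereference.

```c
#include <stdint.h>

/* Assumes p is at least 8-byte aligned and tag < 8. */
static void *tag_ptr(void *p, unsigned tag) {
    return (void *)((uintptr_t)p | (tag & 7u));
}

/* Clear the low 3 bits to recover the real address. */
static void *untag_ptr(void *p) {
    return (void *)((uintptr_t)p & ~(uintptr_t)7);
}

static unsigned ptr_tag(void *p) {
    return (unsigned)((uintptr_t)p & 7u);
}
```

The fragility is obvious: every load and store must go through untag_ptr, and any allocator or platform with weaker alignment silently corrupts addresses, which is why it is not recommended.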
Of course, a 64-bit build with FPC is also an option. I took the main part of a server from a 32-bit solution to a working 64-bit version in less than an hour.