Your table might look something like this:
    CREATE TABLE ArticleText (
        artId   INTEGER,
        wordNum INTEGER,
        wordId  INTEGER,
        PRIMARY KEY (artId, wordNum),
        FOREIGN KEY (artId)  REFERENCES Articles (artId),
        FOREIGN KEY (wordId) REFERENCES Words (wordId)
    )
This can, of course, be expensive, slow, etc., but you will need some measurements to determine that (it depends on your DB engine). By the way, I hope it is clear that Articles is just a table of per-article metadata keyed by artId, and Words is the table of all words appearing in any article, keyed by wordId (trying to save some space there by identifying already-known words when an article is entered, if possible...). One special word should be an "end of paragraph" marker, easily identifiable as such and distinct from every real word.
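For concreteness, here is a minimal sketch of the two companion tables assumed above; the title column and the '<EOP>' marker text are illustrative choices of mine, not anything the schema requires:

    CREATE TABLE Articles (
        artId INTEGER PRIMARY KEY,
        title TEXT
        -- ... whatever other per-article metadata you keep ...
    );

    CREATE TABLE Words (
        wordId   INTEGER PRIMARY KEY,
        wordText TEXT UNIQUE    -- UNIQUE lets you look up and reuse known words on entry
    );

    -- Reserve one well-known row as the end-of-paragraph marker; any text
    -- that can never occur as a real word will do.
    INSERT INTO Words (wordId, wordText) VALUES (0, '<EOP>');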
If you structure your data like this, you get more flexibility in paging, and the page length can be changed whenever you like, even per query. To fetch a page:
    SELECT wordText
      FROM Articles
      JOIN ArticleText USING (artId)
      JOIN Words USING (wordId)
     WHERE wordNum BETWEEN (@pagenum - 1) * @pagelength
                       AND @pagenum * @pagelength + @extras
       AND Articles.artId = @articleid
     ORDER BY wordNum    -- keep the words in reading order
@pagenum, @pagelength, @extras, and @articleid should be bound as parameters of the prepared query at execution time (use whatever placeholder syntax your database and language support, e.g. :extras, numbered parameters, or the like).
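As one concrete example of such alternative syntax, here is the same query with :name-style placeholders, which several drivers accept; the placeholder names themselves are arbitrary:

    SELECT wordText
      FROM Articles
      JOIN ArticleText USING (artId)
      JOIN Words USING (wordId)
     WHERE wordNum BETWEEN (:pagenum - 1) * :pagelength
                       AND :pagenum * :pagelength + :extras
       AND Articles.artId = :articleid
     ORDER BY wordNum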
So we fetch @extras words beyond the expected end of the page, and on the client side we check those extra words to make sure at least one of them is the end-of-paragraph marker; if none is, we issue another query (with different BETWEEN values) to fetch more.
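A sketch of such a follow-up query, assuming the first fetch found no end-of-paragraph marker among the extras and we simply pull the next @extras words (this particular advancing arithmetic is illustrative, not the only option):

    SELECT wordText
      FROM ArticleText
      JOIN Words USING (wordId)
     WHERE artId = @articleid
       AND wordNum BETWEEN @pagenum * @pagelength + @extras + 1
                       AND @pagenum * @pagelength + 2 * @extras
     ORDER BY wordNum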
Far from ideal, but, given all the problems you have identified, worth considering. If you can count on the page length always being, say, a multiple of 100, you could accept a slight variant of this scheme based on 100-word chunks (with no Words table, just the text stored directly in the row).
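A minimal sketch of that chunked variant, with table and column names of my own choosing, assuming @pagelength is an exact multiple of 100:

    CREATE TABLE ArticleChunks (
        artId     INTEGER,
        chunkNum  INTEGER,    -- 0-based index of this 100-word chunk
        chunkText TEXT,       -- the 100 words, stored as plain text
        PRIMARY KEY (artId, chunkNum),
        FOREIGN KEY (artId) REFERENCES Articles (artId)
    );

    -- A page of @pagelength words is then @pagelength / 100 consecutive chunks:
    SELECT chunkText
      FROM ArticleChunks
     WHERE artId = @articleid
       AND chunkNum BETWEEN (@pagenum - 1) * (@pagelength / 100)
                        AND @pagenum * (@pagelength / 100) - 1
     ORDER BY chunkNum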