Short version of my question:
If I hold a cursor reference to an astronomically huge result set in my client code, would it be ridiculous (that is, would it completely defeat the point of the cursor) to issue "FETCH ALL FROM cursorname" as my next command? Or would it stream the data back to me lazily as I consume it (at least in principle, assuming I have a well-written driver sitting between me and Postgres)?
More details
If I understand things correctly, Postgres cursors REALLY exist to solve the following problem [even if they can be used (abused?) for other things, such as returning multiple different result sets from one function]:
Note: The current implementation of RETURN NEXT and RETURN QUERY stores the entire result set before returning from the function, as discussed above. That means that if a PL/pgSQL function produces a very large result set, performance might be poor: data will be written to disk to avoid memory exhaustion, but the function itself will not return until the entire result set has been generated.
(ref: https://www.postgresql.org/docs/9.6/static/plpgsql-control-structures.html )
But (again, if I understand correctly) when you write a function that returns a cursor, the entire query result is NOT buffered into memory (and disk) before the caller of the function can start consuming anything; instead, the results can be consumed bit by bit. (There is more overhead and ceremony with a cursor, but it's worth it to avoid massive buffer allocation for very large result sets.)
(ref: https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551 )
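To make that concrete, here is a minimal sketch of the kind of cursor-returning function I have in mind, created from C#/Npgsql (the function name "get_big_cursor" and the connection string are my own placeholders):

    using Npgsql;

    // Placeholder connection string; "get_big_cursor" is a name I made up.
    using var conn = new NpgsqlConnection("Host=localhost;Database=mydb");
    conn.Open();

    const string createFunc = @"
        CREATE OR REPLACE FUNCTION get_big_cursor() RETURNS refcursor AS $$
        DECLARE
            cur refcursor;
        BEGIN
            -- OPEN does not run the query to completion; rows are produced
            -- later, as they are fetched from the cursor.
            OPEN cur FOR SELECT * FROM AstronomicallyLargeTable;
            RETURN cur;   -- hands back the cursor's name, not the rows
        END;
        $$ LANGUAGE plpgsql;";

    using var createCmd = new NpgsqlCommand(createFunc, conn);
    createCmd.ExecuteNonQuery();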
I would like to understand how this relates to SELECTs and FETCHes sent over the wire to a Postgres server.
In all cases, I'm talking about consuming results from client code that communicates with Postgres over a socket behind the scenes (using the Npgsql library, in my case).
Q1: What if I execute "SELECT * FROM AstronomicallyLargeTable" as my one and only command sent to Postgres? Does the server allocate all the memory for the entire result set and only then start sending data to me? Or does it (effectively) create its own cursor behind the scenes and stream the data back bit by bit (with no huge extra buffer allocation on the server)?
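In Npgsql terms, Q1 is the following sketch (placeholder connection string and column access; the client-side loop reads one row at a time either way, so the question is purely about what the server buffers before it starts sending):

    using Npgsql;

    using var conn = new NpgsqlConnection("Host=localhost;Database=mydb");
    conn.Open();

    // One plain SELECT over the wire. Does the server materialize the whole
    // result set before the first row reaches this reader, or not?
    using var cmd = new NpgsqlCommand("SELECT * FROM AstronomicallyLargeTable", conn);
    using var reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        var id = reader.GetInt64(0);   // hypothetical column layout
        // ... process the row, then let it go ...
    }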
Q2: What if I already have a cursor reference to an astronomically large result set (say, because I've already made one round trip and got the cursor reference back from some function), and I then send "FETCH ALL FROM cursorname" over the wire to Postgres? Is that stupid, because the server will allocate ALL the memory for ALL the results before sending anything back to me? Or does "FETCH ALL FROM cursorname" actually work as I would like, streaming the data back lazily as I consume it, without any massive buffer allocation on the Postgres server?
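Spelled out, the Q2 scenario looks like this (a sketch: "get_big_cursor" is my placeholder function from above; cursors only live inside a transaction, and Npgsql gives me back the refcursor as its name string):

    using Npgsql;

    using var conn = new NpgsqlConnection("Host=localhost;Database=mydb");
    conn.Open();
    using var tx = conn.BeginTransaction();

    // Round trip 1: the function opens the cursor server-side and returns
    // only the cursor's name.
    using var call = new NpgsqlCommand("SELECT get_big_cursor()", conn);
    var cursorName = (string)call.ExecuteScalar();

    // Round trip 2: the command my question is about. Does the server buffer
    // ALL the rows before replying, or stream them back as I read?
    using var fetch = new NpgsqlCommand($"FETCH ALL FROM \"{cursorName}\"", conn);
    using var reader = fetch.ExecuteReader();
    while (reader.Read())
    {
        // consume one row at a time, then discard it
    }

    tx.Commit();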
EDIT: Further Clarification
I'm asking about the case where I know that my data access layer streams data from the server to me one row at a time (so there are no large client-side buffers, no matter how long the result set is), and where I also know that my own application consumes the data one row at a time and then discards it (so no client-side buffers there either). I definitely do NOT want to fetch all of those rows into client-side memory and then do something with them. I can see that that would be completely stupid!
So, I think the whole question (for the use case just described) is how quickly PostgreSQL starts streaming and how much buffer memory it allocates for a FETCH ALL. IF (and it's a big "IF"...) PostgreSQL does not allocate a huge buffer of all the rows before it starts sending, and if it streams the rows back to Npgsql one at a time, starting quickly, then I believe (but please tell me why/if I'm wrong) that there is still a clear use case for FETCH ALL FROM cursorname!
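(And for completeness: if the answer turns out to be that FETCH ALL does buffer everything server-side, my fallback would presumably be to drain the cursor in chunks, trading extra round trips for bounded buffering. A sketch, reusing the hypothetical cursor name from above:)

    using Npgsql;

    // Fallback sketch: FETCH FORWARD in fixed-size chunks instead of FETCH ALL.
    // Each chunk is its own round trip; an empty chunk means the cursor is done.
    static void DrainInChunks(NpgsqlConnection conn, string cursorName)
    {
        while (true)
        {
            var rows = 0;
            using var cmd = new NpgsqlCommand($"FETCH FORWARD 1000 FROM \"{cursorName}\"", conn);
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                rows++;
                // ... process and discard the row ...
            }
            if (rows == 0) break;   // cursor exhausted
        }
    }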