Best way to show big result sets page by page - Java

I'm looking for the best approach, in terms of performance, to show a result set on a web page in parts, say 10 rows per page, with a "Next" button if the user wants to see more results. My guess (probably wrong) is that a new request should be sent to the server each time the "Next" button is clicked. Is that right?

Currently I'm trying to learn Java and GWT.

Thank you!

PS: Sorry for my English.

+4
6 answers

The answer will depend on the behavior of your users: how often will they look at page 2, page 10, or page 100?

If they rarely look at page 2 and never look at page 10 or page 100, simply re-running the query for each page is fine.

If they usually look at page 2, often look at page 10, and sometimes look at page 100, then a partial cache is useful: cache the first 100 (or 200 or 300) results and re-run the query only when they page past what is cached. I would probably keep the cache in the user's session, although you need to think about what that means if your application server runs in a cluster.

And if they always look at every result? Partial caches are still the answer, because you do not want to hold huge chunks of data in memory. A rough sketch of the session-cache idea follows.
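For illustration only, here is a minimal sketch of that partial-cache idea, assuming a servlet session; the ResultCache names, block size, and the fetchBlock DAO method are all made up for this example.

// Sketch: keep a window of results in the HTTP session and re-query
// only when the user pages past what is currently cached.
import java.util.List;
import javax.servlet.http.HttpSession;

public abstract class ResultPager {
    private static final int BLOCK_SIZE = 100;  // rows cached per database query
    private static final int PAGE_SIZE = 10;    // rows shown per page

    /** Returns the rows for the requested page, refreshing the cached block if needed. */
    public List<Row> getPage(HttpSession session, int page) {
        int firstRow = page * PAGE_SIZE;
        CachedBlock block = (CachedBlock) session.getAttribute("resultBlock");

        // Cache miss: the requested page falls outside the cached window.
        if (block == null
                || firstRow < block.offset
                || firstRow + PAGE_SIZE > block.offset + block.rows.size()) {
            int blockOffset = (firstRow / BLOCK_SIZE) * BLOCK_SIZE;
            block = new CachedBlock(blockOffset, fetchBlock(blockOffset, BLOCK_SIZE));
            session.setAttribute("resultBlock", block);
        }
        int from = firstRow - block.offset;
        int to = Math.min(from + PAGE_SIZE, block.rows.size());
        return block.rows.subList(from, to);
    }

    /** Re-run the query here with an OFFSET/LIMIT (or equivalent); not shown. */
    protected abstract List<Row> fetchBlock(int offset, int count);

    /** Simple holder for a cached window of results. */
    static class CachedBlock {
        final int offset;
        final List<Row> rows;
        CachedBlock(int offset, List<Row> rows) { this.offset = offset; this.rows = rows; }
    }

    /** Placeholder for whatever your row/DTO type is. */
    static class Row {}
}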

+3

Since you have "GWT" in your tags, I assume that your server application is running on Google App Engine (GAE).

  • One approach is to have your first query fetch all the results, save them in the database, display the first 20, and then let the next/previous links pull subsets of the saved data from the database. You must remember to remove these results from the database when the user's session expires!

  • Another approach is to fetch all the results on every page view, skip rows until you reach the desired subset of 20, and display only those (a sketch of this skip loop appears after the list).

I think the second approach will work better with GAE, unless your query returns more than 1000 results, since GAE will not let you fetch more than that in a single query.

  • The best approach, if your data and keys lend themselves to it, is to select exactly the right 20 data elements at query time. But if your data is not keyed by a continuously incrementing value, this can be difficult to do.
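As a rough illustration of the second approach, the skip loop might look something like this in plain Java; the Entry type and the page size are placeholders, and the iterator stands for whatever your query returns.

// Sketch: run the full query each time and skip rows until the requested page starts.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SkipPager {
    private static final int PAGE_SIZE = 20;

    /** Walks the full result iterator and keeps only the rows for one page. */
    public List<Entry> page(Iterator<Entry> allResults, int pageNumber) {
        int skip = pageNumber * PAGE_SIZE;
        List<Entry> page = new ArrayList<>(PAGE_SIZE);
        while (allResults.hasNext() && page.size() < PAGE_SIZE) {
            Entry e = allResults.next();
            if (skip > 0) {
                skip--;          // still before the requested page
            } else {
                page.add(e);     // inside the requested page
            }
        }
        return page;
    }

    /** Placeholder entity. */
    public static class Entry {}
}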
+1

Usually you fetch only one "page" from the database at a time.

Say the query

select * from mytable where column1="a"; 

returns 1000 records. Then fetching a page looks like this (MySQL):

 select * from mytable where column1="a" limit 0, 10; 

for page 1 (rows 0 to 9), and page 2 would be fetched like this:

 select * from mytable where column1="a" limit 10, 10; 

and so on. If the data set is large (1,000 records) but not huge (1,000,000 records), you can also send the entire data set to the client at once and page through it with JavaScript. This has the added benefit that sorting can be done on the client side.
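For what it's worth, the LIMIT-based paging above might look roughly like this from Java over JDBC; the table and column names come from the example query, the page size is arbitrary, and the Connection is assumed to be obtained elsewhere.

// Sketch: fetch one page with MySQL's LIMIT offset, count syntax.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class LimitPagingDao {
    private static final int PAGE_SIZE = 10;

    /** Fetches one page of rows; page numbers start at 0. */
    public List<String> fetchPage(Connection conn, int page) throws SQLException {
        String sql = "select * from mytable where column1 = 'a' limit ?, ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, page * PAGE_SIZE);  // offset of the first row
            ps.setInt(2, PAGE_SIZE);         // number of rows to return
            List<String> rows = new ArrayList<>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(rs.getString("column1"));
                }
            }
            return rows;
        }
    }
}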

0

If you cannot use the caching approach due to memory limitations, use a query-based approach. Adjust the WHERE clause in your search query to explicitly select data based on the page the user requested. This approach requires you to pass additional context back and forth with each page request.

One approach is to page using logical row identifiers (or primary keys) that both bound the page and uniquely identify each row in the result set.

Say you have a very simple table with a numerically sequential row identifier. If you are showing 100 rows per page and the user has requested the second page, you would adjust the WHERE clause as follows:

 select col, col2 from my_table where row_id > 100 and row_id <= 200 order by row_id asc 
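If it helps, a rough JDBC version of that range query might look like the following; it assumes the same my_table/row_id schema as the example, that page numbers start at 1, and that a Connection is supplied elsewhere.

// Sketch: select only the rows whose row_id falls inside the requested page's range.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class RowIdPagingDao {
    private static final int PAGE_SIZE = 100;

    /** Fetches the rows for one page based on their row_id range. */
    public List<Object[]> fetchPage(Connection conn, int page) throws SQLException {
        long low = (long) (page - 1) * PAGE_SIZE;  // exclusive lower bound
        long high = low + PAGE_SIZE;               // inclusive upper bound
        String sql = "select row_id, col, col2 from my_table "
                   + "where row_id > ? and row_id <= ? order by row_id asc";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, low);
            ps.setLong(2, high);
            List<Object[]> rows = new ArrayList<>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(new Object[] {
                        rs.getLong("row_id"), rs.getObject("col"), rs.getObject("col2") });
                }
            }
            return rows;
        }
    }
}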
0

You can cache/retrieve records at the web tier, at the backend tier (for example, in an EJB), or at the database level (with a LIMIT or row_id clause, as in the previous answers). Which one you should use depends on your requirements (as kdgregory says).

The most common approach is to cache them at the web tier in the session.

0

If you use JPA (which works well on GAE), you can paginate the results with

Query#setFirstResult(int startPosition)

Query#setMaxResults(int maxResult)
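A minimal sketch of how that looks in code, assuming a plain javax.persistence EntityManager; the MyEntity class and the page size are placeholders for whatever your mapped entity actually is.

// Sketch: one page of results via JPA's setFirstResult/setMaxResults.
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.TypedQuery;

public class JpaPagingDao {
    private static final int PAGE_SIZE = 10;

    /** Returns one page of entities; page numbers start at 0. */
    public List<MyEntity> findPage(EntityManager em, int page) {
        TypedQuery<MyEntity> query = em.createQuery(
                "select e from MyEntity e order by e.id", MyEntity.class);
        query.setFirstResult(page * PAGE_SIZE);  // index of the first result to return
        query.setMaxResults(PAGE_SIZE);          // maximum number of results per page
        return query.getResultList();
    }

    /** Placeholder entity used only for this sketch. */
    @Entity
    public static class MyEntity {
        @Id
        Long id;
    }
}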

This article may be useful: Paging large datasets with LazyList

0
