Ah, right... PaginatedQuery. I had completely forgotten about it.
*ponder* I believe the way this query works has changed since I first
tried to use it, right?
It seems to me that it used to be that once the objects for a
particular page were fetched, they were cached? So if you, say,
fetched 10 pages' worth of data, all of that data would still be
cached. But I seem to recall that it will now swap pages of data in
and out of the cache? If that's the case, it /might/ work for at
least the majority of cases. There are still cases where even a
paginated query requires too much memory*, but those are going to be
"special requirements" anyway, so... fair enough. I'll see if
PaginatedQuery will solve the dilemma.
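Something like the following is what I have in mind, for reference
(untested and from memory; "Artist" and "context" are just stand-ins
for whatever entity class and ObjectContext are actually in play):

    // a paginated query: objects are resolved one page at a time
    SelectQuery query = new SelectQuery(Artist.class);
    query.setPageSize(50);

    List artists = context.performQuery(query);

    int total = artists.size();            // size is known without resolving every row
    Artist a = (Artist) artists.get(1234); // touching an index pulls in that page's objects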
Robert
* Consider a case I encountered recently: needing to create a page
that lets you paginate through (potentially) all the data in a table
with 800,000 records, in a fairly memory-constrained environment;
even storing only the int pks, that works out to something like 25
megs of memory once per-object overhead is accounted for. It was
better to take on the additional overhead of running a "count" query
to determine the total result size, and then custom-fetch pages as
required.
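In code, the workaround amounted to roughly this (again paraphrasing
from memory, untested; "Artist", pageSize and pageIndex are
placeholders, and it assumes a Cayenne version with EJBQL support and
setFetchOffset()):

    // total row count via a separate count query (one extra statement)
    EJBQLQuery countQuery = new EJBQLQuery("select count(a) from Artist a");
    int total = ((Number) context.performQuery(countQuery).get(0)).intValue();

    // then fetch only the requested page instead of holding everything
    int pageSize = 50;
    int pageIndex = 3; // zero-based page requested by the UI
    SelectQuery pageQuery = new SelectQuery(Artist.class);
    pageQuery.setFetchOffset(pageIndex * pageSize);
    pageQuery.setFetchLimit(pageSize);
    List page = context.performQuery(pageQuery);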
On Mar 7, 2009, at 12:26 AM, Andrey Razumovsky wrote:
> Hi Robert,
>
> What's the point of having the query count the number of results if
> you're using fetch limit & fetch offset? That way another SQL
> statement (select count(*)) would be necessary.
> I don't know about Tapestry, but I did the same thing for GWT-Ext a
> week ago. When I first open my table, the query and its result
> processor (a simple interface) are cached on the server side, in the
> session, using the query's cache key. The query is paginated, and to
> get some data the client only needs to send the key, offset and
> limit. The total count is determined simply by taking the size of
> the paginated query's result List, and it is sent to the client in a
> header of the response. There are still some caching issues to take
> care of, but generally it works fine!
> So I turned to paginated queries instead of fetch limits and
> offsets. Will that fit your case?
>
> Andrey
>
> 2009/3/7 Robert Zeigler <robert.zeigle..mail.com>
>
>> I've been working on the tapestry/cayenne integration. One thing
>> that
>> would be nice is to have automatic "conversion" from query to
>> "GridDataSource" (the model backing tapestry's Grid component), so
>> that you
>> could simply define a query and pass that to the grid to have the
>> results
>> displayed, paginated, etc.
>>
>> I have a first pass of this working by simply executing the query
>> to fetch
>> the list of objects and allowing tapestry to convert the list into a
>> GridDataSource. This works... for small lists. But it certainly
>> won't scale for anything large. GridDataSource provides the hooks
>> required to
>> select a
>> "page" of data at a time. And queries now allow for setting fetch
>> limits
>> and offsets, which allows me to generically program this to handle
>> many
>> (most?) common use-cases. So, I'm 90% there.
>>
>> But there's still one thing lacking. The GridDataSource has to tell
>> tapestry how many rows are available, total.
>>
>> For many cases* (those that don't involve row aggregation), this
>> can be accomplished via a simple count statement; i.e., executing
>> /nearly/ the same query... but selecting the count of the records
>> rather than the actual properties.
>>
>> So I got to thinking that maybe there would be a way to take an
>> existing query and "tweak" it into a counting version of itself.
>> Thoughts on this approach?
>>
>>
>> * This breaks down if the query is doing any kind of grouping. I'm
>> not aware of a standard way to ask how many rows a particular query
>> /would/ return in this case. Anybody? On MySQL, one could set the
>> fetch limit to 1, perform the query (with SQL_CALC_FOUND_ROWS), and
>> then perform a query for "found_rows()", but that's not applicable
>> anywhere but MySQL, so...
>>
>> Robert
>>
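P.S. For what it's worth, the shape of the query-backed GridDataSource
I'm experimenting with is roughly the following. It's an untested
sketch: sort constraints are ignored, the count assumes EJBQL is
available, and it naively assumes the entity's Java class name matches
its mapped name.

    import java.util.List;

    import org.apache.cayenne.ObjectContext;
    import org.apache.cayenne.query.EJBQLQuery;
    import org.apache.cayenne.query.SelectQuery;
    import org.apache.tapestry5.grid.GridDataSource;
    import org.apache.tapestry5.grid.SortConstraint;

    // Sketch of a GridDataSource backed by a Cayenne query: a separate
    // count query supplies the total, and each page is fetched with
    // fetch offset/limit as the Grid asks for it.
    public class SelectQueryDataSource implements GridDataSource {

        private final ObjectContext context;
        private final Class entityType;

        private List currentPage;
        private int currentPageStart;

        public SelectQueryDataSource(ObjectContext context, Class entityType) {
            this.context = context;
            this.entityType = entityType;
        }

        public int getAvailableRows() {
            // "nearly the same query", but selecting the count
            EJBQLQuery count = new EJBQLQuery(
                    "select count(e) from " + entityType.getSimpleName() + " e");
            return ((Number) context.performQuery(count).get(0)).intValue();
        }

        public void prepare(int startIndex, int endIndex,
                List<SortConstraint> sortConstraints) {
            // sort constraints ignored in this sketch
            SelectQuery query = new SelectQuery(entityType);
            query.setFetchOffset(startIndex);
            query.setFetchLimit(endIndex - startIndex + 1); // endIndex is inclusive
            currentPage = context.performQuery(query);
            currentPageStart = startIndex;
        }

        public Object getRowValue(int index) {
            return currentPage.get(index - currentPageStart);
        }

        public Class getRowType() {
            return entityType;
        }
    }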