Re: Really large fetches

From: Andrus (andru..bjectstyle.org)
Date: Fri Jun 14 2002 - 22:16:33 EDT

  • Next message: Robert John Andersen: "Re: Really large fetches"

    At 09:38 PM 6/14/2002 -0400, Robert John Andersen wrote:
    >Is it possible to bust up fetches into segments? I'm trying to do a large
    >fetch and the jvm bombs out with an out of memory exception.

    It is interesting that I discussed the same thing with Nikhil just a few
    days ago. Looks like this is a popular issue. I guess I'd put this in as a
    feature request for BETA (ALPHA is coming out on Sunday). Here are some
    snippets from those emails, categorized by suggested solution. I'd welcome
    other ideas and comments on the solutions suggested below.

    [Multithreading]:

    To address slowness of big queries:

    >Another interesting thought [...] Perform actual ResultSet processing in a
    >separate thread. Such a thread would populate a list with objects from
    >ResultSet while the list itself is already being accessed by the main
    >thread, and users can preview the first X number of result rows.
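    The idea above could be sketched roughly like this (row production is
    simulated with a loop; a real version would read from a java.sql.ResultSet,
    and the class and method names here are just placeholders):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: a worker thread appends rows to a synchronized list
// while the caller can already read the rows fetched so far.
public class ThreadedFetch {

    public static List<String> startFetch(int totalRows) {
        final List<String> rows = Collections.synchronizedList(new ArrayList<>());
        Thread worker = new Thread(() -> {
            for (int i = 0; i < totalRows; i++) {
                // Real code would build an object from the current ResultSet row.
                rows.add("row-" + i);
            }
        });
        worker.start();
        return rows; // caller may preview the first X rows while the fetch continues
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> rows = ThreadedFetch.startFetch(1000);
        System.out.println("previewed so far: " + rows.size() + " rows");
        Thread.sleep(200); // in practice, poll or block until enough rows exist
        System.out.println("after waiting: " + rows.size() + " rows");
    }
}
```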

    [Paging ResultSet]:

    To address slowness and memory issues (though it will still require a
    primary key fetch):

    >[...] Basically this would work almost like faults in WebObjects - when
    >sending a query, one would set a page size = N. First N objects in the
    >query result will be returned fully initialized, the rest will be faults.
    >When an object at index "i" is accessed, a special List implementation
    >that holds the objects can check if this page is resolved, if not, objects
    >from aN <= i <= (a + 1)N will be resolved from faults. Pages don't even
    >have to be in order.
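    A minimal sketch of such a special List, assuming a per-object resolver
    callback standing in for the real fault-resolution query (all names here
    are invented for illustration):

```java
import java.util.AbstractList;
import java.util.function.IntFunction;

// Hypothetical sketch: a List that holds unresolved faults (nulls here) and
// resolves a whole page of N objects the first time any index in that page
// is accessed. Pages resolve in any order, as described above.
public class PagedList extends AbstractList<String> {

    private final String[] elements;            // null == fault
    private final int pageSize;                 // N
    private final IntFunction<String> resolver; // stands in for the DB fetch

    public PagedList(int size, int pageSize, IntFunction<String> resolver) {
        this.elements = new String[size];
        this.pageSize = pageSize;
        this.resolver = resolver;
    }

    @Override
    public String get(int i) {
        if (elements[i] == null) { // page containing index i not yet resolved
            int pageStart = (i / pageSize) * pageSize;
            int pageEnd = Math.min(pageStart + pageSize, elements.length);
            for (int j = pageStart; j < pageEnd; j++) {
                elements[j] = resolver.apply(j); // resolve the whole page at once
            }
        }
        return elements[i];
    }

    @Override
    public int size() {
        return elements.length;
    }
}
```

    Accessing index 5 with a page size of 4 would resolve objects 4 through 7
    in one step, leaving the other pages as faults.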

    [Trimming ResultSet]:

    Addresses memory issues. A simple but efficient way would be to set a
    "fetch limit" on select queries, so that extra rows are simply never read
    from the result set, thus protecting the memory.
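    In loop form this could look like the sketch below (the iterator stands in
    for java.sql.ResultSet.next(); a JDBC version could additionally call
    Statement.setMaxRows(limit) so the driver enforces the cap):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch: stop reading once the fetch limit is reached, so
// rows past the limit never enter memory.
public class FetchLimit {

    public static List<String> fetchWithLimit(Iterator<String> rows, int limit) {
        List<String> result = new ArrayList<>();
        while (rows.hasNext() && result.size() < limit) {
            result.add(rows.next()); // rows past the limit are never read
        }
        return result;
    }
}
```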

    Andrus



    This archive was generated by hypermail 2.0.0 : Fri Jun 14 2002 - 22:16:01 EDT