Re: OutOfMemoryError: reading a large number of objects one by one

From: Derek Rendall (derek.rendal..mail.com)
Date: Wed May 16 2007 - 07:50:02 EDT

    My patch seemed to scale well - it got to about (I think) 40 MB or so and
    then stabilized, with only nominal growth from there. I did not test much
    higher than 100K records as I did not need to. I'd guess the total size is
    roughly the size of each record when represented as a data row, multiplied
    by the number of rows.

    I think this is a pretty common "problem" with ORM tools. When I looked at
    it a couple of years ago, neither Hibernate nor Kodo JDO appeared to
    address the issue with any better approach. I did not check TopLink. ORM
    tools tend to focus on simplifying user tasks rather than batch-type tasks.

    Note: some people will advocate sitting such logic on top of a standard JDBC
    result set (I'm not commenting one way or another ;-). That's really the only
    way to avoid loading at least something for each record up front.
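
    For what it's worth, a bare-bones sketch of that JDBC approach might look
    something like this (connection details, table and column names are made
    up; the fetch size is only a hint, and some drivers - PostgreSQL, for
    instance - only stream rows when auto-commit is off):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class JdbcStreamSketch {

        static void streamRows(String url, String user, String password)
                throws SQLException {
            Connection con = DriverManager.getConnection(url, user, password);
            try {
                // Ask the driver to fetch rows in small batches instead of
                // buffering the whole result set in memory.
                con.setAutoCommit(false);
                PreparedStatement st =
                        con.prepareStatement("SELECT id, name FROM my_table");
                st.setFetchSize(100);

                ResultSet rs = st.executeQuery();
                while (rs.next()) {
                    long id = rs.getLong("id");
                    String name = rs.getString("name");
                    // process one row at a time; earlier rows are not retained
                }

                rs.close();
                st.close();
            }
            finally {
                con.close();
            }
        }
    }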

    Also, you should probably start tracking new objects before the while loop
    as well (for the first 100 :-)
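
    A rough sketch of that tracking/unregistering idea, assuming Cayenne
    1.2/2.0 era APIs (DataContext.createDataContext(), SelectQuery with a page
    size, DataContext.unregisterObjects()). The imports use the Cayenne 2.0
    package names (1.2 would be org.objectstyle.cayenne.*), "MyClassA" is just
    the entity from this thread, and the batch size of 100 is arbitrary:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.cayenne.DataObject;
    import org.apache.cayenne.access.DataContext;
    import org.apache.cayenne.query.SelectQuery;

    public class BatchedReadSketch {

        static void processAll() {
            DataContext context = DataContext.createDataContext();

            SelectQuery query = new SelectQuery("MyClassA");
            // Resolve objects lazily, one page at a time, instead of
            // materializing the whole result list up front.
            query.setPageSize(100);

            List results = context.performQuery(query);
            List<DataObject> processed = new ArrayList<DataObject>();

            for (Object next : results) {
                DataObject obj = (DataObject) next;

                // ... do the actual per-record work here ...

                processed.add(obj);
                if (processed.size() >= 100) {
                    // Detach the finished batch so the context (and the GC)
                    // can let go of it.
                    context.unregisterObjects(processed);
                    processed.clear();
                }
            }

            if (!processed.isEmpty()) {
                context.unregisterObjects(processed);
            }
        }
    }

    The important bit is unregistering every batch you are done with, including
    anything fetched before the loop started, so the context never accumulates
    the full result set.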

    Derek

    On 5/16/07, Andrus Adamchik <andru..bjectstyle.org> wrote:
    >
    >
    > On May 15, 2007, at 12:47 AM, Tomi N/A wrote:
    >
    > > Reduced the max number of objects to 1000. The result? An NPE at:
    > >
    > > for (MyClassC mcc : (List<MyClassC>) mca.getToMyClassC()
    > >         .getToParentClass().getMyClassCArray()) {
    >
    > Ok, so the cache size will have to be big enough to hold all resolved
    > objects within the lifetime of a context. So let's try another
    > strategy. Return the max objects back to 10000 and uncheck "use
    > shared cache" for the DataDomain.
    >
    > If this doesn't work, I suggest running the app in a profiler to see
    > exactly how objects are allocated and collected.
    >
    > > The database referential integrity ensures there can be no nulls as
    > > long as (mcc != null), which it is.
    > > As far as -Xmx is concerned, it's at its default value (64M), which
    > > should be several times more than necessary for the job.
    >
    > Agreed - the default 64M should be enough if there are no leaks.
    >
    > Andrus
    >
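
    (On Andrus's shared cache suggestion above - if you'd rather flip that
    setting in code than in the Modeler, something along these lines should be
    close; the method names are from memory for the 1.2/2.0 API, so
    double-check them against your version:)

    import org.apache.cayenne.access.DataDomain;
    import org.apache.cayenne.conf.Configuration;

    public class DisableSharedCache {

        public static void main(String[] args) {
            // Fetch the default domain from the shared configuration and turn
            // off the domain-wide shared snapshot cache; each DataContext then
            // keeps its own cache, which goes away with the context.
            DataDomain domain = Configuration.getSharedConfiguration().getDomain();
            domain.setSharedCacheEnabled(false);
        }
    }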


