> The proposal of throwing away the DataContext has the problem that
> I would lose other objects referenced by the new objects I am
> creating, but it could work.
> Removing the list of committed objects from the object store seems
> like a better solution, but how can I implement it? I can't find a
> method in either DataContext or ObjectStore to do that.
You may use ObjectStore.startTrackingNewObjects()/unregisterNewObjects().
This should clean the ObjectStore and release memory. In the past I may
have complained about this solution as lacking architectural purity,
but it works - that's what matters :-)
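
A minimal sketch of that pattern (the ObjectStore calls are the ones
named above; the Record entity and its setName() are hypothetical
stand-ins for your own DataObject):

    import org.objectstyle.cayenne.access.DataContext;

    DataContext context = DataContext.createDataContext();
    // begin tracking before creating a batch of new objects
    context.getObjectStore().startTrackingNewObjects();
    for (int i = 1; i <= 100000; i++) {
        Record r = (Record) context.createAndRegisterNewObject(Record.class);
        r.setName("row " + i);
        if (i % 100 == 0) {
            context.commitChanges();
            // drop the just-committed batch from the ObjectStore
            // so the objects become eligible for garbage collection
            context.getObjectStore().unregisterNewObjects();
            context.getObjectStore().startTrackingNewObjects();
        }
    }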
Andrus
On Dec 23, 2005, at 12:56 PM, Oscar Maire-Richard wrote:
> Yes, my problem is with the committed objects, as I want to create
> an unlimited number of records in the database, for example 100,000.
> I commit every 100 objects just to be able to release the memory.
> I am starting my JVM with -Xms256m -Xmx256m (I tried several memory
> and GC configurations, but the result is that the out-of-memory error
> occurs sooner or later).
>
> The proposal of throwing away the DataContext has the problem that
> I would lose other objects referenced by the new objects I am
> creating, but it could work.
> Removing the list of committed objects from the object store seems
> like a better solution, but how can I implement it? I can't find a
> method in either DataContext or ObjectStore to do that.
>
> Many thanks,
>
> Oscar Maire-Richard
>
> Cris Daniluk wrote:
>
>> Ahhhhhh. I read that wrong. Why not get the list of uncommitted
>> objects before the commit, then remove that list from the object
>> store afterwards? Discarding the context may cause too much to be
>> thrown away.
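>>
>> A rough sketch of that idea (hedged: it assumes the running Cayenne
>> version exposes DataContext.newObjects() and
>> DataContext.unregisterObjects(); check the API before relying on it):
>>
>>     import java.util.ArrayList;
>>     import java.util.List;
>>
>>     // snapshot the uncommitted new objects before the commit
>>     List pending = new ArrayList(context.newObjects());
>>     context.commitChanges();
>>     // evict the now-committed objects so they can be GC'd
>>     context.unregisterObjects(pending);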
>>
>> On 12/22/05, Gentry, Michael (Contractor)
>> <michael_gentr..anniemae.com> wrote:
>>
>>> It sounded to me like Oscar was doing more than 100 total, but
>>> was doing
>>> a commitChanges every 100 or so and wants to clear out the committed
>>> objects then.
>>>
>>> By the way, Oscar, what is your Java memory maximum set to?
>>> Maybe you
>>> have that set really low, too.
>>>
>>> /dev/mrg
>>>
>>>
>>> -----Original Message-----
>>> From: Cris Daniluk [mailto:cris.danilu..mail.com]
>>> Sent: Thursday, December 22, 2005 9:19 AM
>>> To: cayenne-use..bjectstyle.org
>>> Subject: Re: heap memory not released
>>>
>>>
>>> But is it the committed objects or the uncommitted objects that are
>>> causing the problem? I've had this happen when I've had tens of
>>> thousands of objects pulled into memory through funky joins, etc,
>>> but
>>> hundreds...
>>>
>>> Cris
>>>
>>> On 12/22/05, Gentry, Michael (Contractor)
>>> <michael_gentr..anniemae.com>
>>> wrote:
>>>
>>>> Can you do your commits using multiple DataContexts? That way you
>>>> can throw away the DCs you no longer need and the CayenneDataObjects
>>>> can be released.
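>>>>
>>>> Something like this, in sketch form (Record and setName() are
>>>> made-up placeholders for the real entity):
>>>>
>>>>     // one throwaway DataContext per batch of 100
>>>>     for (int batch = 0; batch < 1000; batch++) {
>>>>         DataContext context = DataContext.createDataContext();
>>>>         for (int i = 0; i < 100; i++) {
>>>>             Record r = (Record) context.createAndRegisterNewObject(Record.class);
>>>>             r.setName("row " + (batch * 100 + i));
>>>>         }
>>>>         context.commitChanges();
>>>>         // the context goes out of scope here, so it and its
>>>>         // committed objects become garbage
>>>>     }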
>>>>
>>>> -----Original Message-----
>>>> From: Oscar Maire-Richard [mailto:omair..idsa.es]
>>>> Sent: Thursday, December 22, 2005 6:29 AM
>>>> To: cayenne-use..bjectstyle.org
>>>> Subject: heap memory not released
>>>>
>>>>
>>>> Hi all,
>>>>
>>>> I am asking my application to generate a large number of
>>>> CayenneDataObjects in a single HTTP request, committing every few
>>>> objects (e.g. 100). The problem is that, for the whole of this long
>>>> request, all the created objects stay in heap memory and are not
>>>> released for garbage collection; their number grows until they
>>>> produce a system crash. It does not seem to depend on the cache
>>>> configuration. I tried DataRowStore.clear() unsuccessfully. I tried
>>>> System.gc() unsuccessfully; only when I close the session is the
>>>> garbage collector able to remove the objects.
>>>> I am using a session-bound data context created through a
>>>> WebApplicationListener.
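>>>>
>>>> In outline, the loop looks like this (simplified; Record and
>>>> setName() are placeholders for my real entity):
>>>>
>>>>     // context is the session-bound DataContext
>>>>     for (int i = 1; i <= 100000; i++) {
>>>>         Record r = (Record) context.createAndRegisterNewObject(Record.class);
>>>>         r.setName("row " + i);
>>>>         if (i % 100 == 0) {
>>>>             // commits, but the committed objects stay registered
>>>>             // in the context's ObjectStore and are never GC'd
>>>>             context.commitChanges();
>>>>         }
>>>>     }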
>>>>
>>>> Is there a way to force the release of committed new
>>>> CayenneDataObjects?
>>>>
>>>> Thanks in advance,
>>>>
>>>> Oscar Maire-Richard
>>>>