Peter is right in his assessment: the DataContext caches every object
it loads, and here the objects also become dirty via 'deleteObject',
so they cannot be evicted. To solve this, keep a loop counter, and
every X iterations (e.g. every 1000 or so) commit and throw away the
DataContext, replacing it with a fresh one. This frees the memory.
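For example, something along these lines (an untested sketch, reusing
'it_q' and 'feedsquery' from your code below; exception handling and
iterator cleanup omitted for brevity):

  DataContext context = DataContext.createDataContext();
  int count = 0;
  while (it_q.hasNextRow()) {
      Map row = it_q.nextDataRow();
      FeedsAll obj = (FeedsAll)
          context.objectFromDataRow(FeedsAll.class, new DataRow(row), true);
      context.deleteObject(obj);

      if (++count % 1000 == 0) {
          // flush the deletes, then replace the context so its
          // cached (and dirty) objects become garbage collectable
          context.commitChanges();
          context = DataContext.createDataContext();
      }
  }
  context.commitChanges(); // commit the remainder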
On top of that, if you want all 800,000 deletes to be processed as a
single transaction, you can still do the above, but wrap the whole
thing in a manual transaction:
http://cayenne.apache.org/doc/understanding-transactions.html
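Roughly like this (again untested, from memory of the API described on
that page, so double-check against the docs):

  DataDomain domain = context.getParentDataDomain();
  Transaction tx = domain.createTransaction();
  Transaction.bindThreadTransaction(tx);
  try {
      // ... run the delete/commit loop from above; every
      // commitChanges() now joins this single transaction ...
      tx.commit();
  }
  catch (Exception ex) {
      tx.setRollbackOnly();
  }
  finally {
      Transaction.bindThreadTransaction(null);
      if (tx.getStatus() == Transaction.STATUS_MARKED_ROLLEDBACK) {
          try {
              tx.rollback();
          }
          catch (Exception rollbackEx) {
              // ignore; the original failure matters more
          }
      }
  }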
Andrus
On Mar 6, 2007, at 6:51 PM, Peter Schröder wrote:
> I experienced the same problem. I think it was caused by using
>
> objectFromDataRow()
>
> which puts each object into the DataRow cache and eventually results
> in out-of-memory errors.
>
> As a workaround I used the raw data-row objects from the query instead.
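> For example (a rough, untested sketch; exception handling trimmed):
>
> ResultIterator it = context.performIteratedQuery(feedsquery);
> try {
>     while (it.hasNextRow()) {
>         Map row = it.nextDataRow();
>         // work with the raw data row directly; nothing is
>         // registered in the context, so nothing piles up in memory
>     }
> }
> finally {
>     it.close(); // releases the underlying connection
> }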
>
>
> -----Original Message-----
> From: marco turchi [mailto:marco.turch..mail.com]
> Sent: Tuesday, March 6, 2007 17:48
> To: use..ayenne.apache.org
> Subject: out of memory
>
> Dear experts,
> I am running code that reads 800,000 records from a database, but all
> I get is the error "out of memory: heap...".
> I use the following Cayenne calls:
>
> Expression exp_date = Expression.fromString(
>     "dateRetrieval >= $start_d and dateRetrieval < $end_d and type=$type");
> Map parameters = new HashMap();
> parameters.put("end_d", "2006-03-09");
> parameters.put("start_d", "2005-10-07");
> parameters.put("type", "Job");
> exp_date = exp_date.expWithParameters(parameters);
> final SelectQuery feedsquery = new SelectQuery(FeedsAll.class, exp_date);
> int count = 0;
> try {
>     final ResultIterator it_q = context.performIteratedQuery(feedsquery);
>     while (it_q.hasNextRow()) {
>         final Map row = it_q.nextDataRow();
>         final FeedsAll obj = (FeedsAll)
>             context.objectFromDataRow(FeedsAll.class, new DataRow(row), true);
>         .....
>         context.deleteObject(obj);
>     }
> }
> catch (Exception e) {
>     log.error("Fatal Error: ", e);
>     log.info(e.getStackTrace());
> }
>
> What is wrong here? I understood that by using performIteratedQuery I
> could read a huge number of records without memory problems.
> Can you help me?
>
> Thanks a lot
> Marco
>