Right, Mike just pointed out to me that this concept won't work. What I
*meant* to say is this...
... register a lot of new objects ...
... flush those objects to the DB layer of the open transaction ...
... repeat 1000 times ...
... commit (or roll back) the transaction ...
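In rough Java pseudocode (a sketch only: flushToTransaction() is made up,
since Cayenne has no such method today, which is exactly the point, and
"MyEntity" is a placeholder entity name):

  DataContext context = DataContext.createDataContext();

  for (int batch = 0; batch < 1000; batch++) {
      for (int i = 0; i < 1000; i++) {
          // register a new object with the context
          DataObject o = context.createAndRegisterNewObject("MyEntity");
          // ... set properties on o ...
      }
      // hypothetical: push the pending INSERTs into an open transaction
      // and release the in-memory snapshots without committing anything
      context.flushToTransaction();
  }

  // at the very end, commit everything...
  context.commitChanges();
  // ...or roll the whole thing back, including the already-flushed rows:
  // context.rollbackChanges();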
The benefit here is that you're flushing your in-memory state to the DB,
so you can fit more objects into memory in Cayenne, and you can still
roll back the entire thing (past the flush) if you so wish.
The downside is that Cayenne simply does not work this way because it
would require us to initialize a transaction when flushing the state to
the DB and keep it open until commit/rollback time.
Anyway, I guess I'll leave it at that. I'm not a DB expert. I'm not sure
which approach is preferable here. I just know that when working with
Hibernate I consistently had to flush the state to the DB every X object
additions or else I'd run out of memory. If I didn't have such
functionality in Cayenne, I really don't know what I would do. I could
split the operation into multiple commits like you mentioned, but the
operation is really meant to be a single transaction... and I do want it
to roll back fully in case of any failure along the way.
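For reference, the Hibernate pattern I keep referring to looks roughly
like this (Hibernate 3 API; just a sketch, with an Iterator of
already-built mapped objects standing in for however you produce them):

  import java.util.Iterator;
  import org.hibernate.Session;
  import org.hibernate.SessionFactory;
  import org.hibernate.Transaction;

  public class BatchSave {

      // saves everything in ONE transaction, flushing and clearing the
      // session every 1000 objects so memory stays bounded
      public static void saveAll(SessionFactory factory, Iterator objects) {
          Session session = factory.openSession();
          Transaction tx = session.beginTransaction();
          try {
              int count = 0;
              while (objects.hasNext()) {
                  session.saveOrUpdate(objects.next());
                  if (++count % 1000 == 0) {
                      session.flush();  // push pending INSERTs to the DB inside the open tx
                      session.clear();  // detach them so the session stops holding references
                  }
              }
              tx.commit();              // makes it all permanent...
          } catch (RuntimeException e) {
              tx.rollback();            // ...or undoes it all, including the flushed rows
              throw e;
          } finally {
              session.close();
          }
      }
  }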
ResultIterator will help you for reading a lot of objects from the
database, but it won't help you if you're adding a lot of objects in the
first place.
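It does look handy for the read side, though. Going from memory on the
1.x API described on that user guide page (and with "Artist" just being
the placeholder entity from the Cayenne examples), it's roughly:

  import java.util.Map;
  import org.objectstyle.cayenne.CayenneException;
  import org.objectstyle.cayenne.access.DataContext;
  import org.objectstyle.cayenne.access.ResultIterator;
  import org.objectstyle.cayenne.query.SelectQuery;

  public class IteratedRead {

      // streams data rows one at a time instead of materializing the whole list
      public static void readAll(DataContext context) throws CayenneException {
          ResultIterator it = context.performIteratedQuery(new SelectQuery("Artist"));
          try {
              while (it.hasNextRow()) {
                  Map row = (Map) it.nextDataRow();
                  // process one row (a Map of column values), then let it go
              }
          } finally {
              it.close();
          }
      }
  }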
Gili
Andrus Adamchik wrote:
> Not sure how optimistic locking got into this mix, but DataContext
> simply doesn't operate within an open transaction. It is
> "disconnected", period.
>
> So if you are concerned about memory use, maybe you should focus on
> streaming BLOB implementation instead? Applications that have 30-40
> thousands of objects (without LOBs) in the DataContext are not that
> uncommon.
>
> Also check out ResultIterator, maybe it'll fit your use patterns:
>
> http://objectstyle.org/cayenne/userguide/perform/result-iterator.html
>
> Andrus
>
>
>
> On Aug 29, 2005, at 2:31 PM, Gili wrote:
>
>> Hi,
>>
>> Just curious why we chose to implement optimistic locking like we
>> did. The reason I ask is that I want to be able to:
>>
>> add 1000 objects
>> flush to database
>> add 1000 objects
>> ...
>> many objects later...
>> dataContext.commit()
>>
>> now, I should be able to dataContext.rollback() at any time and
>> this should undo all changes all the way back to the beginning of the
>> context. I've been talking to Mike on IRC and he says that to his
>> knowledge it is unlikely we can implement the above behavior because
>> right now optimistic locking caches the original attribute value so
>> that at commit time we can compare it to the DB version and throw an
>> exception if optimistic locking failed. This incurs heavy memory usage.
>>
>> Now, if we were only remembering a version/timestamp per row, it
>> would be much easier to implement this. I ask because Hibernate can
>> already support this behavior using this code:
>>
>> // execute 1000 times
>> session.saveOrUpdate(object);
>> ...
>> session.flush();
>> session.clear();
>> ...
>> // many objects later
>> ...
>> session.commit() or session.rollback() will go all the way past the
>> session.flush()/clear() calls.
>>
>> I am sorry for all these questions but I am rather new to all of
>> this :)
>>
>> Thank you,
>> Gili
>> --
>> http://www.desktopbeautifier.com/
>
>
>
--
http://www.desktopbeautifier.com/