No, I didn't say that. I said that the behavior existed before
optimistic locking, and that OL depends on it. There are reasons for
it beyond optimistic locking, but I'll let someone else explain what
those reasons are.
As for why OL was implemented as full-attribute vs timestamps vs
versions, it's because I needed full-attribute OL, and I didn't need
timestamp or version OL. Patches are always welcome to support the
other two kinds :)
That still won't help you with your memory footprint issues, though.
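For anyone following along, the three flavors differ mainly in what the
UPDATE's WHERE clause matches against, which is what drives the memory
cost. A rough sketch of the SQL each one would generate (table and
column names are made up for illustration; this is not Cayenne's actual
generated SQL):

```java
// Illustrative WHERE clauses for the three optimistic-locking flavors.
// Table/column names are invented; real mappings would differ.
public class OlFlavors {
    // Full-attribute: the WHERE clause compares every column against
    // its original value, so the original snapshot of each attribute
    // must stay in memory until commit.
    static String fullAttribute() {
        return "UPDATE artist SET name = ? "
             + "WHERE id = ? AND name = ? AND birth_year = ?";
    }

    // Version column: only a single int per row must be remembered.
    static String versionColumn() {
        return "UPDATE artist SET name = ?, version = version + 1 "
             + "WHERE id = ? AND version = ?";
    }

    // Timestamp column: only a single timestamp per row is remembered.
    static String timestampColumn() {
        return "UPDATE artist SET name = ?, updated_at = ? "
             + "WHERE id = ? AND updated_at = ?";
    }

    public static void main(String[] args) {
        System.out.println(fullAttribute());
        System.out.println(versionColumn());
        System.out.println(timestampColumn());
    }
}
```

In all three cases a conflict shows up the same way: the UPDATE matches
zero rows, and the framework turns that into an optimistic-lock failure.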
On 8/29/05, Gili <cowwo..bs.darktech.org> wrote:
> Hi,
>
> Just curious why we chose to implement optimistic locking like we did.
> The reason I ask is that I want to be able to:
>
> add 1000 objects
> flush to database
> add 1000 objects
> ...
> many objects later...
> dataContext.commit()
>
> now, I should be able to dataContext.rollback() at any time and this
> should undo all changes all the way back to the beginning of the
> context. I've been talking to Mike on IRC and he says that to his
> knowledge it is unlikely we can implement the above behavior because
> right now optimistic locking caches the original attribute values so
> that at commit time we can compare them to the values in the DB and
> throw an exception if the optimistic lock check failed. This incurs
> heavy memory usage.
>
> Now, if we were only remembering a version/timestamp per row, it would
> be much easier to implement this. I ask because Hibernate can already
> support this behavior using this code:
>
> // execute 1000 times
> session.saveOrUpdate(object);
> ...
> session.flush();
> session.clear();
> ...
> // many objects later
> ...
> committing or rolling back the session's transaction still covers
> everything, all the way back past the session.flush()/clear() calls.
>
> I am sorry for all these questions but I am rather new to all of this :)
>
> Thank you,
> Gili
> --
> http://www.desktopbeautifier.com/
>
This archive was generated by hypermail 2.0.0 : Mon Aug 29 2005 - 14:43:56 EDT