RE: Why don't we use version-based optimistic locking?

From: Gentry, Michael (Contractor)
Date: Mon Aug 29 2005 - 15:42:56 EDT

  • Next message: Gili: "Re: Why don't we use version-based optimistic locking?"

    Well, I personally prefer the way Cayenne does optimistic locking. I
    don't want to lock on a meaningless piece of data. Let's face it, which
    data is more important, the user-entered purchasePrice or a somewhat
    arbitrary recordVersionNumber? It is far too easy to update a record (in
    a production support role, with an external database utility, etc.) and
    forget to increment the version, which could have bad consequences.
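    To make that concrete, here is a minimal, self-contained sketch of what
    each strategy checks at commit time. The Row class and both helper
    methods are hypothetical (this is not Cayenne's API); the scenario is an
    external tool editing purchasePrice but forgetting to bump
    recordVersionNumber, as easily happens with an ad-hoc SQL update:

```java
import java.math.BigDecimal;

// Hypothetical in-memory stand-in for a database row.
class Row {
    BigDecimal purchasePrice;
    int recordVersionNumber;
    Row(BigDecimal p, int v) { purchasePrice = p; recordVersionNumber = v; }
}

public class LockingDemo {
    // Version-based locking: commit is allowed if the version column
    // still matches the version we read.
    static boolean versionLockAllows(Row db, int cachedVersion) {
        return db.recordVersionNumber == cachedVersion;
    }

    // Value-based locking (the Cayenne style): commit is allowed if the
    // cached original attribute value still matches the database.
    static boolean valueLockAllows(Row db, BigDecimal cachedPrice) {
        return db.purchasePrice.equals(cachedPrice);
    }

    public static void main(String[] args) {
        Row db = new Row(new BigDecimal("10.00"), 1);
        BigDecimal cachedPrice = db.purchasePrice;   // snapshot at read time
        int cachedVersion = db.recordVersionNumber;

        // External tool edits the row but forgets the version column.
        db.purchasePrice = new BigDecimal("99.00");

        // Version check still passes, silently overwriting the edit;
        // the value check catches the conflict.
        System.out.println("version lock detects conflict: "
                + !versionLockAllows(db, cachedVersion));   // false
        System.out.println("value lock detects conflict:   "
                + !valueLockAllows(db, cachedPrice));       // true
    }
}
```

    The point is exactly the one above: the version check only protects you
    if every writer remembers to increment the version.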

    In your "flush to database" comment, that's where you would be doing a
    dataContext.commitChanges(). This starts a transaction, flushes changes
    to the database, and ends the transaction. At this point, assuming it
    succeeded, the dataContext is in sync with the database. Rolling back
    from here shouldn't really do anything (you are back as far as you
    can go). With nested data contexts (not sure how close this is to being
    functional), you'll be able to commit changes in a child data context to
    a parent data context, which will still allow you to rollback the parent
    to the pre-commit of the child changes (I think -- Mike/Andrus correct
    me if I am wrong there).
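    Purely as an illustration of those semantics (plain Java maps standing
    in for the contexts and the database; none of this is Cayenne's actual
    API, and nested contexts may not work yet, as noted above), the nested
    arrangement would behave something like:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a child context commits its changes up to a parent
// context, and the parent can still roll everything back because nothing
// has reached the "database" yet.
class Context {
    final Map<String, String> committed = new HashMap<>(); // the "database"
    final Map<String, String> pending = new HashMap<>();   // uncommitted edits

    void set(String key, String value) { pending.put(key, value); }

    // Commit pending edits one level up (child -> parent), or into this
    // context's committed map if there is no parent (standing in for the
    // real database commit).
    void commitTo(Context parent) {
        if (parent != null) parent.pending.putAll(pending);
        else committed.putAll(pending);
        pending.clear();
    }

    void rollback() { pending.clear(); }
}

public class NestedContextDemo {
    public static void main(String[] args) {
        Context parent = new Context();
        Context child = new Context();

        child.set("purchasePrice", "10.00");
        child.commitTo(parent);   // flushed to the parent, not the database

        parent.rollback();        // discards the child's committed changes too
        System.out.println(parent.pending.isEmpty());   // true: back to the start
        System.out.println(parent.committed.isEmpty()); // true: DB untouched
    }
}
```

    That is the property Gili is after below: a commit boundary that flushes
    work upward without making it permanent.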

    There's not a lot here, but perhaps it would help a bit?

    http://www.objectstyle.org/confluence/display/CAY/Optimistic+Locking+Explained

    Caching the original database value is pretty important to how this
    works. Yes, it takes more memory, but it is vastly safer.

    /dev/mrg

    -----Original Message-----
    From: Gili [mailto:cowwo..bs.darktech.org]
    Sent: Monday, August 29, 2005 2:32 PM
    To: cayenne-use..bjectstyle.org
    Subject: Why don't we use version-based optimistic locking?

    Hi,

            Just curious why we chose to implement optimistic locking like
    we did.
    The reason I ask is that I want to be able to:

    add 1000 objects
    flush to database
    add 1000 objects
    ...
    many objects later...
    dataContext.commit()

            Now, I should be able to dataContext.rollback() at any time, and
    this should undo all changes all the way back to the beginning of the
    context. I've been talking to Mike on IRC and he says that, to his
    knowledge, it is unlikely we can implement the above behavior, because
    right now optimistic locking caches the original attribute value so that
    at commit time we can compare it to the DB version and throw an
    exception if optimistic locking failed. This incurs heavy memory usage.

            Now, if we were only remembering a version/timestamp per row, it
    would
    be much easier to implement this. I ask because Hibernate can already
    support this behavior using this code:

    // execute 1000 times
    session.saveOrUpdate(object);
    ...
    session.flush();
    session.clear();
    ...
    // many objects later
    ...
    A session.commit() or session.rollback() here will still go all the way
    back past the session.flush()/clear() calls.

            I am sorry for all these questions but I am rather new to all of
    this :)

    Thank you,
    Gili

    -- 
    http://www.desktopbeautifier.com/
    



    This archive was generated by hypermail 2.0.0 : Mon Aug 29 2005 - 15:43:00 EDT