RE: Why don't we use version-based optimistic locking?

From: Gentry, Michael (Contractor)
Date: Mon Aug 29 2005 - 16:42:23 EDT


    You should almost never optimistically lock on a BLOB (I mention this in
    the link I posted). It is far too expensive. BLOBs are usually kept in
    a separate table by design, too, so that you can fetch the meta/BLOB
    information without loading the entire BLOB (easy enough to fault it in
    if you need it). This is faster for summary type pages so you can
    display basic information about something without incurring the cost of
    loading the BLOB until they hit a detail page. Also, you can edit data
    on the summary page without having to stream the BLOB back to the
    database.
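
    Something along these lines (a rough sketch only -- ImageInfo and
    ImageBlob are made-up names for two generated DataObjects related
    one-to-one, with the BLOB column living on ImageBlob; substitute
    whatever your model actually generates):

    import java.util.List;
    import org.objectstyle.cayenne.access.DataContext;
    import org.objectstyle.cayenne.query.SelectQuery;

    DataContext ctx = DataContext.createDataContext();

    // Summary page: fetch only the metadata rows -- no BLOB is read here.
    List infos = ctx.performQuery(new SelectQuery(ImageInfo.class));

    ImageInfo info = (ImageInfo) infos.get(0);
    info.setName("New title");   // edit metadata without touching the BLOB
    ctx.commitChanges();

    // Detail page: accessing the relationship faults the BLOB in on demand.
    byte[] data = info.getImageBlob().getData();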

    If you do need a comparison check on a BLOB, compute a checksum (MD5 or
    whatever) of the BLOB data whenever you change it. The checksum will be
    small and you can optimistically lock on it instead of the BLOB. This
    approach should scale fine and should be sufficient to catch optimistic
    locking issues.
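
    For example (plain java.security, nothing Cayenne-specific; the
    setData()/setChecksum() setters are just whatever your generated image
    class happens to expose):

    import java.security.MessageDigest;

    public static String md5Hex(byte[] blobData) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(blobData);
        StringBuffer hex = new StringBuffer();
        for (int i = 0; i < digest.length; i++) {
            String b = Integer.toHexString(digest[i] & 0xff);
            if (b.length() == 1) {
                hex.append('0');
            }
            hex.append(b);
        }
        return hex.toString();
    }

    // Whenever the BLOB changes, store the new checksum in a small VARCHAR
    // column and mark that column (not the BLOB) for optimistic locking:
    image.setData(newBytes);
    image.setChecksum(md5Hex(newBytes));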

    I'm definitely more sensitive to "unsafe" operations because most of
    what I deal with involves money/account information/etc. The integrity
    of the data is much more important than shaving a few milliseconds off
    an update statement. And yes, you do have to be careful using
    SQLTemplate. As they say, with great power comes great responsibility.

    /dev/mrg

    -----Original Message-----
    From: Gili [mailto:cowwo..bs.darktech.org]
    Sent: Monday, August 29, 2005 3:57 PM
    To: cayenne-use..bjectstyle.org
    Subject: Re: Why don't we use version-based optimistic locking?

            I have a table "image" in my database. One of the columns is a
    blob for
    containing the image data (500k to 2MB). Using the current approach, not

    only will memory usage be extremely high but also commiting will be
    extremely slow because we'll have to now compare the value of the blob.
    I don't think adding streaming blobs will help here either because the
    current optimistic locking mechanism requires us to read and compare the

    full contents of the field anyway.

            Yes, I see your point regarding the danger if the table is
    modified using an external tool, but I guess the assumption is that the
    performance benefits of version-based optimistic locking far outweigh
    the potential safety issues. You just have to make sure to use your ORM
    for all your transactions, or, if you decide to use "unsafe" operations
    (SQLTemplate or other external methods), you must be aware of the
    potential consequences.

            It is likely we're coming at this from different requirements,
    though. I'm really concerned about scalability issues with Cayenne
    because I plan on dealing with massive amounts of images streamed from
    my DB, while your average webapp operations do not involve this amount
    of data.

    Gili

    Gentry, Michael (Contractor) wrote:
    > Well, I personally prefer the way Cayenne does optimistic locking. I
    > don't want to lock on a meaningless piece of data. Let's face it,
    > which data is more important: the user-entered purchasePrice or a
    > somewhat arbitrary recordVersionNumber? It is far too easy to update
    > a record (in a production support role, with an external database
    > utility, etc.) and forget to increment the version, which could have
    > bad consequences.
    >
    > In your "flush to database" comment, that's where you would be doing a
    > dataContext.commitChanges(). This starts a transaction, flushes
    changes
    > to the database, and ends the transaction. At this point, assuming it
    > succeeded, the dataContext is in sync with the database. Rolling back
    > from here shouldn't really doing anything (you are back as far as you
    > can go). With nested data contexts (not sure how close this is to
    being
    > functional), you'll be able to commit changes in a child data context
    to
    > a parent data context, which will still allow you to rollback the
    parent
    > to the pre-commit of the child changes (I think -- Mike/Andrus correct
    > me if I am wrong there).
    >
    > There's not a lot here, but perhaps it would help a bit?
    >
    >
    > http://www.objectstyle.org/confluence/display/CAY/Optimistic+Locking+Explained
    >
    > Caching the original database value is pretty important to how this
    > works. Yes, it takes more memory, but it is vastly safer.
    >
    > /dev/mrg
    >
    >
    > -----Original Message-----
    > From: Gili [mailto:cowwo..bs.darktech.org]
    > Sent: Monday, August 29, 2005 2:32 PM
    > To: cayenne-use..bjectstyle.org
    > Subject: Why don't we use version-based optimistic locking?
    >
    >
    > Hi,
    >
    > Just curious why we chose to implement optimistic locking like
    > we did.
    > The reason I ask is that I want to be able to:
    >
    > add 1000 objects
    > flush to database
    > add 1000 objects
    > ...
    > many objects later...
    > dataContext.commit()
    >
    > Now, I should be able to dataContext.rollback() at any time and this
    > should undo all changes all the way back to the beginning of the
    > context. I've been talking to Mike on IRC and he says that to his
    > knowledge it is unlikely we can implement the above behavior, because
    > right now optimistic locking caches the original attribute value so
    > that at commit time we can compare it to the DB version and throw an
    > exception if optimistic locking failed. This incurs heavy memory
    > usage.
    >
    > Now, if we were only remembering a version/timestamp per row, it would
    > be much easier to implement this. I ask because Hibernate can already
    > support this behavior using this code:
    >
    > // execute 1000 times
    > session.saveOrUpdate(object);
    > ...
    > session.flush();
    > session.clear();
    > ...
    > // many objects later
    > ...
    > session.commit() or session.rollback() will go all the way past the
    > session.flush()/clear() calls.
    >
    > I am sorry for all these questions but I am rather new to all of
    > this :)
    >
    > Thank you,
    > Gili

    -- 
    http://www.desktopbeautifier.com/
    


