Good point. So, in other words, one common scenario is mass changes to
tabular data.
Then we still need to preserve batching, and I guess we will have to
partition update batches further depending on the null positions in the
WHERE clause.
Andrus
On Mar 21, 2004, at 3:08 AM, Andriy Shapochka wrote:
>>
>> My take on it is that while INSERT and DELETE batches are quite
>> useful, UPDATE batches are very small in most cases anyway. Can anyone
>> think of a common scenario where there is a need to update a large
>> number of objects at the same time (and with optimistic locking) that
>> fit into the same query batch template?
>>
>> Andrus
>
> Imagine a spreadsheet app with a table widget (JTable) where a row
> corresponds to a data object (say, a sort of ketchup) and a column
> corresponds to one of its properties. Auto-commit is disabled. You want
> to edit the prices of some sorts of ketchup in stock, and then commit
> the changes to the database as a bulk update (and some people like to
> work half a day without saving their results, indiscreet as that is, so
> you will have thousands of data rows liable to get updated). One more
> example: there is a 10% increase in the prices of all the sorts of
> ketchup, and the app must adjust them accordingly and update the table
> in the database with one commit. If one desires to protect the
> tomato-derived goods with optimistic locks per item (they are rather
> independent of each other after all, those ketchups), the case
> immediately falls into the category of batch updates with optimistic
> locking. Similar scenarios are not uncommon, I believe. What do you
> think?
>
> Andriy.
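The price-increase scenario quoted above maps naturally onto a single per-row UPDATE template batched over all items, with the version column in the WHERE clause for the optimistic lock. A rough sketch follows; the table and column names are hypothetical, and the conflict check is factored out as a pure method (executeBatch() returns one update count per row, where 0 means the version check failed):

```java
import java.util.*;

// Sketch of a batched optimistic-lock UPDATE for the "10% price increase"
// scenario. Table/column names (ketchup, price, version) are hypothetical.
public class OptimisticBatch {

    // Every batch entry shares this template; only the bound parameters
    // (id, expected version) differ per row.
    static final String SQL =
        "UPDATE ketchup SET price = price * 1.10, version = version + 1 "
        + "WHERE id = ? AND version = ?";

    // Given the update counts from Statement.executeBatch(), return the
    // indices of rows whose optimistic lock failed (count == 0 means a
    // concurrent modification bumped the version first).
    static List<Integer> conflictedRows(int[] updateCounts) {
        List<Integer> conflicts = new ArrayList<>();
        for (int i = 0; i < updateCounts.length; i++) {
            if (updateCounts[i] == 0) {
                conflicts.add(i);
            }
        }
        return conflicts;
    }

    public static void main(String[] args) {
        // Simulated executeBatch() result: rows 0 and 2 updated cleanly,
        // row 1 lost the race to a concurrent writer.
        int[] counts = {1, 0, 1};
        System.out.println(conflictedRows(counts)); // [1]
    }
}
```

In a real JDBC flow one would prepare SQL once, call addBatch() per row with the id and the last-read version bound, then inspect the counts (or a BatchUpdateException) with something like conflictedRows() to decide which objects to report as stale.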
This archive was generated by hypermail 2.0.0 : Sun Mar 21 2004 - 12:33:02 EST