Hi Arndt,
Glad to hear from you again. :-)
On Tuesday, September 16, 2003, at 12:51 PM, Arndt Brenschede wrote:
> - the return-value(s) of the execute(Batch) methods
> are always "-2", which is still JDBC compliant
> and which is documented by Oracle
>
> - the alternative "getUpdateCount()" works mostly.
Just did a quick JDBC app to test this out. Everything works exactly
like you said on Oracle. On Sybase and MySQL, though, executeBatch
returns correct counts (though there is a question of whether Sybase
and MySQL batching gives any performance benefit, or is simply a cover
for individual "executeUpdate" queries).
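For reference, the test I ran was roughly along these lines (connection details, table, and column names are made up, so treat this as a sketch rather than the exact code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BatchCountTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection info; swap in the driver/URL being tested.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:orcl", "user", "password");
        con.setAutoCommit(false);

        PreparedStatement st = con.prepareStatement(
                "UPDATE ARTIST SET NAME = ? WHERE ARTIST_ID = ?");
        st.setString(1, "Dali");
        st.setInt(2, 1);
        st.addBatch();
        st.setString(1, "Monet");
        st.setInt(2, 2);
        st.addBatch();

        int[] counts = st.executeBatch();
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] == Statement.SUCCESS_NO_INFO) {
                // -2: the statement succeeded, but the driver reports no
                // update count (the Oracle behavior Arndt describes).
                System.out.println(i + ": SUCCESS_NO_INFO (-2)");
            }
            else {
                // Sybase and MySQL returned real per-statement counts here.
                System.out.println(i + ": " + counts[i] + " row(s)");
            }
        }

        con.rollback();
        st.close();
        con.close();
    }
}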
>> Can the framework start off by doing it the "right" way (using the
>> JDBC update count), and we go from there? We can document (when
>> known) what drivers DON'T work and under what conditions, then let
>> those most interested either get the driver itself fixed or fix the
>> driver plugin (or maybe even disable batch updates by plugin)?
In light of what Arndt said and my little test, we can indeed
implement optimistic locking the way we planned. We just need to
mention in the documentation that batch updates must be off for
Oracle. This is an easy setting at the adapter level, and DataNode
already supports both modes of operation (see
DataNode.runBatchUpdateAsBatch() and
DataNode.runBatchUpdateAsIndividualQueries()). Batching could even be
a setting on the DataNode that can be set from the Modeler. Oracle
users who need every last bit of performance may turn on batch updates
(thus disabling optimistic locking) and vice versa.
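Just to spell out why the individual-query mode is enough for optimistic locking, here is a rough sketch of the idea; the table, columns, and helper method are purely illustrative and not actual Cayenne code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {

    // Updates one row, qualifying the WHERE clause with the old snapshot
    // value so a concurrent modification makes the UPDATE match zero rows.
    static void updateArtist(Connection con, int id,
                             String oldName, String newName) throws SQLException {
        PreparedStatement st = con.prepareStatement(
                "UPDATE ARTIST SET NAME = ? WHERE ARTIST_ID = ? AND NAME = ?");
        try {
            st.setString(1, newName);
            st.setInt(2, id);
            st.setString(3, oldName);

            // With individual queries the driver returns a real row count,
            // so zero reliably signals an optimistic lock failure.
            int count = st.executeUpdate();
            if (count == 0) {
                throw new SQLException("Optimistic lock failure: ARTIST_ID="
                        + id + " was modified concurrently.");
            }
        }
        finally {
            st.close();
        }
    }
}

With executeBatch on Oracle we would only get -2 back for each statement, which is why batching has to be off for this check to work there.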
Andrus