Hallo Holger,
thanks for the quick response!
Holger Hoffstätte <holge..izards.de> wrote:
> Do you use server-side stored procedures, or is this mostly
> a manually written loading/processing/saving process? Which
> database(s) do you use? I ask because initial support for stored
> procedures has been added to CVS since the last release.
It's primarily Oracle, but with database independence as a
required feature (and DB2 in mind). So we consider the
stored-proc option only a fallback, in case we cannot get
a certain process fast enough without it.
>> - support for JDBC-batch updates during
>> "commitChanges"
>
> Your wish shall be granted :-)
> Batching is in CVS already and should be automagically used in the next
> version, if your JDBC driver properly supports it. It is already used with
> Oracle for a new regression test application that will be part of the next
> release as well, together with a new commitChanges that makes use of a fk
> constraint resolving framework called ashwood (see the objectstyle web
> pages). All this basically works but is just not fully integrated yet.
Cool. I can imagine that a commit engine that respects fk constraints
is already quite a headache, and integrating JDBC batches adds another
constraint on the sequencing of the queries. So doing that in the most
general way can be tough (though in the application we are going to
build, the dependencies are mostly simple). I am curious to see it
working.
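Just to make sure we mean the same thing, here is the plain-JDBC
batching pattern I have in mind - a minimal sketch only; the ACCOUNT
table, its columns and the Account class are made-up names, and it
assumes a JDBC 2.0 driver with real batch support:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Sketch: queue all updates locally, then send them in one go.
    void batchUpdateBalances(Connection con, Account[] accounts)
            throws SQLException {
        PreparedStatement ps = con.prepareStatement(
            "UPDATE ACCOUNT SET BALANCE = ? WHERE ACCOUNT_ID = ?");
        try {
            for (int i = 0; i < accounts.length; i++) {
                ps.setBigDecimal(1, accounts[i].getBalance());
                ps.setInt(2, accounts[i].getId());
                ps.addBatch();        // queued locally, no round-trip yet
            }
            ps.executeBatch();        // one round-trip for the whole lot
        }
        finally {
            ps.close();
        }
    }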
>> - re-use of Prepared-Statements
...
> I just had a quick look and QueryAssembler.createStatement() looks like
> the place to check for cached PreparedStatements; can't say offhand how
> much the query string creation could be optimized away, probably
> completely.
The query string creation on the Java side is unlikely to be a
performance hit - it is the additional round-trip and the query
compilation on the db server that are costly. So simply replacing
Connection.prepareStatement() with a caching wrapper that uses
the query string itself as a key into a HashMap should do.
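Roughly like this (a minimal sketch; a real wrapper would also have
to keep client code from closing cached statements and flush the
cache when the underlying connection closes):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch: the SQL string is the cache key, so the same
    // statement text is only ever compiled once per connection.
    public class StatementCache {
        private final Connection con;
        private final Map cache = new HashMap(); // sql -> PreparedStatement

        public StatementCache(Connection con) {
            this.con = con;
        }

        public synchronized PreparedStatement prepare(String sql)
                throws SQLException {
            PreparedStatement ps = (PreparedStatement) cache.get(sql);
            if (ps == null) {
                ps = con.prepareStatement(sql);
                cache.put(sql, ps);
            }
            return ps;
        }
    }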
>> - more detailed control of what happens
>> to the identity map (Object-Store) in
>> "commitChanges"
>> The behaviour we need is to fully
>> release the volume data (e.g. accounts),
>> thus allowing them to be garbage-collected,
>> while keeping the master data
>> (e.g. fund properties) linked to the
>> identity map.
>> (would require something like nested
>> DataContexts - or TopLink's "unitOfWork")
> Currently DataContexts are isolated from each other and don't expire
> objects automagically e.g. after a certain time of inactivity, and I think
> that's very unlikely to change until 1.0 - simply because it is very
> difficult to get right (think threading, e.g. in a servlet engine).
Timeouts would be too dangerous, as would any other
"non-deterministic" mechanism (think of Reference objects that
expire when memory runs low...).
The batch process typically works in a loop that calls
"commitChanges" every 100 accounts/records/customers
or the like. A simple solution would be the possibility
to unregister all objects that were newly registered
during such a 100-record cycle (that would be sufficient
for our processes).
But even then, "commitChanges" would also upload the changes
to objects that are not part of this "100-record unit of work",
which may not be what's intended.
So a more general solution would be a "sub-DataContext"
that can be committed separately (to upload its changes)
and then closed (to unregister the newly created objects).
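In (pseudo-)Java, the batch loop would then look something like the
sketch below. Note that commitChanges() exists in Cayenne today,
while createChildContext() and close() are purely the API we are
wishing for, and loadAccount()/process() are assumed application
helpers:

    import java.util.Iterator;

    // Hypothetical sketch: each 100-record unit of work is committed
    // and then dropped, while objects registered with the parent
    // context (the master data) stay around.
    void processAccounts(DataContext parent, Iterator accountIds) {
        while (accountIds.hasNext()) {
            DataContext child = parent.createChildContext(); // hypothetical
            for (int i = 0; i < 100 && accountIds.hasNext(); i++) {
                // objects fetched here get registered in 'child' only
                Account a = loadAccount(child, accountIds.next());
                process(a);
            }
            child.commitChanges(); // uploads only this unit's changes
            child.close();         // hypothetical: unregisters the 100
                                   // accounts so they can be GC'ed
        }
    }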
with best regards,
Arndt