Hi,
I just had a deeper look at Cayenne, trying to find out whether we could use it for our project (a large back-office system). I got through the examples quickly, and it was really nice to see that everything worked as expected (alpha-6). So, for the interactive part of our system I'm sure Cayenne would do the job.
For the offline processing, however, we have high volumes (millions of records) and tight performance constraints, so we cannot deviate much from plain JDBC's performance. The features we would need for that (which I believe are not implemented, or at least I didn't find them) are:
- support for JDBC batch updates during "commitChanges"
- re-use of PreparedStatements
- more detailed control over what happens to the identity map (ObjectStore) in "commitChanges"
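To make the first two points concrete, here is a rough sketch of the plain-JDBC pattern we'd like "commitChanges" to approach. The `BatchWriter` class, table, and column names are made up for illustration, not Cayenne code: rows are accumulated per chunk and flushed through a single reused PreparedStatement via addBatch/executeBatch.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only -- BatchWriter is NOT a Cayenne class. It shows
// the pattern we'd want commitChanges to use internally: accumulate rows
// and flush each chunk through one reused PreparedStatement.
public class BatchWriter {
    private final int batchSize;
    private final List<Object[]> pending = new ArrayList<>();
    private int flushCount = 0; // number of executeBatch round-trips

    public BatchWriter(int batchSize) {
        this.batchSize = batchSize;
    }

    public void add(Object[] row) {
        pending.add(row);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    // Stand-in for the real flush; counts round-trips so the effect is visible.
    public void flush() {
        if (pending.isEmpty()) {
            return;
        }
        pending.clear();
        flushCount++;
    }

    public int getFlushCount() {
        return flushCount;
    }

    // What a real flush maps to in plain JDBC: the statement is prepared
    // once and reused for every row, and executeBatch sends the whole
    // chunk in one round-trip. (Table and column names are hypothetical.)
    static void writeBatch(Connection con, List<Object[]> rows) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO account (id, name) VALUES (?, ?)")) {
            for (Object[] row : rows) {
                ps.setObject(1, row[0]);
                ps.setObject(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }

    public static void main(String[] args) {
        BatchWriter w = new BatchWriter(100);
        for (int i = 0; i < 250; i++) {
            w.add(new Object[] { i, "account-" + i });
        }
        w.flush(); // flush the final partial chunk
        System.out.println(w.getFlushCount() + " round-trips for 250 rows");
    }
}
```

With a batch size of 100, inserting 250 rows costs 3 round-trips instead of 250, which is where most of the plain-JDBC speed comes from.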
The behaviour we need is to fully release the volume data (e.g. accounts), thus allowing them to be garbage-collected, while keeping the master data (e.g. fund properties) linked to the identity map. (This would require something like nested DataContexts, or TopLink's "unitOfWork".)
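As an illustration of the behaviour I mean (hypothetical classes, not Cayenne or TopLink API): master data stays in a long-lived cache, while each chunk of volume data goes through a short-lived per-chunk map that is discarded after commit, so the accounts become garbage-collectable.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, NOT Cayenne API: a nested-context-like split where
// master data lives in a long-lived cache and volume data lives in a
// per-chunk map that is dropped after commit.
public class ChunkedProcessor {
    // long-lived identity map for master data (e.g. fund properties)
    private final Map<String, Object> masterCache = new HashMap<>();

    public Object master(String key) {
        // fetch once, then cache; the placeholder value stands in for a real row
        return masterCache.computeIfAbsent(key, k -> "fund-props:" + k);
    }

    // Processes one chunk of volume data. The per-chunk map plays the role
    // of a child DataContext: everything registered in it becomes
    // garbage-collectable as soon as the method returns.
    public int processChunk(int chunkSize) {
        Map<Integer, Object[]> chunkMap = new HashMap<>();
        for (int i = 0; i < chunkSize; i++) {
            // each account links to shared master data but is only held here
            chunkMap.put(i, new Object[] { i, master("FUND-A") });
        }
        // a real commitChanges would flush chunkMap to the database here;
        // afterwards we deliberately do NOT register the accounts anywhere
        return chunkMap.size();
    }

    public int masterCacheSize() {
        return masterCache.size();
    }

    public static void main(String[] args) {
        ChunkedProcessor p = new ChunkedProcessor();
        int total = 0;
        for (int chunk = 0; chunk < 5; chunk++) {
            total += p.processChunk(1000);
        }
        System.out.println(total + " accounts processed, "
                + p.masterCacheSize() + " master entry retained");
    }
}
```

The point is that memory usage stays bounded by the chunk size plus the master data, no matter how many millions of accounts flow through.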
Could one of the gurus tell me whether there are plans in that direction, or whether I just missed something?
Thanks in advance,
Arndt
PS: I also evaluated TopLink, but it failed our tests: it does support batch writing, but the implementation is broken, and its approach of comparing data snapshots (to find out what changed) when committing a unitOfWork turned out to be a CPU-time grave :-(
This archive was generated by hypermail 2.0.0 : Sun Feb 16 2003 - 04:56:29 EST