On 2/2/06, Andrus Adamchik <andru..bjectstyle.org> wrote:
> A fresh confirmation that we do need to run a regular benchmark:
I think we have to be careful when we use the word benchmark... To me
(and, I suspect, to outside observers and FUD spreaders), a benchmark
implies a performance claim against another product... in other words,
you use a consistent baseline that someone with Hibernate could
conceivably replicate, showing that one is faster, etc.
While I have no doubt that 1.1 would crush Hibernate when properly
used (and that 1.2 will when it's final), I don't think the goal here
is even to imply that we are faster or slower than some other ORM
tool. That's a totally separate issue.
So, for straight performance testing, I tend to agree that it is not
nearly as important for optimization as it is for regression, and that
it should test Cayenne in isolation, without any external noise. I
don't see anything wrong with using the JUnit execution times, though
I believe those include setUp()/tearDown() time, which may skew
certain tests.
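
A contrived JUnit 3 illustration of that skew (hypothetical, not from
the Cayenne suite): the time JUnit reports for this test is dominated
by the fixture, not by the code under test.

import junit.framework.TestCase;

public class SkewedTest extends TestCase {

    protected void setUp() throws Exception {
        Thread.sleep(500); // pretend this is expensive fixture setup
    }

    public void testFastOperation() {
        int sum = 0;
        for (int i = 0; i < 1000; i++) {
            sum += i;
        }
        // the actual work takes microseconds, yet the reported
        // time for the test is over half a second
        assertEquals(499500, sum);
    }
}
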
I've always done this sort of testing with custom harnesses just so I
wasn't dealing with test framework overhead... and for what it's
worth, that overhead can be relevant. I once spent weeks chasing down
a memory leak I found while profiling a JUnit test, only to realize it
was JUnit itself causing the leak.
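
For concreteness, here's the general shape of harness I mean. This is
just a sketch; the workload is a stand-in, and you'd substitute
whatever Cayenne operation you actually want to track:

public class TimingHarness {

    // stand-in workload; replace with the operation under test
    static long doWork() {
        long sum = 0;
        for (int i = 0; i < 100000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;

        // warmup pass: let the JIT compile the hot path before measuring
        for (int i = 0; i < 1000; i++) {
            sink += doWork();
        }

        // the timed region contains only the work being measured
        int runs = 100;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            sink += doWork();
        }
        long elapsed = System.nanoTime() - start;

        System.out.println("avg per run: " + (elapsed / runs / 1000000.0) + " ms");
        System.out.println("(sink=" + sink + ")"); // keeps the JIT from discarding the work
    }
}

The point is simply that nothing but the measured work sits inside the
timed region, and the warmup loop lets the JIT settle before any
numbers are recorded.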
Cris