I was only really interested in benchmarking Cayenne 1.2 against Cayenne
1.1 or Cayenne 1.0. Obviously, some features are new and will need to
establish baselines of their own.
I personally don't care about how it compares, speed-wise, to Hibernate
or any other ORM. OK, maybe I do just a tad, but not really. I'm not
16 years old anymore, enthralled with how many cycles the processor
takes to execute a branch-not-equal instruction. As long as Cayenne
performs well for the features it has, I'm pretty happy.
Linux has (had? I haven't kept up with Linux since I went back to
NeXTstep) the BogoMIPS. Maybe we need something like a CayenneMark.
Time to insert 1000 objects using Cayenne 1.0 equals 1 CayenneMark.
Etc.
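Something like this, maybe (a very rough sketch from memory of the 1.x
DataContext API; MyEntity is a made-up mapped class, and setName() is
just for illustration):

    import org.objectstyle.cayenne.access.DataContext;

    // Rough CayenneMark sketch: time 1000 inserts through a DataContext.
    // MyEntity stands in for any CayenneDataObject subclass in the model.
    public class CayenneMark {
        public static void main(String[] args) {
            DataContext context = DataContext.createDataContext();

            long start = System.currentTimeMillis();
            for (int i = 0; i < 1000; i++) {
                MyEntity e = (MyEntity) context
                        .createAndRegisterNewObject(MyEntity.class);
                e.setName("object-" + i);
            }
            context.commitChanges();
            long elapsed = System.currentTimeMillis() - start;

            // 1 CayenneMark == whatever Cayenne 1.0 clocks for this loop;
            // everything else gets reported as a ratio against that.
            System.out.println("1000 inserts: " + elapsed + " ms");
        }
    }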
I'd also argue we should run the tests against HSQL. It's simpler, it
can do it all in memory (taking disk latency out of the picture), etc.
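Getting a completely in-memory database is one connect call (sketch;
"cayennemark" is just a made-up database name, the ARTIST table is only
an example, and sa with an empty password is HSQLDB's default account):

    import java.sql.Connection;
    import java.sql.DriverManager;

    // Open an in-memory HSQLDB database -- nothing ever touches the
    // disk, and the database evaporates when the JVM exits.
    public class InMemoryDb {
        public static void main(String[] args) throws Exception {
            Class.forName("org.hsqldb.jdbcDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:cayennemark", "sa", "");
            con.createStatement().execute(
                    "CREATE TABLE ARTIST (ID INTEGER PRIMARY KEY, "
                            + "NAME VARCHAR(100))");
            con.close();
        }
    }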
I'm rambling. Maybe I'll have more time to be coherent later. :-)
/dev/mrg
-----Original Message-----
From: Andrus Adamchik [mailto:andru..bjectstyle.org]
Sent: Thursday, February 02, 2006 2:01 PM
To: cayenne-deve..bjectstyle.org
Subject: Re: Cayenne performance testing
I agree that the word "benchmark" is wrong and opens us up to all kinds
of FUD. So what do we call it then? And should we make it closed source?
(suggesting this only half jokingly)
Andrus
On Feb 2, 2006, at 1:43 PM, Cris Daniluk wrote:
> On 2/2/06, Andrus Adamchik <andru..bjectstyle.org> wrote:
>> A fresh confirmation that we do need to run a regular benchmark:
>
> I think we have to be careful when we use the word benchmark... To me
> (and, I suspect, to outside observers/FUD spreaders), a benchmark
> implies a performance claim against another product... in other words,
> you use a consistent baseline that someone with Hibernate could
> conceivably replicate, showing that one is faster, etc.
>
> While I have no doubt that 1.1 would crush Hibernate when properly
> used (and that 1.2 will when it's final), I don't think the goal of
> this is even to imply that we are faster or slower than some other
> ORM tool. That's a totally separate issue.
>
> So, for straight performance testing, I tend to agree with the idea
> that it is not nearly as important for optimization as it is for
> catching regressions, and that it should test purely Cayenne, without
> any external noise. I don't see anything wrong with using the JUnit
> execution times, though I think those include startup/teardown time,
> which may skew certain tests. I've always done this sort of testing
> with custom harnesses just so I wasn't dealing with test framework
> overhead... and for what it's worth, that overhead can be relevant. I
> once spent weeks chasing down a memory leak I found while profiling a
> JUnit test, only to realize it was JUnit itself causing the leak.
>
> Cris
>