Howdy, all.
I was running JProfiler on an app of mine, and ran into memory
problems. Here is what I found out with an hour's profile. If you
guys would find it useful for optimization purposes, I would be glad to
take a few more profiles. (I have a workaround for my app; consider
this some info to help Cayenne handle larger numbers of uncommitted
objects.)
My app creates roughly 45,000 StarSystem objects (each with three
persisted floats, a persisted varchar(100) name, and a UID) as well as
150 Player objects. It never gets to the commit phase, because it runs
out of RAM. Getting that far took a good ten minutes, which leads me to
expect 20+ minute create times for all 45,000 objects.
I thus ran a profile at 18,996 objects created to see what it was
doing. Below is a chart of how many instances of each class it had
allocated, and how much RAM they took up.
Class            Alloc      RAM (bytes)
char[]           49,192     7,251,488
HashMap$Entry   190,586     4,574,064
Class[]          31,889     2,830,008
String           91,722     2,201,328
ToManyList       76,136     1,827,264
Float            62,412       998,592
HashMap          19,154       766,160
TempObjectId     19,147       459,904
StarSystem       18,996       455,904
[...]
Player              150         3,600
StarSystem and Player are the only persistable objects created.
In the time profile, 83.1% of the time went to 18,853 invocations of
org.objectstyle.cayenne.access.DataContext.createAndRegisterNewObject.
Of that, only 0.3% was spent actually allocating the 18,853 StarSystem
objects, so the CPU time is really going into
createAndRegisterNewObject itself.
My experience has been that creating, and perhaps deleting, megabytes
of objects often causes time problems, if nothing else because of how
the garbage collector behaves when it gets pummeled.
I have not been able to get finer detail on where the allocations and
such are occurring, as I believe the standard Cayenne jar is compiled
without debug info. If this sounds like it might give the developers
some optimization ideas, I would be glad to carry this further.
(The workaround of committing every hundred rows worked well, and made
the total create-and-commit time drop to 3:30, i.e. 210s. Committing
every thousand took 136s, and every five thousand took 142s, so I
suspect there is a sweet spot, somewhere around three thousand objects,
where commit overhead is just barely exceeded by object-creation time.
I suspect we can raise that, with a bit of care, but I am not familiar
enough with the code to do much about it myself. Thus - does this help
those who _do_ know what it is doing deep inside?)
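For anyone curious, the workaround above amounts to flushing the context every N objects so uncommitted state never piles up. Here is a minimal runnable sketch of that batching loop; ObjectContext is a hypothetical stand-in for Cayenne's DataContext (the real createAndRegisterNewObject/commitChanges calls are stubbed out so the loop runs standalone), and the class and method names are my own for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedCreate {

    /** Minimal stand-in for org.objectstyle.cayenne.access.DataContext. */
    static class ObjectContext {
        final List<Object> uncommitted = new ArrayList<>();
        int commits = 0;

        Object createAndRegisterNewObject() {
            Object o = new Object();      // real code: register a new StarSystem
            uncommitted.add(o);
            return o;
        }

        void commitChanges() {
            uncommitted.clear();          // real code: flush INSERTs to the DB
            commits++;
        }
    }

    /** Create 'total' objects, committing every 'batchSize' to bound memory. */
    static ObjectContext createInBatches(int total, int batchSize) {
        ObjectContext ctx = new ObjectContext();
        for (int i = 0; i < total; i++) {
            ctx.createAndRegisterNewObject();
            if ((i + 1) % batchSize == 0) {
                ctx.commitChanges();      // the sweet-spot knob discussed above
            }
        }
        if (!ctx.uncommitted.isEmpty()) {
            ctx.commitChanges();          // flush the final partial batch
        }
        return ctx;
    }

    public static void main(String[] args) {
        ObjectContext ctx = createInBatches(45_000, 1_000);
        System.out.println(ctx.commits);            // 45 full batches
        System.out.println(ctx.uncommitted.size()); // 0 left uncommitted
    }
}
```

With batchSize at the numbers I tried (100, 1,000, 5,000) the only thing that changes is how many commitChanges calls you pay for versus how much uncommitted state the context has to drag around between them.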
Scott
This archive was generated by hypermail 2.0.0 : Wed Jan 28 2004 - 20:49:08 EST