Or actually, the client can be a problem as well. Each ClientChannel
that you create would start an EventManager, so you may want to call
ClientChannel.getEventManager().shutdown() before a ClientChannel goes
out of scope.
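
For illustration, here is a minimal sketch of that cleanup, assuming the
3.0 ROP client API (HessianConnection / ClientChannel; the class and
variable names below are only placeholders for however your client
builds its connection):

    import org.apache.cayenne.remote.ClientChannel;
    import org.apache.cayenne.remote.ClientConnection;
    import org.apache.cayenne.remote.hessian.HessianConnection;

    public class RopClientSketch {

        // 'cwsUrl' stands in for your Cayenne Web Service endpoint URL
        static void doClientWork(String cwsUrl) {
            ClientConnection connection = new HessianConnection(cwsUrl);
            ClientChannel channel = new ClientChannel(connection);
            try {
                // ... run client-side queries and commits through an
                // ObjectContext bound to 'channel'
            }
            finally {
                // each ClientChannel starts its own EventManager; shutting
                // it down explicitly keeps its dispatch threads from piling
                // up when the channel goes out of scope
                channel.getEventManager().shutdown();
            }
        }
    }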
Andrus
On May 1, 2010, at 1:09 PM, Andrus Adamchik wrote:
> I don't think running things on the same server vs. separate servers
> is an issue. How is your business tier implemented, though? Is that an
> EJB or a web app? Something in its lifecycle makes Cayenne start
> multiple times, and this is what you need to figure out.
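>
> For a web app, one way to keep the server-side stack to a single
> startup - just a sketch, and the listener class name here is made up -
> is to bootstrap the shared Configuration once per deployment, e.g. from
> a ServletContextListener, rather than from per-request code:
>
>     import javax.servlet.ServletContextEvent;
>     import javax.servlet.ServletContextListener;
>
>     import org.apache.cayenne.conf.Configuration;
>
>     public class CayenneBootstrapListener implements ServletContextListener {
>
>         public void contextInitialized(ServletContextEvent e) {
>             // initializes the single shared Cayenne stack for this app;
>             // later calls reuse it instead of starting another one
>             Configuration.getSharedConfiguration();
>         }
>
>         public void contextDestroyed(ServletContextEvent e) {
>         }
>     }
>
> If each request, EJB call or redeploy ends up building its own
> Configuration (or its own ClientChannel), you get a new EventManager
> every time.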
>
> Andrus
>
> On Apr 30, 2010, at 11:14 AM, Victor Leung wrote:
>
>> Hi Andrus,
>>
>> Thank you for your prompt and insightful reply!
>>
>> Looking through the thread dumps as suggested, it definitely seems
>> like we are starting multiple Cayenne stacks / EventManagers -- on
>> point (1), the number after the last dash is rather low, and on point
>> (2), the lock addresses are pretty much all different.
>>
>> In our current test environment (where the problem is occurring), we
>> have deployed both the web tier and the business tier as separate EARs
>> on the same server. As mentioned earlier, communication between the
>> two tiers is through the use of Cayenne Web Service. Is this a
>> supported configuration, or do we need to deploy the two EARs on
>> separate servers?
>>
>> Thanks again,
>> Vic
>>
>>
>> On Fri, Apr 30, 2010 at 11:47 AM, Andrus Adamchik
>> <andru..bjectstyle.org> wrote:
>>
>>> Hi Vic,
>>>
>>> There are two possible explanations - EventManager is leaking threads
>>> (which seems rather unlikely), or you are starting multiple Cayenne
>>> stacks (or creating multiple EventManagers as a side effect of some
>>> other action).
>>>
>>> Two ways to detect this from a thread dump (there is also a small
>>> runtime check sketched after point 2):
>>>
>>> 1. Check the names of the cayenne-edt- threads, specifically the last
>>> part of the name - the number after the last dash. E.g. in
>>> "cayenne-edt-16165743-0", this number is "0". If this number is
>>> non-repeating and increments indefinitely, that may indicate a leak;
>>> however, if it is rather low (e.g. between 0 and 4) and repeats many
>>> times, multiple EventManagers are being started somehow.
>>>
>>> 2. Check the address of the lock of the event thread pool:
>>>
>>>>      at org.apache.cayenne.event.EventManager$DispatchThread.run(EventManager.java:476)
>>>>      - locked <0x96d28c40> (a java.util.Collections$SynchronizedList)
>>>>
>>>
>>> (it is "0x96d28c40" in the example above). If it is the same for all
>>> threads, then it is a leak. If it is different for most threads, then
>>> you have multiple EMs.
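>>>
>>> If it helps, here is a rough runtime check using nothing but plain
>>> JDK calls (the class and method names below are made up) that groups
>>> the live cayenne-edt threads by the EventManager id embedded in the
>>> thread name; you could call it from a debug servlet or a scheduled
>>> task inside the running server:
>>>
>>> import java.util.HashMap;
>>> import java.util.Map;
>>>
>>> public class CayenneThreadCheck {
>>>
>>>     // counts live "cayenne-edt-*" dispatch threads per EventManager
>>>     public static Map<String, Integer> countDispatchThreads() {
>>>         Map<String, Integer> perManager = new HashMap<String, Integer>();
>>>         for (Thread t : Thread.getAllStackTraces().keySet()) {
>>>             String name = t.getName();
>>>             if (!name.startsWith("cayenne-edt-")) {
>>>                 continue;
>>>             }
>>>             // names look like "cayenne-edt-16165743-0": the middle
>>>             // number identifies the EventManager, the last one is the
>>>             // thread's index within that manager's dispatch pool
>>>             String managerId = name.substring(0, name.lastIndexOf('-'));
>>>             Integer current = perManager.get(managerId);
>>>             perManager.put(managerId, current == null ? 1 : current + 1);
>>>         }
>>>         return perManager;
>>>     }
>>> }
>>>
>>> Many distinct keys in the resulting map point to multiple
>>> EventManagers; a single key whose count keeps growing points to a
>>> leak.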
>>>
>>> Andrus
>>>
>>>
>>> On Apr 30, 2010, at 1:07 AM, Victor Leung wrote:
>>>
>>> Hi all,
>>>>
>>>> We are using Cayenne 3.0RC3. Our application has a web tier and a
>>>> business / data access tier (both deployed on Glassfish v2.1.1).
>>>> Communication between the two tiers is through the use of Cayenne
>>>> Web Service.
>>>>
>>>> We have been encountering an OutOfMemoryException after a couple of
>>>> days of routine usage. Heap dumps show a large number of daemon
>>>> threads associated with EventManager. As an example, there are some
>>>> 13,000 entries in the heap dump similar to this:
>>>>
>>>> "cayenne-edt-16165743-0" daemon prio=10 tid=0x576d3800
>>>> nid=0x5b2a in Object.wait() [0x847ad000]
>>>> java.lang.Thread.State: TIMED_WAITING (on object monitor)
>>>> at java.lang.Object.wait(Native Method)
>>>> at
>>>>
>>>> org.apache.cayenne.event.EventManager
>>>> $DispatchThread.run(EventManager.java:476)
>>>> - locked <0x96d28c40> (a
>>>> java.util.Collections$SynchronizedList)
>>>>
>>>> There is no evidence of any deadlocked threads.
>>>>
>>>> We have since turned on monitoring on the JVM. It appears that the
>>>> number of daemon threads stays more-or-less constant whilst we are
>>>> just reading from the DB, but creeps up slowly over time whenever we
>>>> do any sort of DB updates.
>>>>
>>>> Any hints as to how we can get around this problem would be much
>>>> appreciated!
>>>>
>>>> Thanks in advance,
>>>> Vic
>>>>
>>>
>>>
>
>