I haven't tried this, but it looks like local-VM cache syncing may
actually work, even if you don't have a shared stack!
As DataRowStore dispatches events using the DataDomain name as the event
"subject", you can send the events to peer caches... I think the only
caveat is that if remote events are disabled for a DataDomain,
DataRowStore never registers to listen for events (even though it still
dispatches its own updates). So you can either
(1) manually add DataRowStore as a listener whenever a new DataContext
is created (there is code in "DataRowStore.processRemoteEvent" to ignore
certain events, so this is something to tweak ... also need to make sure
there is no ...)
... or (2) you can register your own EventBridgeFactory in the Modeler
and implement a "local" EventBridge that fakes remote processing
(re-injecting the same event into the EventManager from the bridge,
thus channeling updates to specific caches; DataRowStore ignores all
events that are not coming from its own EventBridge).
Not sure which option is simpler. The second seems more reliable.
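The re-injection idea behind option (2) can be sketched with toy stand-ins. Note these are NOT real Cayenne classes — ToyEventManager and ToyLocalBridge are invented here purely to illustrate the pattern of a bridge that, instead of shipping an event over a remote transport, immediately posts it back into the same event manager under the "remote" subject so that peer caches listening on that subject pick it up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy stand-in for an event manager: listeners are keyed by a
// string "subject" (in Cayenne the DataDomain name plays this role).
class ToyEventManager {
    private final Map<String, List<Consumer<String>>> listeners = new HashMap<>();

    void addListener(String subject, Consumer<String> listener) {
        listeners.computeIfAbsent(subject, k -> new ArrayList<>()).add(listener);
    }

    void postEvent(String subject, String payload) {
        for (Consumer<String> l : listeners.getOrDefault(subject, List.of())) {
            l.accept(payload);
        }
    }
}

// Toy "local" bridge: sendExternalEvent() does no remote I/O at all;
// it re-injects the event into the same manager under the remote
// subject, so all local peer caches see it as if it arrived remotely.
class ToyLocalBridge {
    private final ToyEventManager manager;
    private final String remoteSubject;

    ToyLocalBridge(ToyEventManager manager, String remoteSubject) {
        this.manager = manager;
        this.remoteSubject = remoteSubject;
    }

    void sendExternalEvent(String payload) {
        manager.postEvent(remoteSubject, payload);
    }
}
```

Two listeners registered on the same subject (standing in for two per-DataContext caches) would both receive an event sent through the bridge, which is the "channeling updates to specific caches" effect described above.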
On Nov 23, 2004, at 9:28 PM, Mike Kienenberger wrote:
>> Andrus Adamchik <andru..bjectstyle.org> wrote:
>>> Then how about turning off shared cache altogether ("Use Shared
>>> Cache" checkbox for DataDomain). This will put Cayenne in a "1.0 mode" with
>>> DataContext having its own cache. This way new DataContexts will
>>> start empty, and will be filled with fresh data as you run queries or ...
> I'm now using this strategy, but I'm concerned about one thing.
> Since each DataContext uses its own cache, what happens in one session
> no longer is reflected in other sessions, correct?
> I'm probably going to start seeing optimistic locking failures when a
> long-running session tries to update data that a short-lived session had
> already reloaded and modified....
This archive was generated by hypermail 2.0.0 : Tue Nov 23 2004 - 23:54:23 EST