Re: Reconciling DataContexts

From: Kevin Menard (kmenar..ervprise.com)
Date: Fri Oct 19 2007 - 15:58:09 EDT

  • Next message: Andrus Adamchik: "Re: Reconciling DataContexts"

    On 10/18/07 5:53 PM, "Kevin Menard" <kmenar..ervprise.com> wrote:

    >> That fact is *almost* transparent as relationships of the local
    >> object are expanded as needed. What it doesn't cover is modified
    >> objects in the graph. This is where we could use a method like JPA
    >> 'merge' that would traverse already inflated graph and clone all
    >> local changes for objects attached to a given object into the target
    >> context.
    >
    > I'll have to double check this. I recall that not being the case, but it
    > has been some time since I last tried.

    Having looked at it again, I was mistaken; there is still a problem, but
    not the one I remembered. localObject() does not deal with transient
    properties at all, so anything you put into your subclass effectively
    gets reset when you do a localObject() call. Also, the object store grows
    with each localObject() call, rather than with the number of properties.

    So, looking at the problem again, I see a few issues with localObject():

    1) Overly verbose syntax. Consider:

        a.setSomething(b);

        versus

        a.setSomething((BType) a.getObjectContext()
            .localObject(b.getObjectId(), b));

        I think this can largely be addressed by adding a new case to
    willConnect(), though: if the contexts differ, look at the persistence
    state and call localObject() automatically when it makes sense. This also
    eases the transparency issue. Rather than using one method for new,
    unregistered objects (setXXX()) and another for committed, registered
    objects (localObject()), you can consolidate to just the setter.
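    To make the idea concrete, here is a toy sketch (not Cayenne's real
    API; the Context and Entity classes below are illustrative stand-ins)
    of a setter that translates a cross-context argument automatically:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for an ObjectContext: registered objects keyed by id,
// one entry per id, mirroring a per-context object store.
class Context {
    final Map<Integer, Entity> store = new HashMap<>();

    // Analogue of ObjectContext.localObject(): return this context's
    // copy of the object, registering one if needed.
    Entity localObject(Entity other) {
        return store.computeIfAbsent(other.id, id -> new Entity(id, this));
    }
}

class Entity {
    final int id;
    final Context context;
    Entity something;

    Entity(int id, Context context) {
        this.id = id;
        this.context = context;
    }

    // The proposed consolidated setter: if the argument lives in a
    // different context, translate it with localObject() rather than
    // forcing the caller to do so.
    void setSomething(Entity b) {
        if (b != null && b.context != null && b.context != this.context) {
            b = this.context.localObject(b);
        }
        this.something = b;
    }
}
```

    With this in place, callers write a.setSomething(b) regardless of
    which context b came from.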

    2) Loss of transient values.

        This could probably be addressed reflectively.
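    A possible shape for that reflective fix, sketched with plain
    java.lang.reflect (the helper and the example field are hypothetical;
    a real integration would skip fields backed by the ObjEntity rather
    than copying everything declared on the concrete class):

```java
import java.lang.reflect.Field;

class TransientCopier {
    // Copy every field declared on the concrete class from 'source' to
    // 'target'. After localObject() hands back a fresh peer, this would
    // restore subclass state that the translation dropped.
    static void copyDeclaredFields(Object source, Object target) {
        try {
            for (Field f : source.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                f.set(target, f.get(source));
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }
}

// Illustrative subclass with a non-persistent property that
// localObject() would otherwise reset.
class CustomEntity {
    String cachedDisplayName;
}
```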

    3) Growth of object store.

        This is trickier. Ideally, if I call setSomething(a) with 50 different
    instances of a, the object store would only have the latest a, since that's
    the only one that's going to be committed. What you have instead is 50
    different instances. With caching, I don't think DB access is the issue,
    but you will have an unbounded memory issue.

        By hiding this in a setter, it may be possible to unregister the old
    object first, thereby limiting the growth.
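    A minimal sketch of that "unregister first" idea, again with made-up
    names rather than Cayenne API, showing how evicting the previous
    instance keeps the store bounded by properties instead of calls:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a context's registry of translated objects.
class BoundedContext {
    final Map<Long, Object> registry = new HashMap<>();
    long nextId = 0;

    // Each localObject() call in the scenario described effectively
    // registers a new instance under a new id.
    long register(Object o) {
        long id = nextId++;
        registry.put(id, o);
        return id;
    }

    void unregister(long id) {
        registry.remove(id);
    }
}

class Holder {
    final BoundedContext context;
    Long currentId; // id of the instance this property currently holds

    Holder(BoundedContext context) {
        this.context = context;
    }

    // Setter that evicts the previously held instance before
    // registering the replacement, so 50 calls leave one entry, not 50.
    void setSomething(Object o) {
        if (currentId != null) {
            context.unregister(currentId);
        }
        currentId = context.register(o);
    }
}
```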

    I spent a decent amount of time looking into this, so I think the
    analysis is fairly accurate. Feel free to correct me if not, though.

    -- 
    Kevin
    
    This archive was generated by hypermail 2.0.0 : Fri Oct 19 2007 - 15:59:04 EDT