I'm beginning to think this is tied into the possible bug I reported on
devel the other day. Our schema has something that looks like this:
PSCollectionElement <->> PSItem <<-> FENotification
PSItem is used as a joining record. If you chop out a PSItem row, it
drops out of one of our queues. When I'm trying to delete, I do this:
collectionElement.removeFromItems(item);
notification.removeFromItems(item);
dataContext.deleteObject(item);
...
dataContext.commitChanges();
And then it goes boom. I backtraced and dumped
dataContext.modifiedObjects(), and it thought the collectionElement and
notification objects were both modified (even though they really
weren't). Also, I have no delete rules set up. All that should've
happened (I think) is the item gets removed from both sides of the
relationships and then deleted from the PSItem table. Or am I doing
something totally boneheaded (a distinct possibility).
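In case it's useful, here's roughly how I dumped the context state right
before the commit to see the spurious modifications (just a throwaway
debugging sketch using the standard DataContext accessors):

    // dump what Cayenne thinks is dirty before committing
    Iterator modified = dataContext.modifiedObjects().iterator();
    while (modified.hasNext()) {
        DataObject object = (DataObject) modified.next();
        System.out.println("modified: " + object.getObjectId());
    }
    Iterator deleted = dataContext.deletedObjects().iterator();
    while (deleted.hasNext()) {
        DataObject object = (DataObject) deleted.next();
        System.out.println("deleted: " + object.getObjectId());
    }
    dataContext.commitChanges(); // this is where it goes boom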
Thanks,
/dev/mrg
PS. dataContext.deletedObjects() shows the correct item to delete.
-----Original Message-----
From: Mike Kienenberger [mailto:mkienen..laska.net]
Sent: Wednesday, March 02, 2005 4:23 PM
To: cayenne-use..bjectstyle.org
Subject: Re: appendOptimisticLockingAttributes map snapshot error during
insert/update optimistic locking delete [Was: Re: Does this look
familiar?]
"Michael Gentry (Yes, I'm a Contractor)" <michael_gentr..anniemae.com>
wrote:
> It is an insert, followed by a DELETE, in the same DataContext. Here is
> the relevant stack dump (Cayenne 1.1):
If it's 1.1, that's before my changes with delete and optimistic locking.
>
> org.objectstyle.cayenne.access.ContextCommit.appendOptimisticLockingAttributes(ContextCommit.java:564)
> org.objectstyle.cayenne.access.ContextCommit.prepareUpdateQueries(ContextCommit.java:426)
The line above says it's happening for an update query. Can you figure out
what's in that update query -- it might shed more light.
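If you want to see it without stepping through ContextCommit, one rough
way is to diff each object Cayenne considers modified against its cached
snapshot just before committing. An untested sketch (getCachedSnapshot()
is the ObjectStore method I mention below):

    Iterator it = dataContext.modifiedObjects().iterator();
    while (it.hasNext()) {
        DataObject object = (DataObject) it.next();
        Map oldValues = dataContext.getObjectStore()
                .getCachedSnapshot(object.getObjectId());
        System.out.println("update candidate: " + object.getObjectId()
                + ", committed values: " + oldValues);
    }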
Perhaps you have a leftover update operation sitting around in a context
that either didn't get committed or didn't get rolled back after a problem.
One of the problems I recently hit was that these non-WebObjects servlets
happily process multiple requests by the same session at the same time (no
forced serialization). Thus, I had a user submit a request that created a
user account, then got impatient (because it was delaying on sending an
email with an activation code), and hit submit again. I was using
BasicServletConfiguration to get the DataContext bound to the session, and
that second request started running while the first action was still in
progress using the same DataContext (because it was the same session). A
second copy of the user account was added to the DataContext, and the
second request's commit saved both copies, leaving an empty context by the
time the first request finally got around to committing.
Since then, I've forced serialization of all requests by the same session.
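The serialization itself is nothing Cayenne-specific. A servlet filter
that synchronizes on a per-session lock object is enough; here's an
untested sketch (the "request.lock" attribute name is arbitrary):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class SessionSerializingFilter implements Filter {

        public void doFilter(ServletRequest request, ServletResponse response,
                FilterChain chain) throws IOException, ServletException {
            HttpSession session = ((HttpServletRequest) request).getSession();

            // look up (or lazily create) the per-session lock object
            Object lock;
            synchronized (session) {
                lock = session.getAttribute("request.lock");
                if (lock == null) {
                    lock = new Object();
                    session.setAttribute("request.lock", lock);
                }
            }

            // only one request per session gets past here at a time
            synchronized (lock) {
                chain.doFilter(request, response);
            }
        }

        public void init(FilterConfig config) {}

        public void destroy() {}
    }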
I also now explicitly check if my DataContext is dirty as my code should
never leave a DataContext in a dirty state between requests.
if (dataContext.hasChanges())
{
    String dirtyDataContextString = "Dirty DataContext found:\n";

    Iterator it = dataContext.deletedObjects().iterator();
    if (it.hasNext()) dirtyDataContextString += "  Deleted Objects:\n";
    while (it.hasNext()) {
        DataObject object = (DataObject) it.next();
        dirtyDataContextString += "    " + object + "\n";
    }

    it = dataContext.modifiedObjects().iterator();
    if (it.hasNext()) dirtyDataContextString += "  Modified Objects:\n";
    while (it.hasNext()) {
        DataObject object = (DataObject) it.next();
        dirtyDataContextString += "    " + object + "\n";
    }

    it = dataContext.newObjects().iterator();
    if (it.hasNext()) dirtyDataContextString += "  New Objects:\n";
    while (it.hasNext()) {
        DataObject object = (DataObject) it.next();
        dirtyDataContextString += "    " + object + "\n";
    }

    NotificationManager.getInstance().reportProgrammingError(
            identityString, new RuntimeException(dirtyDataContextString));

    // can't dump changes because another thread might be using it
    // dataContext.rollbackChanges();

    // create a temporary new DataContext and hope the problem goes away
    // next time
    dataContext = DataContext.createDataContext();
}
>
> org.objectstyle.cayenne.access.ContextCommit.commit(ContextCommit.java:156)
> org.objectstyle.cayenne.access.DataContext.commitChanges(DataContext.java:1266)
> org.objectstyle.cayenne.access.DataContext.commitChanges(DataContext.java:1236)
> qm.QueueDetails.processFormSubmission(QueueDetails.java:386)
> ...
> (everything else is Tapestry cruft)
>
>
> dataObject.getObjectId() returns:
>
> (org.objectstyle.cayenne.ObjectId) FENotification: <identifier: [..0ab17>
>
> But
>
> dataObject.getDataContext().getObjectStore()
>     .getRetainedSnapshot(dataObject.getObjectId()) returns:
>
> null
>
>
> So, insert object, then delete it (in same DC) and then boom. Null
> pointer exception. The DC is stored in a Tapestry Visit. The delete is
> done in a different page, but immediately after the insert. And it
> doesn't happen every time. :-)
>
> /dev/mrg
>
>
>
> > -----Original Message-----
> > From: Mike Kienenberger [mailto:mkienen..laska.net]
> > Sent: Tuesday, March 01, 2005 6:29 PM
> > To: cayenne-use..bjectstyle.org
> > Cc: Gentry, Michael (Contractor)
> > Subject: appendOptimisticLockingAttributes map snapshot error during
> > insert/update optimistic locking delete [Was: Re: Does this look
> > familiar?]
> >
> >
> > "Michael Gentry (Yes, I'm a Contractor)"
<michael_gentr..anniemae.com>
> > wrote:
> >> /**
> >>  * Appends values used for optimistic locking to a given snapshot.
> >>  */
> >> private void appendOptimisticLockingAttributes(
> >>         Map qualifierSnapshot,
> >>         DataObject dataObject,
> >>         List qualifierAttributes) throws CayenneException {
> >>
> >>     Map snapshot = dataObject.getDataContext().getObjectStore()
> >>             .getRetainedSnapshot(dataObject.getObjectId());
> >>
> >>     Iterator it = qualifierAttributes.iterator();
> >>     while (it.hasNext()) {
> >>         DbAttribute attribute = (DbAttribute) it.next();
> >>         String name = attribute.getName();
> >>         if (!qualifierSnapshot.containsKey(name)) {
> >>             qualifierSnapshot.put(name, snapshot.get(name));
> >>         }
> >>     }
> >> }
> >>
> >>
> >> For some reason, I thought you had added the optimistic locking stuff
> >> to Cayenne.
> >
> > Yeah, I provided the original patches, although Andrus did a lot of
> > cleanup and refactoring work before committing them.
> >
> >
> >> Every now and then I get an exception on the "Map snapshot = ..."
> >> line. It *usually* happens when I do an insert and then go do an
> >> update. I haven't quite nailed down exactly what is going on, but
> >> thought if you had written that part, you could give me a brief
> >> overview of what it's trying to do and then I could try to debug
> >> better.
> >
> > I didn't do this particular part, but Andrus and I just talked about
> > it as I hit an issue with it when adding optimistic locking on deletes.
> >
> > What this is supposed to do is collect the old values of each object
> > to optimistically update and bind them to the batch update query.
> > I *think* but am not completely sure that getRetainedSnapshot() is a
> > cache of these values explicitly saved for this purpose and is
> > different from getCachedSnapshot() (which is performance-related and
> > isn't guaranteed to exist).
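> >
> > If the retained snapshot can legitimately be missing there, a guard
> > like this untested sketch would at least fail with a clearer message
> > than a NullPointerException:
> >
> >     Map snapshot = dataObject.getDataContext().getObjectStore()
> >             .getRetainedSnapshot(dataObject.getObjectId());
> >     if (snapshot == null) {
> >         throw new CayenneException("No retained snapshot for "
> >                 + dataObject.getObjectId());
> >     }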
> >
> > The actual exception generated would be helpful.
> >
> > If I had to debug a situation like this, I'd put in some code to
> > output a stack trace every time the retained snapshot for your object
> > changes.
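> >
> > Something as dumb as this, dropped into the spot in ObjectStore where
> > the retained snapshot is recorded or removed, would do (objectId below
> > stands for whatever id variable is in scope there):
> >
> >     // temporary debugging aid -- not for production
> >     new Exception("retained snapshot changed for " + objectId)
> >             .printStackTrace();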
> >
> > Andrus may have better ideas or more accurate information on this. He
> > was about to get assigned the issue of retaining snapshot info for
> > optimistic deletes anyway :) I just hadn't finished creating a test
> > case yet.
> >
> > -Mike
>