Andrei,
I renamed net.ash to org.objectstyle.ashwood in the SourceForge Ashwood CVS
repository and the RSC marc repository (along with the imports in my version
of Cayenne).
> 1. Insert/Delete order of dependent objects,
> 2. Insert/Delete order including reflexive relationships.
> 3. Delete/Update order for delete rules ("cascade", "nullify", "deny").
> BTW, does "deny" throw an exception when triggered?
> 4. Handling inserts/deletes of the records in the join tables for flattened
> relationships.
These are in the latest CVS version, aren't they? I am going to have a look
and see how they can be fitted into my commit. The question is: are we going
to stick with ContextCommit, or do you plan to remodel the whole thing?
> I understand that Craig introduced special ordering in DataContext for
> cases 2..4 simply because our original implementation of (1) wasn't generic
> enough (see Craig's comments in CVS DataContext, line 763)? I still believe
> (but have no proof) that with the right implementation of sorting
> (Ashwood?), 2..4 become subsets of (1). But looks like in addition to
> working with entity dependency graph, we need an extra object dependency
> graph.
You are quite right. In some cases object graphs are needed in addition to
entity graphs. The prudent thing to do is to employ them only for reflexive
tables and cycles (see also
http://objectstyle.org/cayenne/lists/cayenne-devel/2003/01/0023.html) and to
handle the more regular cases via entity sorting, as sketched below.
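To make "the more regular cases" concrete: for non-reflexive entities a plain
topological sort over the entity dependency graph is all that is needed, with
reflexive relationships and cycles deferred to the object level. The following
is only a rough sketch with made-up names (entities as plain strings,
dependencies as a map), not the actual Ashwood code:

import java.util.*;

public class EntityInsertOrder {

    // deps: entity name -> names of the entities it depends on (its FK "masters").
    // Every entity must appear as a key, possibly with an empty set.
    public static List<String> insertOrder(Map<String, Set<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();

        for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
            String entity = e.getKey();
            int count = 0;
            for (String master : e.getValue()) {
                if (master.equals(entity)) {
                    continue; // reflexive relationship: left to object-level ordering
                }
                count++;
                dependents.computeIfAbsent(master, k -> new ArrayList<>()).add(entity);
            }
            remaining.put(entity, count);
        }

        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : remaining.entrySet()) {
            if (e.getValue() == 0) {
                ready.add(e.getKey());
            }
        }

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String entity = ready.poll();
            order.add(entity);
            for (String dep : dependents.getOrDefault(entity, Collections.emptyList())) {
                if (remaining.merge(dep, -1, Integer::sum) == 0) {
                    ready.add(dep);
                }
            }
        }

        if (order.size() != deps.size()) {
            throw new IllegalStateException("Cycle between entities; needs object-level ordering");
        }
        // insert in this order, delete in the reverse order
        return order;
    }
}

Delete ordering falls out for free as the reverse of the insert ordering, so
one sort per commit is enough for the regular cases.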
> Another thing to consider is the current Cayenne design:
>
> a. DataContext shouldn't do routing to nodes (DataDomain will)
No objections in principle on my part, as long as the context is notified upon
commit to each node and there is no loss of functionality or consistency.
> b. DataDomain and DataNode should know nothing about DataObjects, and
> especially relationships between them. This is the reason I am passing query
> objects from DataContext down.
I do agree, and my code satisfies this condition: a data node processes
batches equivalent to insert, update, and delete queries. Even BatchInterpreter
knows only about a list of DbAttributes (one has to get the correct data types
somewhere to run PreparedStatements; in any case it is just plain, simple
database metadata).
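For illustration, this is roughly the shape of what a node does with one
batch. It is only a sketch in plain JDBC with made-up names, where the int[]
of java.sql.Types values stands in for what would really be taken from the
DbAttributes:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchRunner {

    // Runs one parameterized statement for a whole batch of rows.
    // sqlTypes[i] is the java.sql.Types value for parameter i, i.e. the only
    // piece of mapping metadata the node really needs.
    public static int[] runBatch(Connection con, String sql, int[] sqlTypes,
                                 List<Object[]> rows) throws SQLException {
        try (PreparedStatement st = con.prepareStatement(sql)) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    if (row[i] == null) {
                        st.setNull(i + 1, sqlTypes[i]);
                    } else {
                        st.setObject(i + 1, row[i], sqlTypes[i]);
                    }
                }
                st.addBatch();
            }
            return st.executeBatch();
        }
    }
}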
> c. Sort policies (on/off) should be set by node based on DbAdapter (if the
> database doesn't care about constraints, we shouldn't too).
True! I am trying to go in this direction; see
DataNode.resetReferentialIntegritySupport(), for example. Of course there are
many possible approaches, and one may prefer to do it a different way.
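To be clear about what I mean, the whole policy boils down to the node asking
the adapter once and flipping a flag. The names below are hypothetical
stand-ins (supportsFkConstraints() is just a placeholder for whatever the
DbAdapter ends up exposing), not the real classes:

// Sketch only: the node decides whether to sort at all based on the adapter.
interface DbAdapter {
    boolean supportsFkConstraints();
}

class SortingDataNode {
    private final DbAdapter adapter;
    private boolean sortingEnabled;

    SortingDataNode(DbAdapter adapter) {
        this.adapter = adapter;
        resetReferentialIntegritySupport();
    }

    // If the database does not enforce FK constraints, there is nothing to
    // order for, so the sorting pass is skipped entirely.
    void resetReferentialIntegritySupport() {
        this.sortingEnabled = adapter.supportsFkConstraints();
    }

    boolean isSortingEnabled() {
        return sortingEnabled;
    }
}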
>
> Given (c), we shouldn't ideally sort in the DataContext at all.
As long as you choose to keep DataDomain and DataNode unaware of DataObjects,
the context is the only place left to do the sorting; only here do we have
all the relevant information. To sort objects for primary key initialization
we need all the data maps in the domain, because in principle one can have a
toDependentKey relationship between entities defined for different nodes (why
not - it is a logical dependency telling us where to take the primary key
value for an object). When you sort data for insert or delete operations you
also need a description of the referential dependencies at the node level, so
again you end up with DataObjects and their Entities.

You could riposte with the suggestion to create a query for every object,
carrying some kind of hints, so as to formally separate the data objects from
the sorting procedure. But I truly believe that riposte does not hold. First,
these hints would have to be assigned by the context (where else, indeed?),
which would amount to doing the actual sorting, so what would be the point of
merely moving Collections.sort() to a different location? Second, if there
are already 10K new and modified objects in the context, does one really want
to create 10K more query objects and bury the last traces of performance and
memory under their weight, only to separate one completely internal part from
another just as internal one? And this is a very real situation: take 100
tables and make about 100 inserts + updates + deletes per table, and there
you have it; for almost any database processing them all at once is a piece
of cake, but Cayenne will struggle to devour such numbers. The testing app
monitors memory usage, so I could see it for myself (oh, and don't forget the
additional caching of data by JDBC drivers in batch mode).
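To illustrate the data-map point only (not the real code, all names made up):
the "where does my primary key come from" relation has to be assembled from
every data map in the domain before any ordering can happen, because the
dependent and the master entity may live on different nodes.

import java.util.*;

class DomainPkDependencies {

    // dataMaps: one entry per data map in the domain; each entry lists its
    // to-dependent-key relationships as {dependentEntity, masterEntity} pairs.
    // Returns: dependent entity name -> master entity name it takes its PK from.
    static Map<String, String> collect(Collection<List<String[]>> dataMaps) {
        Map<String, String> pkMaster = new HashMap<>();
        for (List<String[]> relationships : dataMaps) {
            for (String[] rel : relationships) {
                pkMaster.put(rel[0], rel[1]);
            }
        }
        return pkMaster;
    }
}

The resulting map is exactly the set of edges the entity-level ordering from
the earlier sketch would consume; nothing at the node level has enough
visibility to build it.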
> (For the new "delete" rules explanation go here:)
> http://objectstyle.org/cayenne/lists/cayenne-devel/2002/12/0055.html
>
>
> DbRelationship *is* a foreign key. Even if the corresponding constraint is
> not defined, it is a "logical" FK. I guess at some point we can start
> making this distinction, but for now I suggest ignoring it. If the database
> does not support FK (MySQL), then DbAdapter allows globally switching
> ordering off.
>
Alright, I believe this distinction to be very important, but if we cannot go
for it now, let's set it aside for a while. I can easily modify my code to
work with data maps instead of JDBC metadata. In that case only those
DbRelationships would be taken into account that are defined as "to one" and
have a primary key as the destination.
Andriy.