Re: Duplicate Key Problem

From: Andrus Adamchik (andru..bjectstyle.org)
Date: Thu Jul 13 2006 - 18:15:01 EDT


    Our current, more deterministic strategy has worked OK (I guess partly
    because most database adapters are sequence-based and don't have to
    deal with locking). But I am certainly not a fan of pessimistic
    locking, and I considered a strategy similar to what Craig outlined
    some time ago as well.

    I guess we can make it an option for MySQLAdapter (the only one that
    uses explicit locking) and use it with a hard limit on the number of
    conflicts.
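
    Roughly, a minimal sketch of that capped optimistic loop in plain JDBC
    might look like the code below. The table and column names follow
    Craig's example further down, and the class, method, and MAX_CONFLICTS
    names are just illustrative, not actual Cayenne API.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class OptimisticPkSketch {

        // Hard limit before we give up instead of spinning forever.
        private static final int MAX_CONFLICTS = 10;

        public static long getNextKey(Connection con, long increment)
                throws SQLException {
            for (int attempt = 0; attempt < MAX_CONFLICTS; attempt++) {
                long key;
                try (PreparedStatement select = con.prepareStatement(
                        "SELECT current_key_value FROM auto_pk_table");
                     ResultSet rs = select.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("auto_pk_table is empty");
                    }
                    key = rs.getLong(1);
                }
                long nextKey = key + increment;
                // Compare-and-swap: the WHERE clause matches only if nobody
                // else bumped the value since our SELECT.
                try (PreparedStatement update = con.prepareStatement(
                        "UPDATE auto_pk_table SET current_key_value = ? "
                                + "WHERE current_key_value = ?")) {
                    update.setLong(1, nextKey);
                    update.setLong(2, key);
                    if (update.executeUpdate() == 1) {
                        // We won the race; keys (key, nextKey] are ours.
                        return nextKey;
                    }
                }
                // 0 rows updated: somebody else got there first, retry.
            }
            throw new SQLException("Gave up allocating a PK range after "
                    + MAX_CONFLICTS + " conflicts");
        }
    }

    With autocommit on there is no LOCK/UNLOCK pair to leave dangling if a
    connection dies halfway through; the worst case is a wasted, never-used
    range of keys, and the retry cap keeps a conflict storm from looping
    forever.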

    Andrus

    On Jul 13, 2006, at 4:54 PM, Craig L Russell wrote:
    > Hi,
    >
    > This might be a really random suggestion, but couldn't you use the
    > following strategy, even with autocommit=true?
    >
    > public long getNextKey() {
    >     boolean conflict = true;
    >     while (conflict) {
    >         key = SELECT current_key_value FROM auto_pk_table
    >         next_key = key + increment;
    >         UPDATE auto_pk_table SET current_key_value = next_key
    >             WHERE current_key_value = key
    >         conflict = (0 rows updated)
    >     }
    >     return next_key;
    > }
    >
    > All you have to worry about here is making sure that your increment
    > is large enough to avoid frequent conflicts.
    >
    > Craig
    >
    > On Jul 13, 2006, at 6:17 AM, Gentry, Michael (Contractor) wrote:
    >
    >> The basic strategy to refresh the primary key cache in MySQL is:
    >>
    >> * lock table
    >> * select next key values from auto_pk_support
    >> * update auto_pk_support with a new next key (20 keys by default)
    >> * unlock table
    >>
    >> I was just looking at the MySQL docs for "LOCK TABLES" and it
    >> appears to
    >> work differently with InnoDB. Which DB storage type are you using?
    >> Also, Andrus' suggestion that the connection may have gone down could
    >> be correct. It would've happened between the select and update above:
    >> an application got 20 PKs, but the connection was broken before it
    >> could update the PK table, and then another application came along and
    >> got the same 20 PKs.
    >>
    >> Any chance you can use PostgreSQL? It uses sequences, and fetching
    >> the next value from a sequence is an atomic operation.
    >>
    >> Thanks,
    >>
    >> /dev/mrg
    >>
    >> PS. Andrus: If a connection is broken and then Cayenne auto-
    >> reconnects,
    >> does it discard the PK cache? (I'm guessing not, which could explain
    >> this.)
    >>
    >>
    >> -----Original Message-----
    >> From: Christian Mittendorf [mailto:christian.mittendor..reenet.de]
    >> Sent: Thursday, July 13, 2006 6:23 AM
    >> To: cayenne-use..ncubator.apache.org
    >> Subject: Re: Duplicate Key Problem
    >>
    >>
    >> I haven't found any hint to something abnormal in my log files yet.
    >> However, we experienced the duplicate key problem some minutes ago
    >> and I was able to track down the problem a bit.
    >>
    >> Only a single WebApp is affected, and these "Duplicate entry" errors
    >> have been occurring since yesterday, around 15:30. If I search for
    >> this exception I can find this ID list:
    >>
    >> $ cat x.log.2006-07-12 x.log | grep 'message from server'|uniq
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048820' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048821' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048822' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048823' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048824' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048825' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048826' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048827' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048828' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048829' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048830' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048831' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048832' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048833' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048834' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048835' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048836' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048837' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048836' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048838' for key 1"
    >> Caused by: java.sql.SQLException: null, message from server:
    >> "Duplicate entry '17048839' for key 1"
    >>
    >> It seems as if one batch of PKs (17048820 - 17048839) was used twice.
    >> How does Cayenne's PK magic work? Does each DataContext get its
    >> own set of IDs?
    >>
    >>
    >> Christian
    >>
    >>
    >>
    >> On 12.07.2006, at 23:40, Andrus Adamchik wrote:
    >>
    >>> Hmm... MySQLPkGenerator runs "UNLOCK" in the finally block. Of
    >>> course, if the connection goes down or something else equally bad
    >>> happens during unlock, then you can end up leaving the table locked.
    >>> I'll be curious to know what exactly happened, as we may improve the
    >>> unlock reliability as a result.
    >>>
    >>> Andrus
    >>>
    >>>
    >>> On Jul 12, 2006, at 5:29 PM, Christian Mittendorf wrote:
    >>>
    >>>> Yes, all of our applications on this database are Cayenne-based.
    >>>> I will have a look at the logs tomorrow and see if I can find some
    >>>> other exceptions...
    >>>>
    >>>> Thanks,
    >>>> Christian
    >>>>
    >>>> On 12.07.2006, at 19:56, Gentry, Michael (Contractor) wrote:
    >>>>
    >>>>> These could be related, then. I know the MySQL adapter has to
    >>>>> lock the
    >>>>> auto_pk_support table to generate keys and perhaps something
    >>>>> happened to
    >>>>> leave the table locked. Did you see any other exceptions?
    >>>>>
    >>>>> Also, you said you have multiple applications hitting the same
    >>>>> database.
    >>>>> Are all of these applications Cayenne-based?
    >>>>>
    >>>>> Thanks,
    >>>>>
    >>>>> /dev/mrg
    >>>>>
    >>>>>
    >>>>> -----Original Message-----
    >>>>> From: Christian Mittendorf
    >>>>> [mailto:christian.mittendor..reenet.de]
    >>>>> Sent: Wednesday, July 12, 2006 1:50 PM
    >>>>> To: cayenne-use..ncubator.apache.org
    >>>>> Subject: Re: Duplicate Key Problem
    >>>>>
    >>>>>
    >>>>> We don't use MySQL's autoincrement feature.
    >>>>>
    >>>>> On 12.07.2006, at 16:35, Gentry, Michael (Contractor) wrote:
    >>>>>
    >>>>>> Are you using MySQL's autoincrement feature for the PKs or the
    >>>>>> auto_pk_support table from Cayenne?
    >>>>>>
    >>>>>> -----Original Message-----
    >>>>>> From: Christian Mittendorf
    >>>>>> [mailto:christian.mittendor..reenet.de]
    >>>>>> Sent: Wednesday, July 12, 2006 4:29 AM
    >>>>>> To: cayenne-use..ncubator.apache.org
    >>>>>> Subject: Duplicate Key Problem
    >>>>>>
    >>>>>>
    >>>>>> Hello!
    >>>>>>
    >>>>>> We are running a MySQL 5.0 server using InnoDB tables for our
    >>>>>> applications. We have multiple web applications accessing the same
    >>>>>> database, and the system is running smoothly and without problems
    >>>>>> (almost).
    >>>>>>
    >>>>>> But yesterday we experienced some strange errors. During the
    >>>>>> afternoon, some CayenneRuntimeExceptions appeared in the log file,
    >>>>>> which were caused by:
    >>>>>>
    >>>>>> Caused by: java.sql.SQLException: null, message from server:
    >>>>>> "Duplicate entry '4353880' for key 1"
    >>>>>>
    >>>>>> A bit later, other CayenneRuntimeExceptions appeared. For those
    >>>>>> exceptions the cause was:
    >>>>>>
    >>>>>> Caused by: java.sql.SQLException: Deadlock found when trying
    >>>>>> to get
    >>>>>> lock; Try restarting transaction, message from server: "Lock
    >>>>>> wait
    >>>>>> timeout exceeded; try restarting transaction"
    >>>>>>
    >>>>>> I'm now wondering: are the two errors related to each other?
    >>>>>>
    >>>>>> There are, from my point of view, two possible causes for the
    >>>>>> "Duplicate entry..." message:
    >>>>>>
    >>>>>> - two INSERT statements for the same object, which might be
    >>>>>> possible if the application were clustered, which ours are not, or
    >>>>>> - use of the same key for two different objects
    >>>>>>
    >>>>>> Has anybody else experienced something like this? Are there any
    >>>>>> hints as to what I might do to avoid such situations?
    >>>>>>
    >>>>>> Christian
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>
    >>>>
    >>>>
    >>>
    >>
    >
    > Craig Russell
    > Architect, Sun Java Enterprise System http://java.sun.com/products/jdo
    > 408 276-5638 mailto:Craig.Russel..un.com
    > P.S. A good JDO? O, Gasp!
    >


