Re: raw dataport

From: Bryan Lewis (brya..aine.rr.com)
Date: Wed Jun 14 2006 - 08:15:24 EDT


    Nice. We've been using dataport to slurp our database from Oracle to
    Postgres with no trouble, but we have only a few BLOB columns, and they
    don't get that big.

    I would've thought that committing based on the number of bytes would
    have been a sufficient fix on its own. Was it necessary to drop down to
    JDBC? Maybe that was how you got the byte count.
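
    For concreteness, here is roughly what I picture for the byte-counted
    commit with plain JDBC. This is just a sketch; the table and column
    names and the 8 MB threshold are made up for illustration, not taken
    from DataPort:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Sketch: stream rows with plain JDBC and commit once the bytes
        // written since the last commit pass a threshold, instead of
        // counting rows. Table/column names are invented for illustration.
        public class ByteBatchedCopy {

            private static final long COMMIT_THRESHOLD_BYTES = 8L * 1024 * 1024;

            public static void copy(Connection src, Connection dst) throws Exception {
                dst.setAutoCommit(false);
                long bytesSinceCommit = 0;

                try (PreparedStatement select = src.prepareStatement(
                         "SELECT id, body FROM documents");
                     PreparedStatement insert = dst.prepareStatement(
                         "INSERT INTO documents (id, body) VALUES (?, ?)");
                     ResultSet rs = select.executeQuery()) {

                    while (rs.next()) {
                        long id = rs.getLong(1);
                        String body = rs.getString(2); // CLOB read as a String

                        insert.setLong(1, id);
                        insert.setString(2, body);
                        insert.executeUpdate();

                        // The byte count falls out of the values just read;
                        // 2 bytes per char approximates the UTF-16 in-memory size.
                        bytesSinceCommit += (body == null) ? 0 : 2L * body.length();

                        if (bytesSinceCommit >= COMMIT_THRESHOLD_BYTES) {
                            dst.commit();
                            bytesSinceCommit = 0;
                        }
                    }
                    dst.commit(); // flush whatever is left after the loop
                }
            }
        }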

    Tore Halset wrote:

    > Hello.
    >
    > Anyone got dataport to work on huge databases with lots of rows and
    > lots of BLOBs/CLOBs? I had problems porting over one of our databases
    > yesterday. One of the tables has ~12M rows with CLOBs. Even though
    > INSERT_BATCH_SIZE is 1000, it would just go on forever without
    > committing the first 1000 rows. It would also gladly throw
    > OutOfMemoryExceptions...
    >
    > I ended up writing a new DataPort.processInsert that uses the model to
    > create plain JDBC SQL statements. I also changed the partial-commit
    > algorithm to commit based on the number of bytes read/written since
    > the previous commit instead of the number of rows.
    >
    > After the change, DataPort would port anything without problems :)
    > The 17 GB MS SQL Server database made it over to PostgreSQL on my old
    > PowerBook in a few hours without any memory trouble.
    >
    > So, what do you think? Am I using the current DataPort incorrectly?
    > Should this feature replace the current dataport, be enabled with a
    > "raw" flag, or perhaps be available as a new Ant task? It is at least
    > useful for me :) After 1.2, of course.
    >
    > Regards,
    > - Tore.
    >
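
    On the Ant-task idea above: if the raw mode did become an Ant task, I
    would picture wiring it up something like the snippet below. To be
    clear, the task name, class, and attributes here are purely
    hypothetical, invented for illustration:

        <!-- Hypothetical Ant wiring for a "raw" data port; the task name,
             class, and attributes are invented for illustration only. -->
        <taskdef name="rawdataport"
                 classname="org.objectstyle.cayenne.tools.RawDataPortTask"/>

        <rawdataport projectFile="cayenne.xml"
                     sourceNode="mssql-node"
                     destNode="postgres-node"
                     commitBytes="8388608"/>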


