Fetch limit has been around since Cayenne 2.0, and it's not working as
I expected.
Stepping through the code, it's performing the limit operation after the
query has run against the database. For example, a table with 100,000
rows will be read into memory even with a fetch limit of 100; Cayenne
then provides a wrapper around the iterator which returns only 100 records.
This behaviour really needs to be documented. More to the point, it is
not what I would expect from an ORM: I would expect it to use the
database to apply the limit.
For example:
-- MySQL
select col from tbl limit 20;
-- Oracle
select col from tbl where rownum <= 20;
-- Microsoft SQL Server
select top 20 col from tbl;
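
For reference, here is a minimal sketch of the kind of query we are
running (the "Artist" entity and "context" variable are placeholders,
not our real model):

import java.util.List;

import org.apache.cayenne.ObjectContext;
import org.apache.cayenne.query.SelectQuery;

// Build a query with a fetch limit of 100.
SelectQuery query = new SelectQuery(Artist.class);
query.setFetchLimit(100);

// Expected: the limit is pushed into the generated SQL (LIMIT / TOP / ROWNUM).
// Observed: the full result set is fetched and then trimmed to 100 in memory.
List results = context.performQuery(query);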
We are going to have to revisit a bunch of code after figuring this out :(
regards Malcolm Edgar
On Wed, Jun 17, 2009 at 11:37 AM, Robert Zeigler <robert.zeigle..oxanemy.com> wrote:
> I don't think the behavior changed, per se. Rather, setFetchLimit is a
> relatively new feature, and may not be properly supported by all of the db
> adaptors yet.
>
> Robert
>
> On Jun 16, 2009, at 7:28 PM, Malcolm Edgar wrote:
>
>> Hi Guys,
>>
>> On SQL Server we are finding that setting the Fetch Limit on a
>> SelectQuery does not modify the SQL query (to set TOP or SET ROWCOUNT),
>> so the database is not limiting the number of rows returned; it
>> appears that Cayenne is limiting the number of rows returned in
>> memory?
>>
>> This is killing our application with OOM errors. Did this behaviour
>> change? We are using Cayenne 3.0M5
>>
>> regards Malcolm Edgar
>
>