How much performance is improved when using LIMIT in an SQL statement?

Suppose I have a table in my database with records 1.000.000.

If I do:

SELECT * FROM [Table] LIMIT 1000

Will this query take the same time as if I had a table with records 1000and just do:

SELECT * FROM [Table]

?

I'm not looking if it takes exactly the same time. I just want to know if the first one will take much longer than the second.

I said the 1.000.000notes, but it could be 20.000.000. It was just an example.

Edit:
Of course, when using LIMIT and without using it in the same table, a query built using LIMIT should execute faster, but I do not ask that ...

To make it shared:

Table1: X
Table2: Y records

(X << Y)

:

SELECT * FROM Table1

SELECT * FROM Table2 LIMIT X

2:
:

5 . ( 100% ) 5.000.000 . SQL Server CE 3.5, Entity Framework ORM LINQ to SQL .

, , ( ). , .

, - , ( X ) X (), , ...

, 5.000.000 , 1000 , , , 5.000.000.

+5
3

, , . , SELECT. SQL ORDER BY, , , . , .

X = Y, , , , - , - SELECT. .

Y > X , .

, Y → X ( Y , X), LIMIT MAY . - - , , . , , , .

, 1000 1 10 , . 1000000 3-4 10000 . 10 10000 - 3-4 , .

, , , .

+1

TAKE 1000 1000000 - 1000000/1000 (= 1000) , ( ) 1000/1000000 . , , , .

() , - , . , , :

  • ORDER BY - - .
  • ORDER BY - - , TAKE,
    • ORDER BY
    • (TAKE count)
    • , IO/,

1000 1000 , ( ), TAKE 1000 1 , (1) (2)

+3

, , . . , - .

, - , .

.

, , . .. , , , , .

, , .

MySQL LIMIT OFFSET ( #), , .

It is not right to start thinking about circuit design and cleaning records until you take advantage of this and a bunch of other strategies. In this case, do not solve problems that you do not have. In fact, tables with several million rows are small if they are properly indexed.

0
source

All Articles