OK, first things first: LIMIT hides a lot of bad queries, until someone adds an ORDER BY. With a plain LIMIT, the engine can stop as soon as the requested number of rows has been produced; once an ORDER BY is added, the full result set must be generated and sorted before the LIMIT is applied, even though the extra rows stay hidden from the programmer. If a LIMIT'd query slows down dramatically when ORDER BY is added, it was not a very good query to start with.
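To make that concrete, here is a sketch against a hypothetical content table (the table and column names are assumptions for illustration, not from the original question):

```sql
-- Fast: the engine can stop after producing 20 rows.
SELECT id, title
FROM tblUsrContent
LIMIT 20;

-- Slow: every matching row must be generated and sorted
-- before the first 20 can be returned (unless an index
-- already delivers rows in created_at order).
SELECT id, title
FROM tblUsrContent
ORDER BY created_at DESC
LIMIT 20;
```

The usual fix is an index on the ORDER BY column (here `created_at`), which lets the engine read rows already in sorted order and stop early again.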
However, there are a few small changes you can make to your query (and database settings) to improve the situation. A lot stands out in the EXPLAIN plan (just by including one you're ahead of most people asking this kind of question): around 240,000 rows are collected into the intermediate result set, "Using filesort" suggests a two-pass sort is happening, and the query also creates a temporary table. I would look at increasing your sort_buffer_size, but be careful not to make it too large: as I recall it is a per-session buffer, not a global one, so don't set it to 256 MB if you have 100 simultaneous sessions. 4 MB or 8 MB would be good starting points.
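As a sketch, you can raise the buffer for your current session first and re-run the query before touching the global setting (the values are the starting points suggested above):

```sql
-- Per-session: affects only this connection, safe to experiment with.
SET SESSION sort_buffer_size = 4 * 1024 * 1024;  -- 4 MB

-- Global: applies to new connections. Remember this buffer is
-- allocated per session that needs a sort, so keep it modest.
SET GLOBAL sort_buffer_size = 8 * 1024 * 1024;   -- 8 MB
```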
If this does not improve the situation, I would start working on the query itself: the EXPLAIN output tells us that the lcase_influence index has keys of 300+ bytes. If you move the influence string out into a separate tblInfluence table, store just tblInfluence.id in the tbluserinfluences table and index that instead, you will shrink both the tbluserinfluences table and its index.
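A minimal sketch of that normalization (the column names, such as `influence`, and the string length are assumptions; adjust them to the real schema):

```sql
-- New lookup table holding each distinct influence string once.
CREATE TABLE tblInfluence (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    influence VARCHAR(100) NOT NULL,
    UNIQUE KEY ux_influence (influence)
);

-- Populate it from the existing data.
INSERT INTO tblInfluence (influence)
SELECT DISTINCT influence FROM tbluserinfluences;

-- Add a narrow integer foreign key and index it.
ALTER TABLE tbluserinfluences
    ADD COLUMN influence_id INT UNSIGNED NOT NULL,
    ADD KEY ix_influence_id (influence_id);

-- Backfill the new column from the lookup table.
UPDATE tbluserinfluences u
JOIN tblInfluence i ON i.influence = u.influence
SET u.influence_id = i.id;
```

Once the backfill is verified, the old string column and the 300+ byte lcase_influence index can be dropped, leaving a 4-byte integer index in their place.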
If this still does not fix the problem, I would look at changing the sort so that it sorts only the minimum set of fields, not the entire output record. I would also join tblUsrContent directly to tbluserinfluences - I suspect that would not have much effect, but if this were my code, I would prefer short, direct joins over long join chains where possible.
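One common way to sort only the minimum fields is a "deferred join": sort just a narrow key column, then join the wide rows back for only the rows that survive the LIMIT (again, table and column names here are illustrative assumptions):

```sql
-- The inner query sorts only a narrow id/created_at pair, so the
-- filesort handles small rows; the outer join then fetches the
-- full records for just the 20 winning ids.
SELECT c.*
FROM tblUsrContent AS c
JOIN (
    SELECT id
    FROM tblUsrContent
    ORDER BY created_at DESC
    LIMIT 20
) AS top USING (id);
```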