What could lead to poor SQL Server performance?

From time to time I find that retrieving data from my database is slow. I try to figure out which part of my SQL query is the problem and optimize it, and I also add some indexes to the table. But this does not always solve the problem.

My question is:

Are there any other tricks to improve SQL Server performance?

What other reasons could affect SQL Server performance?

5 answers
  • Inefficient query design
  • Auto-growing data and log files
  • Too many indexes to be maintained on a table
  • Too few indexes on a table
  • Not properly choosing your clustered index
  • Index fragmentation due to poor maintenance (see the fragmentation check sketch after this list)
  • Heap fragmentation due to the lack of a clustered index
  • Too high a FILLFACTOR used on indexes, causing excessive page splits
  • Too low a FILLFACTOR used on indexes, causing excessive space usage and increased scan times
  • Not using covering indexes where appropriate
  • Non-selective indexes being used
  • Out-of-date statistics
  • Databases not normalized properly
  • Transaction logs and data files sharing the same drive spindles
  • The wrong memory configuration
  • Too little memory
  • Too little CPU
  • Slow hard disks
  • Failing hard disks or other hardware
  • A 3D screensaver on your database server chewing up your CPU
  • Sharing the database server with other processes that compete for CPU and memory
  • Lock contention between queries
  • Queries that scan entire large tables
  • Front-end code that searches data inefficiently (nested loops, row by row)
  • CURSORs that are not needed and/or are not FAST_FORWARD
  • Not setting NOCOUNT when you have large tables being cursored over
  • A transaction isolation level that is higher than necessary (e.g. using SERIALIZABLE when it is not needed)
  • Too many round trips between the client and SQL Server (a chatty interface)
  • An unnecessary linked server query
  • A linked server query that targets a table on a remote server with no primary or candidate key defined
  • Selecting too much data
  • Excessive query recompilations

oh, and there may be others.
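
As a starting point for a few of the index items above, here is a minimal sketch that uses the sys.dm_db_index_physical_stats DMV to find fragmented indexes. It assumes you run it in the context of the database you are investigating, and the 30 percent threshold is just a common rule of thumb:

    -- Find indexes with noticeable fragmentation in the current database.
    -- LIMITED mode reads only the upper index levels, so it is relatively cheap.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
    ORDER BY ips.avg_fragmentation_in_percent DESC;

Rebuilding or reorganizing what this reports, and keeping statistics up to date, addresses the fragmentation and stale-statistics items directly.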

When I talk to new developers who have this problem, I usually find that it comes down to one of two issues. Both of them are fixed by following these two rules.

First, don't retrieve data you don't need. For example, if you are paging, don't return 100 rows and then work out on the client which ones belong to the current page. Have a stored procedure figure that out and return only the 10 rows you need.
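
For illustration, a minimal sketch of that kind of server-side paging, assuming a hypothetical dbo.Orders table and SQL Server 2012 or later for OFFSET/FETCH:

    -- Return exactly one page of rows; the rest never leave the server.
    CREATE PROCEDURE dbo.GetOrdersPage
        @PageNumber INT,
        @PageSize   INT = 10
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT OrderId, CustomerId, OrderDate
        FROM dbo.Orders
        ORDER BY OrderDate DESC  -- paging needs a deterministic ORDER BY
        OFFSET (@PageNumber - 1) * @PageSize ROWS
        FETCH NEXT @PageSize ROWS ONLY;
    END;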

Second, nothing is faster than work you don't do. For example, I worked on a system where the full set of roles and rights for a user was fetched with every page requested; for some users, this was over 100 rows. Just persisting that to session state on the first request, and reusing it from there for subsequent requests, took a significant load off the database.

If you are new to databases and have access to the Database Engine Tuning Advisor (DTA), you can tune your database heuristically.

Basically, you capture the SQL queries being run against your database with SQL Profiler and then feed that trace to the DTA. The DTA effectively runs the queries (without changing your data) and works out what is missing from your database (views, indexes, partitions, statistics, etc.) that would make the queries perform better.
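
For reference, the advisor can also be driven from the command line via the dta utility. A minimal sketch, with the server, database, and file names purely illustrative (check your version's documentation for the exact switches):

    REM Tune the queries in workload.sql against MyDb on MYSERVER,
    REM using Windows authentication (-E), under the session name TuneRun1
    dta -S MYSERVER -E -D MyDb -if workload.sql -s TuneRun1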

The DTA can then apply its recommendations for you and monitor them going forward. I'm not saying you should assume the DTA is always right, or apply anything without understanding it, but I've found it is definitely a good way to see what your queries are doing, how long they take, and how you might index the database accordingly.

PS: All that said, it is much better to invest in a good database administrator at the start of a project, so that you have good structures and indexing to begin with. But that is not the position you are in right now...

I'd suggest you get a good performance tuning book for your database (this is very specific to the particular database product). This is an extremely complex topic, and it cannot really be answered other than in general terms on the internet.

For example, Dave Markle tells you above that inefficient queries can cause the problem, and there are many ways to write inefficient queries and many different ways to fix them.

This is a very broad question, and there is already a ton of answers here. However, I would like to add one important factor: page splits. The problem is that there are good splits and bad splits. There are good articles explaining how to use the transaction_log extended event to detect bad/nasty page splits.
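
As a hedged illustration of the commonly published pattern (operation code 11 corresponds to LOP_DELETE_SPLIT, the "bad" mid-page split; verify the code against your version before relying on it), an Extended Events session along these lines counts bad splits per allocation unit:

    -- Bucket "bad" page splits by allocation unit
    CREATE EVENT SESSION [TrackBadPageSplits] ON SERVER
    ADD EVENT sqlserver.transaction_log (
        WHERE operation = 11  -- LOP_DELETE_SPLIT
    )
    ADD TARGET package0.histogram (
        SET filtering_event_name = 'sqlserver.transaction_log',
            source = 'alloc_unit_id',
            source_type = 0  -- 0 = event column rather than an action
    );

    ALTER EVENT SESSION [TrackBadPageSplits] ON SERVER STATE = START;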

You mentioned:

I am trying to optimize it as well as add some indexes

But sometimes deleting unused non-clustered indexes can also help improve performance, since it reduces transaction log volume; a sketch for finding candidate indexes follows below. Read Top Causes of Log Performance Issues.
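
As a hedged sketch for finding candidates (the counters in sys.dm_db_index_usage_stats reset on restart, so never drop an index based on these numbers alone):

    -- Nonclustered indexes in the current database with no recorded reads
    -- since the last restart; they still cost writes and log volume.
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name AS index_name,
           ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) AS reads,
           ISNULL(us.user_updates, 0) AS writes
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS us
        ON us.object_id = i.object_id
       AND us.index_id = i.index_id
       AND us.database_id = DB_ID()
    WHERE i.type_desc = 'NONCLUSTERED'
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
      AND ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0
    ORDER BY writes DESC;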

Wait statistics, or please tell me where it hurts gives an idea of how to use wait statistics for performance analysis.
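
A minimal sketch of the idea (the exclusion list here is deliberately short; real diagnostic scripts filter out many more benign wait types):

    -- Top waits accumulated since the last restart
    SELECT TOP (10)
           wait_type,
           wait_time_ms / 1000.0 AS wait_time_s,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                            'BROKER_TASK_STOP', 'XE_TIMER_EVENT')
    ORDER BY wait_time_ms DESC;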

For fresh ideas on performance, take a look at Performance Recommendations - sqlmag.com

  • Place tables involved in joins on different disks (using filegroups), so disk I/O can run in parallel; a sketch follows after this list.
  • Avoid joining on columns that have few unique values.
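
A minimal sketch of the filegroup idea mentioned above (the database name, path, and table are illustrative, with E: standing in for a separate physical disk):

    -- Add a filegroup backed by a file on a second disk
    ALTER DATABASE MyDb ADD FILEGROUP FG_Disk2;
    ALTER DATABASE MyDb ADD FILE (
        NAME = 'MyDb_Disk2',
        FILENAME = 'E:\SQLData\MyDb_Disk2.ndf',
        SIZE = 512MB
    ) TO FILEGROUP FG_Disk2;

    -- Create one of the frequently joined tables on that filegroup
    CREATE TABLE dbo.OrderLines (
        OrderLineId INT IDENTITY(1,1) PRIMARY KEY,
        OrderId     INT NOT NULL,
        Quantity    INT NOT NULL
    ) ON FG_Disk2;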

To understand JOINs in depth, read Advanced JOIN Techniques.
