We have a legacy database running on SQL Server (2005 and 2008).
All primary keys in tables are UniqueIdentifiers.
The tables currently have no clustered index, and we are seeing performance problems on tables with only 750,000 records. This is the first database I have worked with where uniqueidentifiers are the only primary keys, and I have never seen SQL Server return data so slowly.
I don't want to create a clustered index on a uniqueidentifier, because GUIDs are not sequential and therefore slow the application down on data insertion.
We cannot remove the uniqueidentifier, as it is used for identity management by the remote site.
I am thinking of adding an integer identity column to the tables, creating the clustered index on that column, and keeping the existing uniqueidentifier column.
i.e.

- int identity: the clustered index key, to maintain insert speed
- uniqueidentifier: remains the primary key, so that the application continues to work as expected
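As a minimal T-SQL sketch of that layout (the table and column names here are invented for illustration):

```sql
-- Hypothetical table showing the proposed structure:
-- the int identity column carries the clustered index, so new rows
-- are appended sequentially; the existing uniqueidentifier stays as
-- the (now non-clustered) primary key the application relies on.
CREATE TABLE dbo.Orders
(
    OrderSeq INT IDENTITY(1,1) NOT NULL,
    OrderId  UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
        CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED,
    -- ... remaining columns ...
    CONSTRAINT UQ_Orders_OrderSeq UNIQUE CLUSTERED (OrderSeq)
);
```

Existing foreign keys and application code keep referencing OrderId unchanged; only the physical row order is driven by OrderSeq.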
The goal is to improve the performance of queries and joins against these tables.
Q1: Will this improve query performance in db or slow it down?
Q2: Is there an alternative to this that I have not listed?
Thanks, Pete
Edit: The performance problems are with plain data retrieval via SELECT statements, especially when several of the more "transactional / changing" tables are joined together.
Edit 2: Joins between tables are usually made between primary keys and foreign keys; foreign key columns are included in non-clustered indexes to provide better covering indexes.
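As a sketch of what those non-clustered indexes look like (again with hypothetical names, assuming an Orders table with a CustomerId foreign key):

```sql
-- Hypothetical covering index: the foreign key column is the index key,
-- and the frequently selected columns are INCLUDEd so the query can be
-- answered from the index alone, without touching the base table.
-- INCLUDE requires SQL Server 2005 or later, which matches our versions.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderId, OrderDate, Status);
```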
There are no other columns in the tables that would make a good clustered index key.
I am leaning toward adding an identity column to each of the heavily loaded tables and then including the current GUID PK column in the clustered index to get the best query performance.
Edit 3: I would estimate that 80% of queries filter only on primary and foreign keys via the data access layer. As a rule, our data model uses lazily loaded objects that issue a query when accessed; these queries filter the PK column on the object's identifier. We also have a large number of user-driven data exclusion/inclusion queries that filter on foreign key columns, along the lines of "for criteria of type X, exclude the following identifiers". The remaining 20% filter on enum (int) columns or date ranges; the system performs very few text searches.
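For illustration, the two dominant query shapes described above look roughly like this (table and column names are invented):

```sql
-- Lazy-load by primary key (roughly 80% of traffic):
SELECT *
FROM dbo.Orders
WHERE OrderId = @OrderId;

-- User-driven exclusion filter on a foreign key column:
SELECT *
FROM dbo.Orders
WHERE CustomerId = @CustomerId
  AND OrderId NOT IN (SELECT OrderId FROM dbo.ExcludedOrders);
```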
Where possible, I have already added covering indexes for the most demanding queries, but so far I am still disappointed with performance. As bluefooted says, the data is currently stored as a heap.