Is it a good idea to index every column if users can filter on any column?

In my application, users can create their own tables containing up to 20 columns of three types: Text, Numeric, and Date. I create a SQL table from their schema using nvarchar(430) for text columns, decimal(38,6) for numeric columns, and datetime for date columns, along with an identity column as the row identifier.
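
A hypothetical example of the kind of table this produces (the table and column names are invented for illustration):

```sql
-- Sketch of a generated user table: identity key plus up to 20 typed columns
CREATE TABLE dbo.UserTable_42
(
    Id    int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Col1  nvarchar(430) NULL,  -- Text column
    Col2  decimal(38,6) NULL,  -- Numeric column
    Col3  datetime      NULL   -- Date column
    -- ... up to 20 user-defined columns in total
);
```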

Many such tables will be created by different users, and the data is frequently refreshed by users uploading new CSV files. To get maximum load performance, we truncate the table to discard the existing data and then load the new data with batched BULK INSERT.
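
The reload step looks roughly like this (the table name, file path, and options are placeholders; if the CSV does not include the identity column, a format file would be needed to skip it):

```sql
-- Wipe the existing rows, then bulk-load the new CSV in batches
TRUNCATE TABLE dbo.UserTable_42;

BULK INSERT dbo.UserTable_42
FROM 'C:\uploads\user42.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2,       -- skip the header row
    BATCHSIZE       = 100000,  -- commit in batches
    TABLOCK                    -- table lock enables minimal logging
);
```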

Users can then select rows using a filter they build themselves, which can reference any combination of columns. My problem is that these selections perform poorly on tables with many rows. To combat this, I was thinking of adding indexes, but since we do not know in advance which columns will appear in the WHERE clause, we would need to index every column.
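
Indexing every column would amount to something like the following, run whenever a user adds a column (names are again placeholders):

```sql
-- One nonclustered index per user-defined column
CREATE NONCLUSTERED INDEX IX_UserTable_42_Col1 ON dbo.UserTable_42 (Col1);
CREATE NONCLUSTERED INDEX IX_UserTable_42_Col2 ON dbo.UserTable_42 (Col2);
CREATE NONCLUSTERED INDEX IX_UserTable_42_Col3 ON dbo.UserTable_42 (Col3);
-- ... repeated for each of the up-to-20 columns
```

An nvarchar(430) key is 860 bytes, which stays under SQL Server's 900-byte index key size limit, but maintaining 20 extra indexes will slow the truncate-and-reload path; disabling the nonclustered indexes before the BULK INSERT and rebuilding them afterwards is a common mitigation.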

For example, on a local SQL Server, a query against a table with a little more than a million rows and a WHERE clause on 6 of its columns takes about 8 seconds on the first run, then under a second on subsequent runs. With an index on each column, even the first run completes in under a second. The problem is worse on Azure SQL Database, where the same query takes a minute on the first run and does not improve on subsequent runs, but with the indexes it takes about 1 second.
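
The queries being measured have this general shape (all column names and predicates here are invented):

```sql
-- Ad-hoc filter touching 6 of the user-defined columns
SELECT *
FROM dbo.UserTable_42
WHERE Col1 = N'some text'
  AND Col2 > 100.5
  AND Col3 >= '2017-01-01'
  AND Col4 LIKE N'abc%'
  AND Col5 < 50
  AND Col6 = N'other text';
```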

So, would adding an index on each column as the user creates it be a suitable solution, or is there a better approach?

1 answer

For a genuine "filter on anything" requirement, this is often better handled outside of SQL by a dedicated search engine (e.g., Apache Lucene), which builds its own indexes specifically for this kind of "search everything" workload.

