Our SQL Server 2008 R2 database has a COUNTRIES reference table. Its primary key is an nvarchar column:
create table COUNTRIES( COUNTRY_ID nvarchar(50) PRIMARY KEY, ... other columns )
The primary key contains values such as "FR", "GER", "US", "UK", etc. This table contains at most 20 rows.
We also have a SALES table containing sales data:
create table SALES( ID int PRIMARY KEY, COUNTRY_ID nvarchar(50), PRODUCT_ID int, DATE datetime, UNITS decimal(18,2), ... other columns )
This sales table contains a column named COUNTRY_ID, also of type nvarchar (and not part of the primary key). This table is much larger, containing about 20 million rows.
In our application, almost every query against the SALES table filters by COUNTRY_ID. Even so, most aggregation queries take too long to complete (even with the appropriate indexes).
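To illustrate the kind of query we run, here is a hedged sketch of the aggregation pattern and a supporting index (the exact query and the index name IX_SALES_COUNTRY are illustrative, not our production code):

```sql
-- Typical aggregation filtered by country; N'FR' is one of the key values.
SELECT PRODUCT_ID, SUM(UNITS) AS TOTAL_UNITS
FROM SALES
WHERE COUNTRY_ID = N'FR'
GROUP BY PRODUCT_ID;

-- Supporting index: the filter column first, with the aggregated columns
-- included so the query can be answered from the index alone.
CREATE INDEX IX_SALES_COUNTRY
    ON SALES (COUNTRY_ID)
    INCLUDE (PRODUCT_ID, UNITS);
```

Even with such a covering index, every key comparison is an nvarchar(50) comparison, which is what prompted the question below.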
We are working on improving query performance in the SALES table. My question is:
Should I change the type of COUNTRY_ID from nvarchar(50) to int? If the COUNTRY_ID column is converted to int in both tables, can I expect better performance when joining the two tables?
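To make the question concrete, the converted schema would look roughly like this (a sketch only; the surrogate-key column names COUNTRY_KEY and COUNTRY_CODE are my own, not existing columns):

```sql
CREATE TABLE COUNTRIES (
    COUNTRY_KEY  int PRIMARY KEY,           -- new 4-byte surrogate key
    COUNTRY_CODE nvarchar(50) NOT NULL,     -- former 'FR', 'GER', ... values
    -- ... other columns
);

CREATE TABLE SALES (
    ID          int PRIMARY KEY,
    COUNTRY_KEY int NOT NULL
        REFERENCES COUNTRIES (COUNTRY_KEY), -- join on the int key
    PRODUCT_ID  int,
    DATE        datetime,
    UNITS       decimal(18,2),
    -- ... other columns
);
```

An int key is 4 bytes, whereas an nvarchar(50) value can occupy up to 100 bytes, so index entries on the key become smaller and key comparisons during the join become cheaper.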