I have a very large table in SQL Server (2008 R2 Developer Edition) that has some performance issues.
I was wondering whether another DBMS would handle large tables better. I'm mainly considering the following systems: SQL Server 2008, MySQL, and PostgreSQL 9.0.
Or, as the question referenced above suggests, are table size and performance mainly a matter of indexes and caching?
Also, will normalization improve performance or hurt it?
Edit:
One of the comments below pointed out that my question was vague. I have over 20 million rows (20 years of stock data and 2 years of options data), and I'm trying to figure out how to improve performance by an order of magnitude. I only care about read/compute performance; I don't care about write performance. The only writes happen during data updates, and those are done via BulkCopy.
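Since writes only happen during bulk updates, one common pattern is to disable nonclustered indexes before the load and rebuild them afterward, so the load doesn't pay per-row index maintenance. A sketch in T-SQL, with a hypothetical table and index name (not from my actual schema):

```
-- Hypothetical names: dbo.StockPrices, IX_StockPrices_Symbol_Date
ALTER INDEX IX_StockPrices_Symbol_Date ON dbo.StockPrices DISABLE;

-- ... run the BulkCopy load here ...

-- REBUILD re-enables the index and refreshes its statistics
ALTER INDEX IX_StockPrices_Symbol_Date ON dbo.StockPrices REBUILD;
```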
I already have some indexes, but I'm hoping I've done something wrong there, since that would give me an easy way to speed things up. I also need to review my queries.
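For read-only queries over time-series stock data, a covering index that matches the common access pattern (symbol plus a date range) can avoid key lookups entirely. A sketch, assuming hypothetical column names rather than my real schema:

```
-- Assumes a table dbo.StockPrices(Symbol, TradeDate, [Open], [High], [Low], [Close], Volume)
CREATE NONCLUSTERED INDEX IX_StockPrices_Symbol_Date
ON dbo.StockPrices (Symbol, TradeDate)
INCLUDE ([Close], Volume);  -- covers queries that read only these columns

-- A range query like this can then be answered from the index alone:
-- SELECT TradeDate, [Close]
-- FROM dbo.StockPrices
-- WHERE Symbol = 'MSFT'
--   AND TradeDate >= '2010-01-01' AND TradeDate < '2011-01-01';
```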
The comments and answers so far have already helped me understand how to start profiling my database. I am a programmer, not a database administrator (so the recommendation of Marco's book is perfect). I don't have much experience with databases, and I've never done database profiling before. I'll try these suggestions and report back if needed. Thanks!
Johnb