Performance-wise, the big gain from using separate files / filegroups is that they let you spread your data across multiple physical disks. With several drives, the server can service multiple I/O requests at the same time (in parallel, which is usually faster than serially). All other things being equal, this improves throughput, but how much depends on your particular data set and the queries you run against it.
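As a rough illustration, here is a minimal sketch of putting a table on its own filegroup backed by a file on a second disk (the database name, file path, and table are just placeholders, not your actual schema):

    -- Add a filegroup and back it with a file on another physical disk.
    ALTER DATABASE MyDb ADD FILEGROUP FG_Disk2;

    ALTER DATABASE MyDb ADD FILE
    (
        NAME = MyDb_Disk2,
        FILENAME = 'E:\SqlData\MyDb_Disk2.ndf',
        SIZE = 512MB,
        FILEGROWTH = 256MB
    ) TO FILEGROUP FG_Disk2;

    -- New tables (or indexes) created on that filegroup send their I/O to the second disk.
    CREATE TABLE dbo.BigResults
    (
        Id INT IDENTITY PRIMARY KEY,
        Payload VARCHAR(255)
    ) ON FG_Disk2;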
From your description, the slow operations you are worried about are creating tables and checking whether tables exist. If you create 100 tables per run, then after 1000 runs you have 100,000 tables. I don't have much experience with that many tables in a single database, but you may be running into limits in the system tables that track the database schema. In that case you might see some benefit from spreading your tables across multiple databases (those databases can still live on the same SQL Server instance).
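If part of the cost is the existence check itself, one lightweight pattern is to probe the catalog once before creating the table (the table name here is just a placeholder):

    -- Check the catalog instead of attempting the CREATE and catching an error.
    IF OBJECT_ID(N'dbo.Run0001_Results', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.Run0001_Results
        (
            Id    INT IDENTITY PRIMARY KEY,
            Value FLOAT NOT NULL
        );
    END;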
In general, SQL Profiler is the best starting point for finding slow queries. It has data columns showing the CPU and IO cost of each SQL batch, which should point out the worst offenders. Once you have identified the problem queries, I would use Query Analyzer to generate query plans for each of them and see whether you can tell what makes them slow: open a query window, paste in the query, and press Ctrl + L. A full discussion of what can make a query slow would fill a book, but the main things to look for are table scans (very slow on large tables) and inefficient joins.
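If you would rather capture the plan as text instead of the graphical view, SET SHOWPLAN_TEXT makes SQL Server return the estimated plan instead of executing the query (the tables in this example are made up; substitute one of your slow queries):

    SET SHOWPLAN_TEXT ON;
    GO

    -- The query is not executed; its estimated plan is returned instead.
    -- Look for "Table Scan" operators on large tables.
    SELECT r.Id, r.Value
    FROM dbo.Run0001_Results AS r
    JOIN dbo.Run0001_Inputs AS i ON i.Id = r.Id;
    GO

    SET SHOWPLAN_TEXT OFF;
    GO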
In the end, you may be able to improve things just by rewriting your queries, or you may have to make broader changes to the table layout. For example, maybe there is a way to create only one or a few tables at a time rather than 1000. More details about your specific setup would help us give a more specific answer.
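One common way to do that, just as a sketch since I don't know your schema, is to replace the per-run tables with a single table keyed by a run identifier, so each start only inserts rows and runs no DDL at all:

    -- One table for all runs instead of one table per run.
    -- (RunId and the other columns are placeholders for whatever you store today.)
    CREATE TABLE dbo.RunResults
    (
        RunId INT   NOT NULL,
        RowId INT   NOT NULL,
        Value FLOAT NOT NULL,
        CONSTRAINT PK_RunResults PRIMARY KEY (RunId, RowId)
    );

    -- Each run just inserts rows tagged with its RunId.
    INSERT INTO dbo.RunResults (RunId, RowId, Value)
    VALUES (1, 1, 3.14);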
I also recommend this site for many tips on how to make things faster:
http://www.sql-server-performance.com/
Charlie