Ahh ... performance tuning SQL Server et al., my favorite thing!
> Can anyone suggest any approaches we could try?
From the information you've provided, I would split the data vertically - that is, one database (server A) for the actual OLTP (CRUD transactions) and one for the KPIs (server B).
To move the data across I would use transactional replication - when it's running properly, latency will be sub-second. I can't think of a practical scenario where that isn't good enough; indeed, most reporting is done against close-of-business-yesterday data anyway, and "real-time" usually means the last 5 minutes.
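Once replication is running you can sanity-check that latency claim yourself with tracer tokens. A rough, untested sketch - the publication name (KPI_Pub) is made up - run in the publication database on the publisher:

    DECLARE @token_id int;

    -- Drop a tracer token into the log; it travels publisher -> distributor -> subscriber
    EXEC sys.sp_posttracertoken
        @publication     = N'KPI_Pub',
        @tracer_token_id = @token_id OUTPUT;

    -- Give it a few seconds, then read back the measured latencies (in seconds)
    WAITFOR DELAY '00:00:10';

    EXEC sys.sp_helptracertokenhistory
        @publication = N'KPI_Pub',
        @tracer_id   = @token_id;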
To manage the replication process I would start with a simple console application, expecting it to cover the requirements adequately. The console application should use the following namespaces (from memory; they should also be available for SQL 2012):
    using Microsoft.SqlServer.Management.Common;
    using Microsoft.SqlServer.Management.Smo;
    using Microsoft.SqlServer.Replication;
Using the console app you can manage the publications, subscriptions and any tracer tokens from a single interface. It will be a PAIN to configure (all those permissions, passwords and paths), but once it's running you can optimize the transactional database for data entry and the report server for ... reports.
For the replication topology I would effectively have one subscription per table for the large tables, and one subscription for everything else (lookup tables, views, sp's). I would replicate primary keys but not constraints, foreign-key references or triggers (relying on the source db for integrity). You also don't have to copy the indexes - you can configure/optimize them by hand for the report server.
You can also choose which columns of an article are relevant to the KPIs, i.e. there's no need to replicate text, varchar(max) columns, etc.
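If you prefer to script the articles in T-SQL rather than drive everything through RMO, the column filtering looks roughly like this. Treat it as a sketch - the publication (KPI_Pub), table (dbo.Orders) and columns are made-up names, and it assumes the database is already enabled for transactional publishing:

    -- Publication for the reporting/KPI subscriber
    EXEC sp_addpublication
        @publication = N'KPI_Pub',
        @sync_method = N'concurrent',
        @repl_freq   = N'continuous',
        @status      = N'active';

    -- Add a large table as an article with column (vertical) filtering switched on.
    -- @schema_option (not shown) controls which constraints/triggers/indexes get
    -- scripted to the subscriber - trim it so you can index the report server by hand.
    EXEC sp_addarticle
        @publication        = N'KPI_Pub',
        @article            = N'Orders',
        @source_owner       = N'dbo',
        @source_object      = N'Orders',
        @type               = N'logbased',
        @vertical_partition = N'true';   -- only explicitly added columns are replicated

    -- Replicate just the columns the KPIs need (the primary key is always included)
    EXEC sp_articlecolumn @publication = N'KPI_Pub', @article = N'Orders', @column = N'OrderDate',   @operation = N'add';
    EXEC sp_articlecolumn @publication = N'KPI_Pub', @article = N'Orders', @column = N'CustomerId',  @operation = N'add';
    EXEC sp_articlecolumn @publication = N'KPI_Pub', @article = N'Orders', @column = N'TotalAmount', @operation = N'add';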
There are also some RMO helper functions at the end of this answer to get you going.
> Or is there another method I should look at for performance wins?
In my humble experience there is ALWAYS something that can be done to improve performance. It comes down to the time → cost → benefit trade-off. Sometimes a small compromise on functionality will buy you a big performance gain.
The devil is in the details, but with that caveat ...
Other random thoughts
You have identified one of your infrastructure problems: mixing OLTP and BI/reporting. I don't know your level of experience or how bad your performance problems are, so while replication is definitely the right way to go, if you're in "fire fighting" mode you could try some of the following:
- Server-side KPI caching (5 min, 1 hr, 1 day?) in the db or in RAM.
- Using schema-bound views you can create indexed views (Standard and Enterprise editions). Depending on the kind of KPI this may even be all you need to do! See http://msdn.microsoft.com/en-us/library/ms191432.aspx to find out more. Essentially, if your KPIs are sum/group-by style, you should be in good shape (see the indexed-view sketch after this list).
- Pre-compute the daily KPIs in an overnight job. Then during the day you only have to add the current day's data (see the pre-aggregation sketch after this list).
- Sorting is expensive in KPIs because of the order by clauses. Make sure your clustered indexes are correct (remember: they don't have to be on the primary key). Try sorting on the client once you have the data (see the clustered-index sketch after this list).
- The size of the clustered index key: smaller is better. Start here if you're using a GUID.
- Split the data vertically - e.g. if you have a 200-column table but the KPIs only use 10 of those columns, put those 10 in another table and you get far more rows per I/O page read (this helps if disk is your bottleneck).
- Offer an "email me this report" feature - taking away the real-time requirement. You may be able to deliver a percentage of the reports overnight when things are quieter, leaving a lower volume of real-time reports during the day. Some customers may actually prefer this feature.
- Charge your customers for reports! "Just enter your credit card details here ..." is a sure-fire way to reduce the number of reports :)
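For the indexed-view bullet: a minimal sketch of the kind of sum/group KPI I mean (table and column names are made up). The unique clustered index is what materializes the view:

    -- Schema-bound view over the base table (two-part names are required)
    CREATE VIEW dbo.vw_DailySales
    WITH SCHEMABINDING
    AS
    SELECT
        OrderDate,
        COUNT_BIG(*)                AS OrderCount,   -- COUNT_BIG is mandatory with GROUP BY
        SUM(ISNULL(TotalAmount, 0)) AS TotalSales    -- SUM over a nullable column isn't allowed, hence ISNULL
    FROM dbo.Orders
    GROUP BY OrderDate;
    GO

    -- The unique clustered index turns it into an indexed (materialized) view
    CREATE UNIQUE CLUSTERED INDEX IX_vw_DailySales
        ON dbo.vw_DailySales (OrderDate);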
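For the overnight pre-calculation: something along these lines in a SQL Agent job, writing into a summary table (dbo.KpiDailySales is a made-up name), so the daytime query only has to union in the current day's rows:

    -- Nightly job: roll yesterday's rows into the summary table
    INSERT INTO dbo.KpiDailySales (SalesDate, OrderCount, TotalSales)
    SELECT CAST(o.OrderDate AS date), COUNT(*), SUM(o.TotalAmount)
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= DATEADD(day, -1, CAST(GETDATE() AS date))   -- yesterday 00:00
      AND o.OrderDate <  CAST(GETDATE() AS date)                     -- today 00:00
    GROUP BY CAST(o.OrderDate AS date);

    -- Daytime KPI: pre-computed history plus today's (small) delta
    SELECT SalesDate, OrderCount, TotalSales
    FROM dbo.KpiDailySales
    UNION ALL
    SELECT CAST(OrderDate AS date), COUNT(*), SUM(TotalAmount)
    FROM dbo.Orders
    WHERE OrderDate >= CAST(GETDATE() AS date)
    GROUP BY CAST(OrderDate AS date);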
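And for the clustered-index point, this is the shape I mean - keep the GUID as the primary key if you must, just don't cluster on it (made-up names again):

    CREATE TABLE dbo.Orders
    (
        OrderId     uniqueidentifier NOT NULL
            CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED,   -- PK, but not the clustered index
        OrderDate   datetime2(0)     NOT NULL,
        CustomerId  int              NOT NULL,
        TotalAmount decimal(18,2)    NOT NULL
    );

    -- Cluster on the narrow, ever-increasing column the reports range-scan and ORDER BY
    CREATE CLUSTERED INDEX CIX_Orders_OrderDate
        ON dbo.Orders (OrderDate);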
Some more information about your configuration would help - when you say huge, how huge? How big/what type of disks, what's the RAM spec, etc.? The reason I ask is ... you could spend the next 40 man-days tuning (at $500/day?) - that would buy you quite a bit of hardware! More RAM, more disks, faster disks - SSD for temp or index partitions. Put another way: you may be asking too much of the hardware (and your boss may be asking too much of you).
Finally, you describe an enterprise application; does that mean Enterprise SQL Server licences? If so you're in luck - you can split the schema into related groups of tables/partitions and delegate queries to the "correct" server. There are issues with this model though, namely joins, but it does give you an effective alternative.
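One way to do that delegation (my illustration, not the only option) is a distributed partitioned view over linked servers: each member table carries a CHECK constraint on the partitioning column, so the optimizer only touches the server that holds the relevant range. All names here are made up:

    -- On each server, the member table has a range constraint, e.g.
    --   ServerA: CHECK (OrderId <  5000000)
    --   ServerB: CHECK (OrderId >= 5000000)
    CREATE VIEW dbo.Orders_All
    AS
    SELECT OrderId, OrderDate, CustomerId, TotalAmount
    FROM ServerA.Sales.dbo.Orders
    UNION ALL
    SELECT OrderId, OrderDate, CustomerId, TotalAmount
    FROM ServerB.Sales.dbo.Orders;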
Replication code
I knew I had it somewhere. Below are some of the RMO helper functions that you might find useful when getting started with replication. It was live code at some point in the past, but probably longer ago than I'd like to think - please treat it as pseudocode.
(PS I'd be happy for you to contact me directly if you want.)
    public static class RMOHelper
    {
        // MyServer is my own wrapper type: it carries the database name and an
        // open ServerConnection (from Microsoft.SqlServer.Management.Common).
        public static void PreparePublicationDb(MyServer Src, MyServer Dist)
        {
            // Enable the source (OLTP) database for transactional publishing
            ReplicationDatabase publicationDb = new ReplicationDatabase(Src.Database, Src.ServerConnection);
            if (publicationDb.LoadProperties())
            {
                if (!publicationDb.EnabledTransPublishing)
                {
                    publicationDb.EnabledTransPublishing = true;
                }
            }
            else
            {
                throw new ApplicationException("Database " + Src.Database + " not found at the publisher.");
            }
        }

        // ... the remaining helpers (publication, article, subscription and tracer
        // token management) are trimmed from what I could dig up ...
    }