Best way to improve performance (and add some kind of failover)

We have an application where IIS and SQL Server run on the same machine: a standard Windows Server 2003 box with 4 GB of RAM, running in a virtual machine.

The number of users is constantly growing. Users can also run huge statistics reports, which severely affect performance for everyone else. So we need to improve performance somehow.

I was thinking about splitting IIS and SQL onto two separate machines, each running 64-bit Windows Server 2008 with at least 6 GB of RAM, but we should also have some kind of failover solution.

Can you recommend some scenarios that address both the performance problem and failover?

thanks

PS:

For information only: we currently use InProc session state management in IIS, but I think it would be better to switch to SQL Server session state.

EDIT

I have expanded the question to cover failover as well. Our client does not want to spend too much money on servers and SQL licenses. Would it be "acceptable" to just have replication to a second SQL Server and use that as the failover? Do you know of any good "cheap" solutions?

The application is intended for internal use only, but now more and more departments are involved in this project.

+7
windows sql iis
6 answers

You now have a 32-bit OS in the VM, I assume. Since Standard Edition does not allow AWE and you have two servers (IIS and SQL) on the box, SQL Server will load at most about 1.8 GB and leave plenty of RAM for IIS and the OS. But as soon as you switch to a 64-bit OS everything changes, because SQL Server will take all the RAM for its buffer pool (~5 GB if 6 GB is available) and will only release it back when the OS signals memory pressure. You can control this behavior by tuning SQL Server's memory settings (a sketch of scripting this follows below). By splitting IIS and SQL into separate virtual machines, you leave all the memory in the SQL VM to its buffer pool, which is good. Ideally, you should have enough RAM for SQL to load the entire database into memory (including tempdb) and only touch the disk for log writes and when it has to checkpoint the database. In other words, more RAM means faster SQL. This is by far the most important hardware resource SQL Server needs for performance, and it will give the biggest bang for the buck.
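As a minimal sketch (not part of the original answer), here is one way the memory cap could be scripted with sp_configure; the server name SQLVM01 and the 5120 MB value are placeholder assumptions, and the same setting can of course be changed from Management Studio instead.

// Minimal sketch: cap SQL Server's buffer pool so the OS (and IIS, if co-located)
// keeps some headroom. The server name and the 5120 MB value are assumptions.
using System.Data.SqlClient;

class CapSqlServerMemory
{
    static void Main()
    {
        const string connStr =
            "Data Source=SQLVM01;Initial Catalog=master;Integrated Security=True";
        const string sql =
            "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
            "EXEC sp_configure 'max server memory (MB)', 5120; RECONFIGURE;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery(); // leaves roughly 1 GB for the OS on a 6 GB VM
        }
    }
}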

Now back to the broader question of failover. SQL Server high-availability solutions fall into two categories: automatic and manual failover. For automatic failover you really only have a few options:

  • Clustering. Traditionally this is quite expensive to implement because of the cost of cluster-capable hardware, but with virtual machines it is a different story. Standard Edition supports two-node clusters. Clustering is a bit complicated to deploy, but it is quite simple to operate and requires no application changes. With clustering, the unit of failover is the entire SQL Server instance (i.e. every database, including master/tempdb/model/msdb, all logins, all SQL Agent jobs, and so on). A cluster is not a performance solution, since the standby server just sits idle waiting for the primary to fail. You can make use of the standby VM by deploying a so-called "active-active" cluster: you deploy two clusters, one active on VM1 with its standby on VM2, the other active on VM2 with its standby on VM1. In a failover, one of the VMs has to carry the load of both instances, which is why active-active deployments are sometimes frowned upon. Given that you plan to deploy on virtual machines rather than on (expensive) metal, I would advise against it, since there is no huge hardware cost to amortize.
  • Mirroring. This maintains a hot standby mirror of your database (not of the whole instance). Mirroring is preferable to clustering because of its lower deployment cost (no special hardware), faster failover (seconds, as opposed to minutes for clustering) and geo-distribution capabilities (mirroring supports nodes on separate continents; clustering only supports short distances, on the order of a hundred meters, between nodes). But since the unit of failover is a database, mirroring does not offer the same ease of operation as clustering. Many of the resources the application needs live outside the database: logins, Agent jobs, maintenance plans, Database Mail settings, and so on. Since only the database fails over, the failover has to be carefully planned so the application keeps working afterwards (for example, logins have to be transferred). The application must also be mirroring-aware so that it connects correctly (a connection-string sketch follows after this list). With Standard Edition you can only deploy mirroring in high-safety (synchronous) mode.
  • Hardware (SAN) mirroring. I will not go into details on this one; it requires specialized SAN equipment capable of disk-level mirroring.
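To illustrate the "mirroring-aware application" point, here is a minimal sketch of an ADO.NET connection using the Failover Partner connection-string keyword; the names AppServer, MirrorServer and AppDb are placeholder assumptions.

// Minimal sketch of a mirroring-aware connection. "AppServer", "MirrorServer"
// and "AppDb" are placeholders; the Failover Partner keyword tells SqlClient
// where to reconnect if the principal database fails over to the mirror.
using System.Data.SqlClient;

class MirroringAwareConnection
{
    static void Main()
    {
        const string connStr =
            "Data Source=AppServer;Failover Partner=MirrorServer;" +
            "Initial Catalog=AppDb;Integrated Security=True";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open(); // after a failover, new connections go to the mirror
        }
    }
}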

If you consider manual failover solutions, there are a couple more alternatives:

  • Log shipping. Log shipping is basically mirroring done out of band. Instead of streaming log records in real time over a dedicated TCP connection, the log is transferred via file copy operations. There are only a few reasons to choose log shipping over mirroring: the standby database can be queried for reporting, the standby can sit on a site with only sporadic connectivity, and the standby can be hosted on a very low-powered machine. (A minimal backup/restore sketch follows after this list.)

  • Replication. This is not really a high-availability solution. Replication is a solution for providing copies of data and/or for exchanging data updates between sites. While it can be used to cobble together a kind of makeshift high-availability solution, it has many problems in that role and basically no advantages. Compared with log shipping and mirroring it has several additional disadvantages, because the unit of failover is not even a database, it is just slices of data within the database (some of the tables). Metadata such as users and security permissions does not fail over, schema changes have to be done in a replication-aware way, and some changes cannot be replicated at all. By contrast, both mirroring and log shipping provide a standby identical to the production database, which automatically covers any change made in the database.
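To make the log shipping mechanics concrete, here is a rough sketch of one shipping cycle reduced to its two T-SQL steps and driven from C#. The server names, share path and database name are assumptions; a real deployment would use the built-in log shipping jobs rather than hand-rolled code.

// Rough sketch of one log shipping cycle: back up the log on the primary,
// copy the file to a share, restore it on the standby WITH NORECOVERY so the
// next log backup can still be applied. Names and paths are assumptions.
using System.Data.SqlClient;

class LogShippingCycle
{
    static void Exec(string connStr, string sql)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.CommandTimeout = 0; // backups and restores can take a while
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    static void Main()
    {
        // 1. On the primary: dump the transaction log to a shared folder.
        Exec("Data Source=PrimarySql;Initial Catalog=master;Integrated Security=True",
             @"BACKUP LOG AppDb TO DISK = N'\\fileshare\logship\AppDb_0001.trn'");

        // 2. On the standby: apply the log, leaving the database restoring
        //    so further log backups can follow.
        Exec("Data Source=StandbySql;Initial Catalog=master;Integrated Security=True",
             @"RESTORE LOG AppDb FROM DISK = N'\\fileshare\logship\AppDb_0001.trn' WITH NORECOVERY");
    }
}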

You mention that you are concerned about license costs: you actually do not need a license for the passive server with any of these technologies, except replication. A standby server only requires a license if it becomes active and runs the database for more than 30 days.

Given that you plan to deploy on virtual machines, my choice would be clustering. If you were going to deploy on metal, I would recommend mirroring instead, because of the cost of clustering hardware.

+2

SQL Server always works best when it is the ONLY thing running on the machine. You will get a quick, easy and decent gain from that alone. It likes to control everything and is always happier when it can :)

+1

It sounds like you are really asking whether the database should be on a separate machine. That may not improve performance (it will actually drop slightly as latency increases), but it will improve scalability (which I suspect is what you really need).

Performance <> scalability.

However, other factors come into play: if you do not have enough RAM, performance may well suffer with the database on the same server, because SQL Server likes to use RAM.

That is why, for products like TFS that use SQL Server, Microsoft recommends installing everything on one machine for a small number of users, but putting the database on a separate server for a larger number of users.

You can read about the deployment options for TFS here.

Switching to SQL Server session state management will not increase performance; more likely it will reduce it slightly. But you gain other benefits, such as reliability.

It sounds like you really need to find out where the performance bottleneck is. In my experience it is usually in the database.

Have you looked into standard ASP.NET optimization techniques such as caching? Microsoft provides guidance on tuning the application, which may also be useful to you.
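As a hedged illustration of the caching suggestion, here is a minimal sketch that keeps an expensive statistics result in the ASP.NET cache for a few minutes so heavy report pages stop hitting the database on every request; the GetStatisticsFromDatabase stub and the five-minute window are assumptions, not part of the original answer.

// Minimal sketch: cache the result of an expensive statistics query.
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class StatisticsCache
{
    public static DataTable GetStatistics()
    {
        var cache = HttpRuntime.Cache;
        var stats = cache["dept-statistics"] as DataTable;
        if (stats == null)
        {
            stats = GetStatisticsFromDatabase();        // the expensive query
            cache.Insert("dept-statistics", stats, null,
                         DateTime.Now.AddMinutes(5),    // absolute expiration
                         Cache.NoSlidingExpiration);
        }
        return stats;
    }

    // Placeholder for the real (expensive) statistics query.
    static DataTable GetStatisticsFromDatabase()
    {
        return new DataTable("statistics");
    }
}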

Since you use SQL Server in a web application scenario, and if you are on SQL Server 2005 or above, you can read about Snapshot Isolation; it can sometimes help with performance problems in web applications.
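A minimal sketch of enabling snapshot-based isolation follows; the database name AppDb and the connection string are placeholder assumptions, and this is only an illustration of the setting the answer points at, not a recommendation from the original author.

// Minimal sketch: enable snapshot-based isolation so readers no longer block
// writers. The READ_COMMITTED_SNAPSHOT change needs exclusive access to the
// database (or WITH ROLLBACK IMMEDIATE) to complete.
using System.Data.SqlClient;

class EnableSnapshotIsolation
{
    static void Main()
    {
        const string connStr =
            "Data Source=SQLVM01;Initial Catalog=master;Integrated Security=True";
        const string sql =
            "ALTER DATABASE AppDb SET ALLOW_SNAPSHOT_ISOLATION ON; " +
            "ALTER DATABASE AppDb SET READ_COMMITTED_SNAPSHOT ON;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}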

+1

You should refer to these two CodeProject articles on tuning ASP.NET performance:

1. http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx
2. http://www.codeproject.com/KB/aspnet/aspnetPerformance.aspx

I have personally implemented these techniques in my ASP.NET applications and got over a 30% performance improvement.

In addition, you can refer to this article on achieving 99.99% uptime for your application:

3. http://www.codeproject.com/KB/aspnet/ProdArch.aspx

+1

Separating the tiers can help. The way you tune a machine for a database is often quite specific, so that is a reasonable first effort.

However, if you have two kinds of user activity, one of which is very heavy, you always run the risk of a few heavy users hurting the rest of the population.

Two things you can consider:

  • Can you introduce a data warehouse? Have a second database that is fed from the first, where the heavy users do their work. Of course their statistics will be slightly stale, but conceptually that is always true: by the time they look at the answers, the world has moved on.
  • Limit the number of statistics requests you allow at any given time. Perhaps have them submitted as "jobs" into a queue and run them at a low priority (a throttling sketch follows below).
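One hedged way to implement the job-queue idea is to gate the heavy statistics work behind a semaphore so only a couple of reports run at once and the rest wait in line. The limit of two and the runStatisticsQuery delegate are assumptions for illustration.

// Sketch: throttle how many heavy statistics requests run at the same time.
using System;
using System.Threading;

public static class StatisticsThrottle
{
    // Allow at most two statistics queries to execute concurrently.
    private static readonly Semaphore Gate = new Semaphore(2, 2);

    public static T Run<T>(Func<T> runStatisticsQuery)
    {
        Gate.WaitOne();   // queue up behind the reports already running
        try
        {
            return runStatisticsQuery();
        }
        finally
        {
            Gate.Release();
        }
    }
}

A report page would then call StatisticsThrottle.Run(() => LoadDepartmentStatistics()) instead of querying the database directly, where LoadDepartmentStatistics stands for whatever method produces the report.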
0

Naturally, separating IIS and SQL Server is the first step. SQL Server really wants to have a whole machine to itself.

Secondly, it is important to profile the application as it runs. Never try to optimize your application's performance without real usage data, because you will probably just spend time optimizing code that is rarely called. One technique I have used with success in the past is to create a System.Diagnostics.Stopwatch in Application_BeginRequest in global.asax and store it in a context item:

 var sw = new Stopwatch();
 sw.Start();
 HttpContext.Current.Items["stopwatch"] = sw;

In Application_EndRequest, you retrieve the stopwatch and stop it:

 var sw = (Stopwatch)HttpContext.Current.Items["stopwatch"];
 sw.Stop();
 TimeSpan ts = sw.Elapsed;

Then write to a log table how long the request took to process. Also log the URL (with or without query-string parameters) and anything else that helps you analyze performance.
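A minimal sketch of that logging step follows, assuming a hypothetical RequestLog table with Url, DurationMs, UserName, ClientIp and LoggedAt columns; adjust it to whatever schema you actually create.

// Sketch: write the measured request time (plus URL, user and IP) to a log table.
using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Web;

public static class RequestLogger
{
    const string ConnStr =
        "Data Source=SQLVM01;Initial Catalog=AppDb;Integrated Security=True";

    public static void Log(HttpContext ctx, Stopwatch sw)
    {
        const string sql =
            "INSERT INTO RequestLog (Url, DurationMs, UserName, ClientIp, LoggedAt) " +
            "VALUES (@url, @ms, @user, @ip, GETUTCDATE())";

        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@url", ctx.Request.RawUrl);
            cmd.Parameters.AddWithValue("@ms", sw.ElapsedMilliseconds);
            cmd.Parameters.AddWithValue("@user",
                ctx.User != null ? ctx.User.Identity.Name : "");
            cmd.Parameters.AddWithValue("@ip", ctx.Request.UserHostAddress);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}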

You can then analyze your application and find which operations take the longest, which are called most often, and so on. If one page is requested a lot and usually takes a long time to complete, that page should be the target of your optimization, using whatever tools you have for the job, both .NET and SQL profilers.

Other things I usually log are the client IP address and the user ID for logged-in users. These also give me an invaluable debugging tool when errors occur.

The reason for putting this in a table rather than writing it to a log file is that you can use SQL to filter, group, calculate average times, and so on.

0
