SQL Server stored procedure occasionally creating duplicate rows

I created a stored procedure to implement rate limiting on my API. It is called approximately 5-10k times per second, and every day I notice duplicate rows in the counter table.

It looks up the supplied API key, then checks the counter table for the combination of user ID and date to perform an "UPSERT": if it finds a row, it executes an UPDATE of [count] + 1, and if not, it INSERTs a new row.

There is no primary key in the counter table.

Here is the stored procedure:

    USE [omdb]
    GO
    /****** Object: StoredProcedure [dbo].[CheckKey] Script Date: 6/17/2017 10:39:37 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO

    ALTER PROCEDURE [dbo].[CheckKey] (
        @apikey AS VARCHAR(10)
    )
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @userID as int
        DECLARE @limit as int
        DECLARE @curCount as int
        DECLARE @curDate as Date = GETDATE()

        SELECT @userID = id, @limit = limit FROM [users] WHERE apiKey = @apikey

        IF @userID IS NULL
        BEGIN
            --Key not found
            SELECT 'False' as [Response], 'Invalid API key!' as [Reason]
        END
        ELSE
        BEGIN
            --Key found
            BEGIN TRANSACTION Upsert
                MERGE [counter] AS t
                USING (SELECT @userID AS ID) AS s
                    ON t.[ID] = s.[ID] AND t.[date] = @curDate
                WHEN MATCHED THEN
                    UPDATE SET t.[count] = t.[count] + 1
                WHEN NOT MATCHED THEN
                    INSERT ([ID], [date], [count]) VALUES (@userID, @curDate, 1);
            COMMIT TRANSACTION Upsert

            SELECT @curCount = [count] FROM [counter] WHERE ID = @userID AND [date] = @curDate

            IF @limit IS NOT NULL AND @curCount > @limit
            BEGIN
                SELECT 'False' as [Response], 'Request limit reached!' as [Reason]
            END
            ELSE
            BEGIN
                SELECT 'True' as [Response], NULL as [Reason]
            END
        END
    END

I also suspect that some deadlocks have been occurring since this SP was introduced.

The duplicates aren't actually breaking anything, but I am curious whether there is something fundamentally wrong with my code, or whether I should just put a constraint on the table to prevent them. Thanks!

Update 6/23/17: I scrapped the MERGE statement and tried using @@ROWCOUNT instead, but it also produced duplicate rows:

    BEGIN TRANSACTION Upsert
        UPDATE [counter]
        SET [count] = [count] + 1
        WHERE [ID] = @userID AND [date] = @curDate

        IF @@ROWCOUNT = 0 AND @@ERROR = 0
            INSERT INTO [counter] ([ID], [date], [count])
            VALUES (@userID, @curDate, 1)
    COMMIT TRANSACTION Upsert
Tags: sql-server, stored-procedures, counter, upsert
4 answers

A HOLDLOCK hint on the UPDATE statement will avoid the race condition. To prevent deadlocks, I suggest a clustered composite primary key (or unique index) on ID and date.
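
A minimal sketch of that constraint, assuming the ID and date columns match the types used in the question's procedure and are already NOT NULL (otherwise a unique clustered index achieves the same effect):

    -- assumes [ID] and [date] are NOT NULL; the constraint name is arbitrary
    ALTER TABLE [counter]
    ADD CONSTRAINT PK_counter PRIMARY KEY CLUSTERED ([ID], [date]);

With this key in place, the race can at worst raise a duplicate key error rather than silently create duplicated rows.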

The example below incorporates these changes and uses the SET <variable> = <column> = <expression> form of the SET clause, which avoids the need for a subsequent SELECT of the final counter value and thereby improves performance.

    ALTER PROCEDURE [dbo].[CheckKey]
        @apikey AS VARCHAR(10)
    AS
    SET NOCOUNT ON;
    --SET XACT_ABORT ON is a best practice for procs with explicit transactions
    SET XACT_ABORT ON;
    DECLARE
          @userID as int
        , @limit as int
        , @curCount as int
        , @curDate as Date = GETDATE();

    BEGIN TRY;

        SELECT
              @userID = id
            , @limit = limit
        FROM [users]
        WHERE apiKey = @apikey;

        IF @userID IS NULL
        BEGIN
            --Key not found
            SELECT 'False' as [Response], 'Invalid API key!' as [Reason];
        END
        ELSE
        BEGIN
            --Key found
            BEGIN TRANSACTION Upsert;

            UPDATE [counter] WITH(HOLDLOCK)
            SET @curCount = [count] = [count] + 1
            WHERE [ID] = @userID AND [date] = @curDate;

            IF @@ROWCOUNT = 0
            BEGIN
                INSERT INTO [counter] ([ID], [date], [count])
                VALUES (@userID, @curDate, 1);
            END;

            IF @limit IS NOT NULL AND @curCount > @limit
            BEGIN
                SELECT 'False' as [Response], 'Request limit reached!' as [Reason]
            END
            ELSE
            BEGIN
                SELECT 'True' as [Response], NULL as [Reason]
            END;

            COMMIT TRANSACTION Upsert;
        END;

    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        THROW;
    END CATCH;
    GO

This is probably not the answer you are looking for, but for a rate-limit counter I would use a cache like Redis, in middleware, before the request even hits your API. Performance-wise it is very good: Redis would have no problem with the load, and your database would not be touched at all.

And if you want to keep a per-day hit history for every API key in SQL, run a daily job that imports yesterday's counters from Redis into SQL.

The dataset will be small enough that a Redis instance would cost literally nothing (or close to it).


The MERGE statement is racing with itself: your API is called twice concurrently for the same client, neither MERGE finds the row, so both insert one. MERGE is not an atomic operation, although it is reasonable to assume that it is. For example, see the bug report for SQL Server 2008 about MERGE causing deadlocks, where the SQL Server team stated this is by design.

From your post, I think the immediate impact is that your customers can potentially get a small number of free hits against your API. For example, if two requests arrive concurrently and neither sees the row, you end up with two rows with a count of 1 when you really want one row with a count of 2, so the client effectively gets one free API call that day. If three requests overlap, you get three rows with a count of 1 and they can get two free API calls, etc.
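
One commonly used mitigation, offered here as an assumption rather than something this answer prescribes, is to run the MERGE under a serializable range lock so that two concurrent callers cannot both take the NOT MATCHED branch. A minimal sketch against the question's schema:

    -- HOLDLOCK (serializable) keeps the key range locked between the
    -- "not matched" check and the INSERT, closing the race window
    MERGE [counter] WITH (HOLDLOCK) AS t
    USING (SELECT @userID AS ID) AS s
        ON t.[ID] = s.[ID] AND t.[date] = @curDate
    WHEN MATCHED THEN
        UPDATE SET t.[count] = t.[count] + 1
    WHEN NOT MATCHED THEN
        INSERT ([ID], [date], [count]) VALUES (@userID, @curDate, 1);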

Edit

Given your update, there are two categories of options you could explore: firstly, trying to make it work in SQL Server, and secondly, other architectural solutions.

For the SQL option, I would drop the MERGE and consider pre-populating your clients' counter rows in advance, either nightly or, less frequently, several days at a time. This leaves you with a single UPDATE instead of a MERGE or an UPDATE-then-INSERT. Then confirm that your UPDATE and your SELECT are fully optimized, i.e. that they have the required index and do not cause scans. Next, look at lock escalation so that you only take row-level locks; see this for more information. You could also look at using NOLOCK on the SELECT, which means you might read slightly stale data, but that probably doesn't matter in your case since your WHERE clause targets a single row.
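
A minimal sketch of the pre-populate-then-update idea, reusing the question's table and column names; the nightly job and the ROWLOCK hint are illustrative assumptions, not a tested implementation:

    -- Nightly job (e.g. SQL Server Agent): pre-create tomorrow's rows
    -- so the hot path never needs to INSERT
    INSERT INTO [counter] ([ID], [date], [count])
    SELECT u.id, DATEADD(day, 1, CAST(GETDATE() AS date)), 0
    FROM [users] AS u
    WHERE NOT EXISTS (SELECT 1 FROM [counter] AS c
                      WHERE c.[ID] = u.id
                        AND c.[date] = DATEADD(day, 1, CAST(GETDATE() AS date)));

    -- Hot path: one row-targeted UPDATE, hinted to lock at row level
    UPDATE [counter] WITH (ROWLOCK)
    SET [count] = [count] + 1
    WHERE [ID] = @userID AND [date] = CAST(GETDATE() AS date);

Counters then start at 0 instead of 1, and an UPDATE that touches 0 rows signals a row that was not pre-populated and still needs a fallback INSERT.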

For the non-SQL options, as your update suggests, you could look at queuing the writes: the updates/inserts would then lag, so your SELECTs would see slightly old data. That may or may not be acceptable depending on how far apart they are, although you could treat it as an "eventually consistent" solution if you wanted to be strict and charge extra for, or cut off, API overuse the next day or something like that. You could also look at caching options for storing the counters; this gets more complicated if your application is distributed, but there are caching solutions for that. With caching you might drop some counts, and you could give away free hits if the cache went down, but you would probably have bigger problems to worry about at that point anyway!


At a high level, have you considered the following approach?

Restructure: make the primary key of your table the composite (ID, date). Perhaps even better, use the API key itself rather than the arbitrary identifier you assigned to it.

Query A: the SQL Server equivalent of "INSERT IGNORE" (there appear to be semantic equivalents for SQL Server based on a Google search) with the values (ID, TODAY(), 1). You will also want a WHERE clause that checks the ID actually exists in your API/limits table.

Query B: update the row identified by the primary key (ID, TODAY()), setting count = count + 1, and in the same query inner join your limits table so that the WHERE clause only updates the counter if count < limit.
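
A hedged sketch of what Query A and Query B might look like in T-SQL, assuming the composite (ID, date) key above and reusing the [users]/[counter] names from the question (the NOT EXISTS guard stands in for MySQL's INSERT IGNORE):

    -- Query A: insert today's row only if the ID exists in [users]
    -- and no counter row exists yet for today
    INSERT INTO [counter] ([ID], [date], [count])
    SELECT u.id, CAST(GETDATE() AS date), 1
    FROM [users] AS u
    WHERE u.id = @userID
      AND NOT EXISTS (SELECT 1 FROM [counter] AS c
                      WHERE c.[ID] = u.id
                        AND c.[date] = CAST(GETDATE() AS date));

    -- Query B: increment today's row only while under the limit,
    -- joining [users] to read the per-key limit in the same statement
    UPDATE c
    SET c.[count] = c.[count] + 1
    FROM [counter] AS c
    INNER JOIN [users] AS u ON u.id = c.[ID]
    WHERE c.[ID] = @userID
      AND c.[date] = CAST(GETDATE() AS date)
      AND (u.[limit] IS NULL OR c.[count] < u.[limit]);

With the composite primary key in place, a racing duplicate of Query A fails with a key violation instead of creating a second row, which the calling code can treat the same as "0 rows inserted".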

If most of your requests are valid API calls or rate-limited calls, I would execute the queries in the following order for each request:

    Run Query B.
    If 0 rows updated:
        Run Query A.
        If 0 rows inserted:
            Run Query B.
            If 0 rows updated, reject because of rate limit.
            If 1 row updated, continue.
        If 1 row inserted: continue.
    If 1 row updated: continue.

If most of your requests are invalid API calls, I would do the following:

    Run Query A.
    If 0 rows inserted:
        Run Query B.
        If 0 rows updated, reject because of rate limit.
        If 1 row updated: continue.
    If 1 row inserted: continue.
