Your considerations probably go beyond matching throughput. For example, you also need to maintain the rules themselves.
But suppose you have a static set of rules, and the messages contain all the fields needed to evaluate every rule. In SQL, the structure starts with a message table that has an insert trigger, and the trigger is responsible for the matching. What is the best way to do that?
At 10 messages per second, your processing will be effectively parallel across messages, even if each individual match is single-threaded. I'm not sure parallelizing a single match is worth much effort; parallelism in databases generally happens within a SQL statement, not between statements.
There are all kinds of solutions. For instance, you could encode the rules as code in one gigantic stored procedure. That would be a nightmare to maintain, might exceed stored-procedure length limits, and could be painfully slow.
Another crazy idea: store the messages that match a rule in a per-rule table, with a constraint that only lets matching rows in. Your process then looks like a zillion insert statements.
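To make the per-rule-table idea concrete, here is a small sketch in Python with SQLite (not SQL Server, so the syntax differs slightly). The table name and the specific rule (word count above 500) are hypothetical; the point is that a CHECK constraint does the matching by rejecting non-matching inserts.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical per-rule table: the CHECK constraint rejects any message
# that does not satisfy this particular rule (wordcount > 500 here).
conn.execute("""
    CREATE TABLE rule_longform_messages (
        msg_id    INTEGER PRIMARY KEY,
        wordcount INTEGER NOT NULL CHECK (wordcount > 500),
        topic     TEXT
    )
""")

def try_insert(msg_id, wordcount, topic):
    """Attempt the insert; the constraint does the matching."""
    try:
        conn.execute("INSERT INTO rule_longform_messages VALUES (?, ?, ?)",
                     (msg_id, wordcount, topic))
        return True
    except sqlite3.IntegrityError:
        return False

print(try_insert(1, 800, "science"))  # True  -> message satisfies the rule
print(try_insert(2, 120, "science"))  # False -> rejected by the CHECK
```

The obvious drawback, as noted above, is one table and one insert attempt per rule, which is why this stays in the "crazy idea" bucket.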
More seriously, you will get further with code such as:
select * from rules where ...
The result set would contain the matching rules. The where clause might look something like this:
select * from rules r where @wordcount > coalesce(r.wordcount, 0) and @topic = coalesce(r.topic, @topic) and ...
Thus, every possible comparison for all the rules goes into the where clause, and the rules are pre-processed into a form that describes which messages they match (NULL in a column meaning "no constraint on that field").
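Here is a runnable sketch of that coalesce pattern, again using Python with SQLite rather than T-SQL variables. The rule set and field names (`wordcount`, `topic`) are illustrative; a NULL column means the rule does not constrain that field, so `coalesce` makes the comparison vacuously true.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Rules with nullable columns: NULL means "no constraint on this field".
conn.execute("""
    CREATE TABLE rules (
        rule_id   INTEGER PRIMARY KEY,
        wordcount INTEGER,   -- minimum word count, or NULL
        topic     TEXT       -- required topic, or NULL
    )
""")
conn.executemany("INSERT INTO rules VALUES (?, ?, ?)", [
    (1, 500, None),        # any topic, more than 500 words
    (2, None, "sports"),   # sports messages of any length
    (3, 200, "science"),   # science messages over 200 words
])

def matching_rules(wordcount, topic):
    # The parameters play the role of @wordcount and @topic in the answer.
    rows = conn.execute("""
        SELECT rule_id FROM rules r
        WHERE ? > COALESCE(r.wordcount, 0)
          AND ? = COALESCE(r.topic, ?)
    """, (wordcount, topic, topic)).fetchall()
    return [r[0] for r in rows]

print(matching_rules(800, "science"))  # [1, 3]
print(matching_rules(100, "sports"))   # [2]
```

One where clause covers every rule at once, so a single set-based query replaces a loop over rules.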
You can even do away with the variables and run the query directly against the inserted rows:
select * from rules r cross join inserted i where i.wordcount > coalesce(r.wordcount, 0) and i.topic = coalesce(r.topic, i.topic) and ...
So yes, this is possible in SQL, and the matching can run in parallel. You just have to do the work of getting your rules into a format the database can match against.
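Putting the pieces together, here is a minimal end-to-end sketch of the trigger approach, again in Python with SQLite. Note the differences from SQL Server: SQLite triggers fire per row and expose the new row as `NEW` rather than as an `inserted` pseudo-table, and the `matches` table recording which rules fired is my own addition for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rules (
        rule_id   INTEGER PRIMARY KEY,
        wordcount INTEGER,   -- minimum word count, or NULL
        topic     TEXT       -- required topic, or NULL
    );
    CREATE TABLE messages (
        msg_id    INTEGER PRIMARY KEY,
        wordcount INTEGER NOT NULL,
        topic     TEXT NOT NULL
    );
    -- Illustrative output table: which rules matched which messages.
    CREATE TABLE matches (msg_id INTEGER, rule_id INTEGER);

    -- SQLite's NEW plays the role of SQL Server's inserted pseudo-table.
    CREATE TRIGGER messages_match AFTER INSERT ON messages
    BEGIN
        INSERT INTO matches (msg_id, rule_id)
        SELECT NEW.msg_id, r.rule_id
        FROM rules r
        WHERE NEW.wordcount > COALESCE(r.wordcount, 0)
          AND NEW.topic = COALESCE(r.topic, NEW.topic);
    END;

    INSERT INTO rules VALUES (1, 500, NULL), (2, NULL, 'sports');
    INSERT INTO messages VALUES (10, 800, 'science');
    INSERT INTO messages VALUES (11, 50, 'sports');
""")

print(conn.execute(
    "SELECT msg_id, rule_id FROM matches ORDER BY msg_id").fetchall())
# [(10, 1), (11, 2)]
```

Every insert into `messages` is matched against the whole rule set in one set-based query inside the trigger, which is exactly the "rules in a database-matchable format" idea above.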