I have a three-column table with just over 6 billion rows in SQL Server 2008 R2.
We query it every day to generate graphical system-analysis charts for our customers. I have not noticed any database performance issues (although the fact that it grows by ~1 GB per day makes backup management rather more involved than we would like).
July 2016 update

We got this up to ~24.5 billion rows before the backups became large enough for us to decide to truncate records older than two years (~700 GB stored across multiple backups, including on expensive tapes). It is worth noting that performance was not a significant motivation for this decision (i.e., it still worked just fine).
For anyone trying to delete 20 billion rows from SQL Server, I highly recommend this article. The relevant code, in case the link ever dies (read the article for a full explanation):
ALTER DATABASE DeleteRecord SET RECOVERY SIMPLE;
GO

BEGIN TRY
    BEGIN TRANSACTION
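        -- What follows is a rough sketch of how the pattern continues, not the
        -- article's exact code: the object names (dbo.BigTable, dbo.BigTable_Keep)
        -- and the three columns (Id, Value, RecordedAt) are hypothetical placeholders.
        -- The idea: copy the rows you want to keep into a staging table, truncate
        -- the original (a minimally logged metadata operation), re-insert the kept
        -- rows under a table lock, then switch back to full recovery.

        SELECT Id, Value, RecordedAt
        INTO   dbo.BigTable_Keep
        FROM   dbo.BigTable
        WHERE  RecordedAt >= DATEADD(YEAR, -2, GETDATE());

        TRUNCATE TABLE dbo.BigTable;

        -- If Id is an IDENTITY column, you would also need SET IDENTITY_INSERT ON/OFF here.
        INSERT INTO dbo.BigTable WITH (TABLOCK) (Id, Value, RecordedAt)
        SELECT Id, Value, RecordedAt FROM dbo.BigTable_Keep;

        DROP TABLE dbo.BigTable_Keep;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;

-- Switch back to full recovery and take a fresh full backup once the cleanup is done.
ALTER DATABASE DeleteRecord SET RECOVERY FULL;
GO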
November 2016 update
If you are planning to store this much data in a single table: don't. I highly recommend that you consider table partitioning (either manually, or with the built-in features if you are on Enterprise edition). That makes dropping old data as easy as truncating one table per (week/month/etc.). If you don't have Enterprise (we don't), you can simply write a script that runs once a month, drops any tables older than two years, creates the table for the next month, and regenerates a dynamic view that unions all of the partition tables together for easy querying. Obviously "once a month" and "older than two years" should be defined by you based on what makes sense for your use case. Deleting directly from a table with tens of billions of rows of data will a) take a HUGE amount of time and b) fill up the transaction log hundreds or thousands of times over.
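To make the manual approach concrete, a monthly rollover script could look roughly like the sketch below. All object names (dbo.Readings_YYYY_MM, dbo.ReadingsAll) and the three-column layout are hypothetical; a real script would build the month-stamped names dynamically from the current date rather than hard-coding them.

-- Hypothetical monthly rollover for manually partitioned tables.

-- 1) Drop the monthly table that has just fallen outside the two-year window.
IF OBJECT_ID('dbo.Readings_2014_11', 'U') IS NOT NULL
    DROP TABLE dbo.Readings_2014_11;

-- 2) Create the table for the coming month (same three-column layout as the others).
CREATE TABLE dbo.Readings_2016_12
(
    Id         BIGINT       NOT NULL,
    Value      FLOAT        NOT NULL,
    RecordedAt DATETIME2(0) NOT NULL
);
GO

-- 3) Recreate the view that unions all current monthly tables, so existing
--    queries keep hitting a single name.
IF OBJECT_ID('dbo.ReadingsAll', 'V') IS NOT NULL
    DROP VIEW dbo.ReadingsAll;
GO
CREATE VIEW dbo.ReadingsAll AS
    SELECT Id, Value, RecordedAt FROM dbo.Readings_2016_12
    UNION ALL
    SELECT Id, Value, RecordedAt FROM dbo.Readings_2016_11
    UNION ALL
    -- ...one SELECT per remaining monthly table...
    SELECT Id, Value, RecordedAt FROM dbo.Readings_2014_12;
GO

On the query side nothing changes: the charts keep selecting from dbo.ReadingsAll, and the view is simply rebuilt each month as tables are added and dropped.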
Dan Mar 04 '14 at 13:48