This is probably a common situation, but I could not find a specific answer on SO or Google.
I have a large table (>10 million rows) of friendships in a MySQL database which is very important and must be maintained so that there are no duplicate rows. The table stores user IDs. The SQL for the table:
CREATE TABLE possiblefriends (
    id INT NOT NULL AUTO_INCREMENT,
    user INT,
    possiblefriend INT,
    PRIMARY KEY (id)
);
How the table works: each user has roughly 1,000 or so "possible friends" that are discovered and need to be stored, but duplicate "possible friends" must be avoided.
The problem is that, because of the program's design, over the course of a day I need to add a million or more rows to the table, and those rows may or may not duplicate existing ones. The simple answer would seem to be to check each row before inserting it to see whether it is a duplicate, and insert it only if it is not (sketched below). But that technique will likely get very slow as the table grows to 100 million rows, 1 billion rows, or more (which I expect to happen soon).
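For reference, this is roughly the per-row check I have in mind (just a sketch; `@newUser` and `@newFriend` are placeholders for the incoming pair, not existing code):

```sql
-- Naive approach: one conditional insert per candidate row.
INSERT INTO possiblefriends (user, possiblefriend)
SELECT @newUser, @newFriend
FROM DUAL
WHERE NOT EXISTS (
    SELECT 1
    FROM possiblefriends
    WHERE user = @newUser
      AND possiblefriend = @newFriend
);
```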
What is the best (i.e. fastest) way to keep this table free of duplicates?
I do not need the table to be duplicate-free at all times; I only need it to be unique once a day, for batch jobs. Given that, should I create a separate table that just accepts all the inserts (duplicate rows and all), and then, at the end of the day, build a second table that computes all the unique rows of the first? Something like the sketch below.
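To make the batch idea concrete, here is roughly what I imagine (the staging table name is mine, purely for illustration):

```sql
-- Staging table with no uniqueness enforcement: fast blind inserts all day.
CREATE TABLE possiblefriends_staging (
    id INT NOT NULL AUTO_INCREMENT,
    user INT,
    possiblefriend INT,
    PRIMARY KEY (id)
);

-- Nightly batch: copy only the distinct pairs into the main table,
-- skipping pairs it already contains, then clear the staging table.
INSERT INTO possiblefriends (user, possiblefriend)
SELECT DISTINCT s.user, s.possiblefriend
FROM possiblefriends_staging s
WHERE NOT EXISTS (
    SELECT 1
    FROM possiblefriends p
    WHERE p.user = s.user
      AND p.possiblefriend = s.possiblefriend
);

TRUNCATE TABLE possiblefriends_staging;
```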
If not, what is the best long-term approach for this table?
(If indexes are the best long-term solution, please tell me which indexes to use.)
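For concreteness, the kind of index I am guessing at is a composite UNIQUE key over the pair, which I assume would also let the inserts become `INSERT IGNORE`, if that is the right approach:

```sql
-- Assumed composite unique key over the friendship pair
-- (the index name idx_user_friend is mine).
ALTER TABLE possiblefriends
    ADD UNIQUE INDEX idx_user_friend (user, possiblefriend);

-- With that in place, duplicate pairs would be silently skipped:
INSERT IGNORE INTO possiblefriends (user, possiblefriend)
VALUES (42, 1001);
```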