Big MySQL query in PHP

I have a large table of approximately 14 million rows; each row contains a block of text. I also have another table of approximately 6,000 rows, where each row holds a word and six numeric values for that word. For each block of text in the first table, I need to count how many times each word from the second table appears in it, then calculate the average of the six values for that block of text and save the result.

I have a Debian machine with an i7 and 8 GB of memory that should be able to handle it. At the moment I am using PHP's substr_count() function, but PHP simply does not seem to be the right tool for this problem. Does anyone have a better approach that avoids the time and memory constraints? Can it be done in SQL alone? If not, what is the best way to run my PHP without overloading the server?
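To make the task concrete, here is a minimal sketch of the per-block calculation described above, using substr_count(). The function name and the word-list shape (word => six values) are my assumptions, not from the question; the averages are weighted by how often each word occurs.

```php
<?php
// Sketch only: for one block of text, average each of the six per-word
// values, weighted by the number of occurrences of each word.
// $words has the assumed shape ['word' => [v1, v2, v3, v4, v5, v6]].
function scoreBlock(string $text, array $words): array
{
    $sums = array_fill(0, 6, 0.0);
    $totalHits = 0;

    foreach ($words as $word => $values) {
        $count = substr_count($text, $word);
        if ($count === 0) {
            continue;
        }
        $totalHits += $count;
        foreach ($values as $i => $v) {
            $sums[$i] += $count * $v;
        }
    }

    if ($totalHits === 0) {
        return $sums; // no known words in this block
    }
    return array_map(fn ($s) => $s / $totalHits, $sums);
}
```

Note that substr_count() matches substrings, not whole words, so "cat" also matches inside "category"; a real implementation may need word-boundary handling.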

+4
3 answers

Process the entries in the "big" table one at a time. Load a single block of text into your program (PHP or anything else), do the search and the calculation, then save the resulting values where you need them.

Make each record its own transaction, independent of the others. If the job is interrupted, use the saved values to determine where to restart.

Once you have worked through the existing records, you only need to do this when a record is inserted or updated in the future, which is much easier. You just have one big batch to run now to bring the data up to date.
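A sketch of that one-record-per-transaction loop with restart support. It uses an SQLite in-memory database so the example is self-contained; with MySQL only the PDO DSN would change. The table names (texts, progress), the avg1 column, and the stand-in calculation are illustrative assumptions.

```php
<?php
// Resumable row-by-row processing: each record's result and the progress
// marker are committed together, so an interrupted run restarts cleanly.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE texts (id INTEGER PRIMARY KEY, body TEXT, avg1 REAL)');
$pdo->exec('CREATE TABLE progress (last_id INTEGER)');
$pdo->exec("INSERT INTO texts (body) VALUES ('cat dog'), ('dog dog')");
$pdo->exec('INSERT INTO progress VALUES (0)');

// Where we left off (0 on a fresh run).
$lastId = (int) $pdo->query('SELECT last_id FROM progress')->fetchColumn();

$select = $pdo->prepare('SELECT id, body FROM texts WHERE id > ? ORDER BY id LIMIT 1');
$update = $pdo->prepare('UPDATE texts SET avg1 = ? WHERE id = ?');
$mark   = $pdo->prepare('UPDATE progress SET last_id = ?');

while (true) {
    $select->execute([$lastId]);
    $row = $select->fetch(PDO::FETCH_ASSOC);
    if ($row === false) {
        break; // all records processed
    }
    $avg = substr_count($row['body'], 'dog'); // stand-in for the real calculation
    $pdo->beginTransaction();
    $update->execute([$avg, $row['id']]);
    $mark->execute([$row['id']]); // progress saved in the same transaction
    $pdo->commit();
    $lastId = (int) $row['id'];
}
```

Fetching one row at a time keeps memory flat regardless of table size; with 14 million rows you may want to batch a few hundred rows per transaction instead to reduce commit overhead.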

+3

What are you trying to achieve with this? If you are trying to build something like a search engine with a weighting function, you might want to abandon this approach and instead use MySQL's full-text search functions and the indexes that come with them. If you still need this specific solution, you can certainly do it entirely in SQL, either in a single query or with a trigger that runs every time a row is inserted or updated. You cannot do this efficiently in PHP without jumping through a lot of hoops.

To give you a specific answer, we really need more information about the queries, the data structures, and what you are trying to accomplish.
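For illustration, the pure-SQL version of the calculation could look something like the sketch below. The table and column names (texts, words, v1) are assumptions; occurrence counting uses the classic LENGTH/REPLACE trick, and the query scans every text/word pair, so it is only practical in batches or inside a trigger on single rows.

```sql
-- Weighted average of v1 per text block; repeat the SUM pair for v2..v6.
SELECT t.id,
       SUM(((LENGTH(t.body) - LENGTH(REPLACE(t.body, w.word, '')))
             / LENGTH(w.word)) * w.v1)
       / NULLIF(SUM((LENGTH(t.body) - LENGTH(REPLACE(t.body, w.word, '')))
             / LENGTH(w.word)), 0) AS avg_v1
FROM texts t
CROSS JOIN words w
GROUP BY t.id;
```

Like substr_count() in PHP, REPLACE matches substrings rather than whole words, so the same caveat about partial matches applies.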

+1

Rethink the design:

If disk space is not a concern, simply merge the two tables into one.

Keep the 6,000-row table in memory (a MEMORY table) and back it up every hour:

INSERT IGNORE INTO back.table SELECT * FROM my.table;

Build your own index on the large table, e.g. add an "index" column to the large table that holds the id of the matching row.

More information about the query is needed to suggest a complete solution.
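The in-memory copy with an hourly backup could be set up as below; the table names are illustrative assumptions, and the backup uses MySQL's event scheduler rather than an external cron job.

```sql
-- In-memory copy of the word table (MEMORY tables are lost on restart,
-- hence the periodic backup into an on-disk table).
CREATE TABLE word_scores_mem (
    word VARCHAR(64) PRIMARY KEY,
    v1 DOUBLE, v2 DOUBLE, v3 DOUBLE,
    v4 DOUBLE, v5 DOUBLE, v6 DOUBLE
) ENGINE=MEMORY;

-- Requires the scheduler to be on: SET GLOBAL event_scheduler = ON;
CREATE EVENT backup_word_scores
ON SCHEDULE EVERY 1 HOUR
DO INSERT IGNORE INTO word_scores_backup
   SELECT * FROM word_scores_mem;
```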

0
