How can I insert 4 million records from Oracle into an Elasticsearch index faster using C#?

I have the following C# code, but at this rate it will take 4-5 days to move the data from the Oracle database into Elasticsearch. I insert records in batches of 100. Is there a faster way to migrate the 4 million records (less than a day, if possible)?

    public static void Selection()
    {
        for (int i = 1; i < 4000000; i += 1000)
        {
            for (int j = i; j < (i + 1000); j += 100)
            {
                OracleCommand cmd = new OracleCommand(BuildQuery(j), oracle_connection);
                OracleDataReader reader = cmd.ExecuteReader();
                List<Record> list = CreateRecordList(reader);
                insert(list);
            }
        }
    }

    private static List<Record> CreateRecordList(OracleDataReader reader)
    {
        List<Record> l = new List<Record>();
        string[] str = new string[7];
        try
        {
            while (reader.Read())
            {
                for (int i = 0; i < 7; i++)
                {
                    str[i] = reader[i].ToString();
                }
                Record r = new Record(str[0], str[1], str[2], str[3], str[4], str[5], str[6]);
                l.Add(r);
            }
        }
        catch (Exception er)
        {
            string msg = er.Message;
        }
        return l;
    }

    private static string BuildQuery(int from)
    {
        int to = from + change - 1; // "change" is the batch size (100)
        StringBuilder builder = new StringBuilder();
        builder.AppendLine(@"select * from");
        builder.AppendLine("(");
        builder.AppendLine("select FIELD_1, FIELD_2, FIELD_3, FIELD_4, FIELD_5, FIELD_6, FIELD_7, ");
        builder.Append(" row_number() over(order by FIELD_1) rn");
        builder.AppendLine(" from tablename");
        builder.AppendLine(")");
        builder.AppendLine(string.Format("where rn between {0} and {1}", from, to));
        builder.AppendLine("order by rn");
        return builder.ToString();
    }

    public static void insert(List<Record> l)
    {
        try
        {
            foreach (Record r in l)
                client.Index<Record>(r, "index", "type");
        }
        catch (Exception er)
        {
            string msg = er.Message;
        }
    }
+5
2 answers

The ROW_NUMBER() function will adversely affect performance, and you're running that query thousands of times. You're already using an OracleDataReader; it won't pull all four million rows onto your machine at once, it essentially streams them a few at a time.

This should take minutes or hours, not days. We have several processes that move millions of records between Sybase and SQL Server in a similar way, and they take less than five minutes.

Maybe give this a shot:

    OracleCommand cmd = new OracleCommand("SELECT ... FROM TableName", oracle_connection);
    int batchSize = 500;
    using (OracleDataReader reader = cmd.ExecuteReader())
    {
        List<Record> l = new List<Record>(batchSize);
        string[] str = new string[7];
        int currentRow = 0;

        while (reader.Read())
        {
            for (int i = 0; i < 7; i++)
            {
                str[i] = reader[i].ToString();
            }
            l.Add(new Record(str[0], str[1], str[2], str[3], str[4], str[5], str[6]));

            // Commit every time batchSize records have been read
            if (++currentRow == batchSize)
            {
                Commit(l);
                l.Clear();
                currentRow = 0;
            }
        }

        // commit any remaining records
        Commit(l);
    }
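On the read side, if your driver is ODP.NET you can also raise the reader's fetch size so each network round trip returns more rows. A minimal sketch, assuming ODP.NET's FetchSize and RowSize properties (they are not in the code above):

    // Sketch only: FetchSize / RowSize are ODP.NET members; adjust for your driver.
    OracleCommand cmd = new OracleCommand("SELECT ... FROM TableName", oracle_connection);
    using (OracleDataReader reader = cmd.ExecuteReader())
    {
        // Fetch roughly 1000 rows per round trip instead of the small driver default.
        reader.FetchSize = reader.RowSize * 1000;
        while (reader.Read())
        {
            // ... build and commit records exactly as above ...
        }
    }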

Here Commit might look like this:

    public void Commit(IEnumerable<Record> records)
    {
        // TODO: Use ES BULK features, I don't know the exact syntax
        client.IndexMany<Record>(records, "index", "type");
        // client.Bulk(b => b.IndexMany(records))... something like this
    }
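If it helps, with the NEST client the bulk call looks roughly like the following. Treat it as a sketch, since the exact API shape varies between NEST versions, and it's worth checking the response for per-document failures:

    // Sketch, assuming a NEST client; the bulk API differs between NEST versions.
    var response = client.Bulk(b => b.IndexMany(records));
    if (response.Errors)
    {
        // Some documents failed to index; look at response.ItemsWithErrors
        // to log or retry them instead of silently losing data.
    }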
+4

But you're not inserting in batches of 100. You end up inserting records one at a time (and that may not even be the right code for a single insert):

    foreach (Record r in l)
        client.Index<Record>(r, "index", "type");

All that batched reading buys you nothing if the insert happens one row at a time.

You're just adding latency while you fetch the next batch. Reading is (almost) always faster than writing.

    // One query over the whole table: no ROW_NUMBER paging needed
    OracleCommand cmd = new OracleCommand(
        "select FIELD_1, FIELD_2, FIELD_3, FIELD_4, FIELD_5, FIELD_6, FIELD_7 from tablename",
        oracle_connection);
    OracleDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        client.Index<Record>(new Record(reader.GetString(0), reader.GetString(1),
                                        reader.GetString(2), reader.GetString(3),
                                        reader.GetString(4), reader.GetString(5),
                                        reader.GetString(6)),
                             "index", "type");
    }
    reader.Close();

You can use a BlockingCollection if you want to read and write in parallel, but give it a bounded maximum size so the reading doesn't get too far ahead of the writing; see the sketch below.
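A minimal sketch of that producer/consumer setup, reusing the Record type, connection and client from above (the batch size of 500 and the capacity of 10 batches are placeholder numbers):

    // Needs: using System.Collections.Concurrent; using System.Collections.Generic;
    //        using System.Threading.Tasks;
    // Bounded capacity: the reader blocks once 10 batches are queued,
    // so it never runs too far ahead of the Elasticsearch writer.
    var queue = new BlockingCollection<List<Record>>(boundedCapacity: 10);

    var producer = Task.Run(() =>
    {
        var batch = new List<Record>(500);
        OracleCommand cmd = new OracleCommand(
            "select FIELD_1, FIELD_2, FIELD_3, FIELD_4, FIELD_5, FIELD_6, FIELD_7 from tablename",
            oracle_connection);
        using (OracleDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                batch.Add(new Record(reader.GetString(0), reader.GetString(1),
                                     reader.GetString(2), reader.GetString(3),
                                     reader.GetString(4), reader.GetString(5),
                                     reader.GetString(6)));
                if (batch.Count == 500)
                {
                    queue.Add(batch); // blocks while the queue is full
                    batch = new List<Record>(500);
                }
            }
        }
        if (batch.Count > 0) queue.Add(batch);
        queue.CompleteAdding(); // tells the consumer no more batches are coming
    });

    var consumer = Task.Run(() =>
    {
        // Drain batches as they arrive and bulk-index each one.
        foreach (var batch in queue.GetConsumingEnumerable())
            client.IndexMany<Record>(batch, "index", "type");
    });

    Task.WaitAll(producer, consumer);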

+3
