The best way to dump a huge file into the MarkLogic database

I am new to MarkLogic and am evaluating it for loading huge CSV/text data with some transformations, such as filtering rows based on certain conditions, etc. As far as I know, I can load the data in two ways:

1) Using the Java API for MarkLogic in a multi-threaded environment.
2) Using MLCP with a custom transformation.

I would like to know which of these is the best way to achieve this, or whether there are other options I don't know about.

Thanks in advance.

1 answer

Both of the approaches you mention will work. One is easier to implement, but you may get better performance from the other.

MLCP is the easier option. MLCP can import delimited text, turning each row of the CSV into an XML or JSON document, and you can apply a custom transform to those XML or JSON documents as they are loaded. A couple of caveats (an example command follows these notes):

Using a transform forces the effective -batch_size down to 1, which slows down ingestion.

Also, if the transform changes the document URI, you cannot safely use -fastload.
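
For illustration, here is a minimal sketch of what such an MLCP import might look like; the host, port, credentials, file paths, and transform module name are placeholders, and the exact options available depend on your MLCP version:

    mlcp.sh import -host localhost -port 8000 \
        -username admin -password admin \
        -input_file_path /data/huge.csv \
        -input_file_type delimited_text \
        -document_type json \
        -output_uri_prefix /csv/ \
        -transform_module /transforms/filter.sjs \
        -transform_function transform \
        -thread_count 8

The module named by -transform_module is a server-side JavaScript or XQuery module installed on MarkLogic; it receives each generated document and can adjust its content or URI, which is where the row-level transformation would live on the MLCP path.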

With the Java API, you write your own code to read and parse the CSV (by hand or with a CSV library), apply your transformations (filters, conditional logic, and so on) in Java, and then write the resulting documents to MarkLogic, using multiple threads to keep the throughput up.
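
As a rough sketch of that approach (not part of the original answer), the snippet below uses the Data Movement SDK's WriteBatcher from the MarkLogic Java Client API; the connection details, CSV path, column layout, and filter condition are all placeholder assumptions:

    import java.io.BufferedReader;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.node.ObjectNode;
    import com.marklogic.client.DatabaseClient;
    import com.marklogic.client.DatabaseClientFactory;
    import com.marklogic.client.datamovement.DataMovementManager;
    import com.marklogic.client.datamovement.WriteBatcher;
    import com.marklogic.client.io.JacksonHandle;

    public class CsvLoader {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- adjust for your environment.
            DatabaseClient client = DatabaseClientFactory.newClient(
                    "localhost", 8000,
                    new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

            DataMovementManager dmm = client.newDataMovementManager();
            WriteBatcher batcher = dmm.newWriteBatcher()
                    .withBatchSize(100)   // documents per write request
                    .withThreadCount(8);  // parallel writer threads
            dmm.startJob(batcher);

            ObjectMapper mapper = new ObjectMapper();
            long lineNo = 0;
            try (BufferedReader reader = Files.newBufferedReader(Paths.get("/data/huge.csv"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    lineNo++;
                    // Naive split; use a real CSV parser (e.g. OpenCSV) if fields can contain commas.
                    String[] cols = line.split(",");

                    // Placeholder filter condition: skip rows whose first column is empty.
                    if (cols.length == 0 || cols[0].isEmpty()) {
                        continue;
                    }

                    // Build a small JSON document from the row (field names are assumptions).
                    ObjectNode doc = mapper.createObjectNode();
                    doc.put("id", cols[0]);
                    doc.put("value", cols.length > 1 ? cols[1] : "");

                    // Queue the document; WriteBatcher writes batches on its own threads.
                    batcher.add("/csv/" + lineNo + ".json", new JacksonHandle(doc));
                }
            }

            batcher.flushAndWait();  // write any documents still queued
            dmm.stopJob(batcher);
            client.release();
        }
    }

WriteBatcher queues documents and writes them to MarkLogic in parallel batches, so the single reading thread only has to parse and filter; flushAndWait() at the end pushes whatever is still queued before the job is stopped.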

That is more work than using MLCP (more code to write and maintain), but it also gives you full control over parsing, filtering, and batching, so you may be able to get better performance by tuning the Java side.

One more difference worth noting: with the Java API your transformation logic runs in your client code, whereas an MLCP transform is a server-side module that runs inside MarkLogic Server.

