Is explicitly managed native memory faster than the heap?

I am studying options to help my high-performance application, and in the process I came across Terracotta BigMemory. From what I gather, they take advantage of non-garbage-collected "native memory", and apparently it is about 10 times slower than heap storage because of serialization/deserialization issues. Before reading about BigMemory, I had never heard of "native memory" used outside of ordinary JNI. Although BigMemory is an interesting option that deserves further consideration, I am intrigued by what could be done with native memory if the serialization issue could be bypassed.

Is Java native memory faster (I think this means ByteBuffer objects?) than traditional heap memory when there are no serialization issues (for example, if I am comparing it with a huge byte[])? Or do the vagaries of garbage collection, etc. make this question unanswerable? I know that "measure it" is a common answer here, but I'm afraid I wouldn't set up a representative test, since I don't yet know enough about how native memory works in Java.

+5
3 answers

Direct memory is faster when performing I/O because it avoids one copy of the data. However, for 95% of applications you won't notice the difference.
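To make the I/O point concrete, here is a minimal sketch (class name and buffer sizes are arbitrary) that writes a value to a file through a direct ByteBuffer and reads it back. Because a direct buffer lives outside the Java heap, the channel can transfer data to and from it without the extra heap-to-native copy that a heap buffer would need.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectIoSketch {

    // Writes one long to a temp file through a direct buffer and reads it back.
    static long roundTrip() throws IOException {
        Path tmp = Files.createTempFile("direct-io", ".bin");

        // Off-heap buffer: the OS can fill/drain it directly, no heap copy.
        ByteBuffer out = ByteBuffer.allocateDirect(64);
        out.putLong(123456789L);
        out.flip();
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ch.write(out);
        }

        ByteBuffer in = ByteBuffer.allocateDirect(64);
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            ch.read(in);
        }
        in.flip();
        long value = in.getLong();
        Files.delete(tmp);
        return value;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // prints 123456789
    }
}
```

With a heap ByteBuffer, the JDK would copy the data through an internal native staging buffer on each channel operation; with a direct buffer that copy disappears, which is where the I/O speedup comes from.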

You can store your data in direct memory, but it won't be as convenient as working with POJOs (the data has to be laid out as bytes). The upside is that the GC does not manage that memory, so keeping a large data set there does not add to collection pauses; the downside is that every access means converting between bytes and objects. Whether that trade-off pays off depends on your access pattern.


Is Java native memory faster (I think this means ByteBuffer?) than traditional heap memory when there are no serialization issues (for example, comparing it with a huge byte[])?

Access to a direct ByteBuffer can be about as fast as access to a byte[], especially if you read and write int or long values rather than individual bytes. Compared with plain POJOs on the heap, though, pulling fields out of a buffer is slower.
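As a sketch of that comparison (class name and sizes are mine, not from any library), the same million-int workload can run against a heap array and against a direct buffer and produce the same result:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class AccessSketch {
    static final int N = 1_000_000;

    // Baseline: fill and sum a plain heap array.
    static long sumHeapArray() {
        int[] a = new int[N];
        for (int i = 0; i < N; i++) a[i] = i;
        long s = 0;
        for (int i = 0; i < N; i++) s += a[i];
        return s;
    }

    // Same workload through a direct buffer, using absolute int access.
    static long sumDirectBuffer() {
        // Native byte order matters: with the platform's own order the JIT
        // can turn getInt/putInt into plain loads and stores.
        ByteBuffer buf = ByteBuffer.allocateDirect(N * 4)
                                   .order(ByteOrder.nativeOrder());
        for (int i = 0; i < N; i++) buf.putInt(i * 4, i);
        long s = 0;
        for (int i = 0; i < N; i++) s += buf.getInt(i * 4);
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sumHeapArray() == sumDirectBuffer()); // prints true
    }
}
```

If you want real timings, wrap each method in a proper harness (e.g. JMH) rather than timing a single pass; the point here is only that both sides do the same work, with the buffer paying a small per-access cost for the index arithmetic and bounds checks.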

Or do the vagaries of garbage collection, etc. make this question unanswerable?

The GC does not touch direct memory. It only sees the small ByteBuffer wrapper object on the heap, so moving the bulk of your data off-heap reduces the work the GC has to do.
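You can see this from the JVM's own accounting: direct allocations show up in the "direct" buffer pool, not in the heap. A small sketch (class and method names are mine), using the standard BufferPoolMXBean; the total direct capacity is also capped by the real -XX:MaxDirectMemorySize flag:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectPoolSketch {

    // Returns the JVM's reported direct-buffer usage, in bytes.
    static long directBytesUsed() {
        for (BufferPoolMXBean pool
                : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1; // pool not found (should not happen on a standard JVM)
    }

    public static void main(String[] args) {
        long before = directBytesUsed();
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB off-heap
        long after = directBytesUsed();

        // The 64 MB is charged to the direct pool, not to the Java heap.
        System.out.println(after - before >= 64 * 1024 * 1024); // prints true
        buf.put(0, (byte) 1); // keep the buffer reachable to the end of main
    }
}
```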

BTW: objects that are allocated and die young in the Eden space are very cheap to collect; it is the medium- and long-lived objects that make collections expensive. Short-lived garbage is rarely the problem.

+4

BigMemory is valuable not because it is faster, but because it avoids GC pauses. As the heap grows, the GC needs more CPU time and causes longer "stop the world" pauses, because Java's collectors must walk the live objects, and the more live data the heap holds, the longer each collection takes. Past a certain size, GC pauses become a serious problem for the JVM. A common rule of thumb has been that heaps larger than about 1.5 GB start to show painful "stop the world" pauses, and in bad cases GC can end up consuming 50% of the application's time. Moving the cached data off-heap takes it out of the GC's view entirely, so the JVM's heap stays small and collections stay short.

You could build something like BigMemory yourself, but it is a lot of work: essentially you would be writing your own memory allocator, much like malloc/free in C, plus a HashMap-style index on top of it. Getting that right is hard. As far as I know, Terracotta implements its store on top of direct ByteBuffers, entirely in Java.
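To show the shape of such a store (a toy sketch only; all names are illustrative and this is not Terracotta's API), values can be serialized into one large direct buffer while a small on-heap map remembers only the offset and length of each entry. The GC never sees the value bytes:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

/**
 * Toy off-heap key/value store. No eviction, no reuse of freed space,
 * no bounds checking when the arena fills up; a real allocator handles all of that.
 */
public class OffHeapStore {
    private final ByteBuffer arena = ByteBuffer.allocateDirect(1 << 20); // 1 MB arena
    private final Map<String, int[]> index = new HashMap<>(); // key -> {offset, length}

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8); // "serialization"
        int offset = arena.position();
        arena.put(bytes);                                      // copy off-heap
        index.put(key, new int[] { offset, bytes.length });
    }

    public String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        byte[] bytes = new byte[loc[1]];
        ByteBuffer view = arena.duplicate(); // independent position/limit
        view.position(loc[0]);
        view.limit(loc[0] + loc[1]);
        view.get(bytes);                                       // copy back on-heap
        return new String(bytes, StandardCharsets.UTF_8);      // "deserialization"
    }

    public static void main(String[] args) {
        OffHeapStore store = new OffHeapStore();
        store.put("greeting", "hello off-heap");
        System.out.println(store.get("greeting")); // prints hello off-heap
    }
}
```

Note that every get and put copies bytes and re-creates objects; that copying is exactly the serialization overhead the question asks about, and it is the price of keeping the data invisible to the GC.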

There is more detail on how BigMemory deals with GC here: http://www.terracotta.org/resources/whitepapers/bigmemory-whitepaper.

+2

Raw memory access speed is only part of the picture here; what matters more is the cost of getting your data into and out of native memory.

Java objects cannot simply live in native memory as objects. AFAIK, they have to be converted to and from a byte representation on every access. So any scheme along the lines of BigMemory will pay serialization/deserialization costs on every read and write.

Still, if this is the route you decide to take, a few things to keep in mind:

  • Benchmark with your real data and real access patterns before committing to the approach. (Synthetic benchmarks can be misleading here.)

  • Be prepared for some intrusive changes to your application if the data involved is not already managed by a cache.

  • Be prepared to spend some time tuning and reconfiguring the caching code to get good performance out of BigMemory.

  • If your data structures are complex, expect proportionally larger runtime overheads and tuning effort.

+1
