Is the performance / memory benefit of short outweighed by the downcasting?

I am writing a large-scale application where I am trying to save as much memory as possible and also improve performance. So when I have a field whose values, as I know, will range from 0 to 10 or from -100 to 100, I try to use the data type short instead of int.

What this means for the rest of the code, however, is that wherever I call these methods, I have to downcast plain int literals to shorts. For instance:

Method signature

 public void coordinates(short x, short y) ... 

Method call

 obj.coordinates((short) 1, (short) 2); 

This happens throughout my code, because integer literals are treated as int and are not automatically narrowed to match the method's parameter types.

Given that, is any performance or memory gain really significant once this happens? Or is the conversion so cheap that I still come out ahead?

+6
5 answers

There is no performance difference between short and int on 32-bit platforms, in all cases except short[] versus int[], and even there the cons usually outweigh the pros.

Assuming you are working on x64, x86 or ARM-32:

  • In registers, 16-bit shorts are stored in 32-bit or 64-bit integer registers, just like ints. So when a short is in a register, you gain no memory or performance benefit over int.
  • On the stack, 16-bit shorts occupy 32-bit or 64-bit slots to preserve stack alignment (just like ints). You gain no performance or memory benefit from using short instead of int for local variables.
  • When passed as parameters, shorts are sign-extended to 32 or 64 bits as they are pushed onto the stack (as opposed to simply pushing an int). Your code here is actually slightly less performant, and slightly larger in code size, than if you had used ints.
  • When stored as global (static) variables, they are automatically widened to occupy 32-bit or 64-bit slots to preserve alignment of pointers (references). You gain no performance or memory benefit from using short instead of int for global (static) variables.
  • When stored as fields, they live in a heap structure that maps to the class layout. There, fields are automatically padded out to 32 or 64 bits to maintain field alignment. You gain no performance or memory benefit from using short instead of int for fields.

The only benefit you will ever see from using short instead of int is when you allocate an array of them. In that case, an array of N shorts is roughly half the size of an array of N ints.
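
As a rough sketch of that halving (my own illustration, not from the answer; it counts only element payload and ignores the small constant per-array header):

```java
public class ArraySizes {
    public static void main(String[] args) {
        final int n = 1_000_000;
        // Payload only: a real JVM array adds a small, fixed object header.
        long shortPayload = (long) n * Short.BYTES;   // 2 bytes per element
        long intPayload   = (long) n * Integer.BYTES; // 4 bytes per element
        System.out.println("short[" + n + "] payload ~ " + shortPayload + " bytes");
        System.out.println("int[" + n + "]   payload ~ " + intPayload + " bytes");
    }
}
```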

Aside from the cache benefit of keeping a hot loop's working set smaller when doing complex but localized math over a large array of shorts, you will never see a benefit from using short over int.

In every other case, such as shorts used for fields, globals, parameters, and locals, apart from the range of values it can store, there is no difference between short and int.

My advice, as always, is that before making your code harder to read and artificially more limited, BENCHMARK it to find out where your memory and CPU bottlenecks actually are, and then optimize those.
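
One concrete way short makes code noisier (a minimal sketch of my own): Java's binary numeric promotion widens both short operands to int, so even adding two shorts needs a cast back:

```java
public class ShortArithmetic {
    public static void main(String[] args) {
        short a = 1, b = 2;
        // a + b is evaluated as int, so assigning the result back to a
        // short requires an explicit narrowing cast:
        short sum = (short) (a + b); // "short sum = a + b;" does not compile
        System.out.println(sum);     // prints 3
    }
}
```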

I strongly suspect that if you ever reach the point where your application genuinely suffers from using int rather than short, you would long since have dropped Java for a language with lower memory/CPU overhead, so all this work would be for nothing.

+10

As far as I can tell, there shouldn't be any runtime cost for the cast itself (whether using short instead of int actually improves performance at all is debatable and depends on the specifics of your application).

Consider the following:

 public class Main {
     public static void f(short x, short y) {
     }

     public static void main(String[] args) {
         final short x = 1;
         final short y = 2;
         f(x, y);
         f((short) 1, (short) 2);
     }
 }

The last two lines of main() compile into:

   // f(x, y)
    4: iconst_1
    5: iconst_2
    6: invokestatic  #21   // Method f:(SS)V
   // f((short)1, (short)2);
    9: iconst_1
   10: iconst_2
   11: invokestatic  #21   // Method f:(SS)V

As you can see, they are identical. The casts happen at compile time.

+7

A cast from int to short happens at compile time and has no effect on runtime performance.

+2

You need a way to test the effect of your type choice on memory usage. If choosing short over int in a given situation improves performance by reducing memory use, that effect on memory should be measurable.

Here is a simple way to measure the amount of memory used:

  private static long inUseMemory() {
      Runtime rt = Runtime.getRuntime();
      rt.gc();
      final long memory = rt.totalMemory() - rt.freeMemory();
      return memory;
  }

I am also including an example program that uses this method to check memory usage in some common situations. The memory increase for allocating an array of a million shorts confirms that short arrays use two bytes per element. The memory increases for the various object arrays indicate that changing the type of one or two fields makes little difference.

Here is the result of one run. YMMV.

 Before short[1000000] allocation: In use: 162608 Change 162608
 After short[1000000] allocation: In use: 2162808 Change 2000200
 After TwoShorts[1000000] allocation: In use: 34266200 Change 32103392
 After NoShorts[1000000] allocation: In use: 58162560 Change 23896360
 After TwoInts[1000000] allocation: In use: 90265920 Change 32103360
 Dummy to keep arrays live -378899459

The rest of this answer is the source of the program:

  public class Test {
      private static int BIG = 1000000;
      private static long oldMemory = 0;

      public static void main(String[] args) {
          short[] megaShort;
          NoShorts[] megaNoShorts;
          TwoShorts[] megaTwoShorts;
          TwoInts[] megaTwoInts;
          System.out.println("Before short[" + BIG + "] allocation: " + memoryReport());
          megaShort = new short[BIG];
          System.out.println("After short[" + BIG + "] allocation: " + memoryReport());
          megaTwoShorts = new TwoShorts[BIG];
          for (int i = 0; i < BIG; i++) {
              megaTwoShorts[i] = new TwoShorts();
          }
          System.out.println("After TwoShorts[" + BIG + "] allocation: " + memoryReport());
          megaNoShorts = new NoShorts[BIG];
          for (int i = 0; i < BIG; i++) {
              megaNoShorts[i] = new NoShorts();
          }
          System.out.println("After NoShorts[" + BIG + "] allocation: " + memoryReport());
          megaTwoInts = new TwoInts[BIG];
          for (int i = 0; i < BIG; i++) {
              megaTwoInts[i] = new TwoInts();
          }
          System.out.println("After TwoInts[" + BIG + "] allocation: " + memoryReport());
          System.out.println("Dummy to keep arrays live "
                  + (megaShort[0] + megaTwoShorts[0].hashCode()
                     + megaNoShorts[0].hashCode() + megaTwoInts[0].hashCode()));
      }

      private static long inUseMemory() {
          Runtime rt = Runtime.getRuntime();
          rt.gc();
          final long memory = rt.totalMemory() - rt.freeMemory();
          return memory;
      }

      private static String memoryReport() {
          long newMemory = inUseMemory();
          String result = "In use: " + newMemory + " Change " + (newMemory - oldMemory);
          oldMemory = newMemory;
          return result;
      }
  }

  class NoShorts {
      //char a, b, c;
  }

  class TwoShorts {
      //char a, b, c;
      short s, t;
  }

  class TwoInts {
      //char a, b, c;
      int s, t;
  }
+1

First, I want to confirm the memory savings, since some doubt has been expressed. The short type is documented in the tutorial: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html

short: The short data type is a 16-bit signed two's complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive). As with byte, the same guidelines apply: you can use a short to save memory in large arrays, in situations where the memory savings actually matters.

So using short does save memory in large arrays (where the savings actually matters), and it is a good idea there.

Now to your question:

Is the performance / memory benefit of short outweighed by the downcasting?

The short answer is NO. Downcasting from int to short happens at compile time, so it does not affect runtime performance; and since you do save memory, it may lead to better performance in memory-constrained scenarios.
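
For illustration (my own sketch, not from the answer): the compiler narrows constant expressions that fit into short automatically, while a non-constant int always needs the explicit cast the question complains about:

```java
public class ConstantNarrowing {
    public static void main(String[] args) {
        short a = 100;        // constant fits in short: implicit narrowing, no cast
        // short b = 100000;  // does not compile: constant is out of short's range
        int x = 100;
        short c = (short) x;  // non-constant int: explicit cast required
        System.out.println(a + " " + c); // prints "100 100"
    }
}
```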

-1
