Under the hood, an array is a contiguous block of memory. Depending on the size you declare it with, that block can be relatively small or relatively large.
Say, for example, I have an array of ten elements.
int[] arr = new int[10];
At the core of the JVM implementation, the program now needs 40 contiguous bytes of heap memory (10 ints at 4 bytes each, plus a small object header). The allocation succeeds, and you now have 40 bytes that you can use under the familiar name arr.
Note that the memory on either side of this array is most likely already occupied - other references or bits of data live right next to it - so the array cannot simply reach past its own end and "claim" an eleventh slot.
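To make that concrete, here is a minimal sketch (my own, not from the question) showing that an array's length is fixed the moment it is allocated:

int[] arr = new int[10];
arr[9] = 42;        // fine: the last valid index
// arr[10] = 42;    // would throw ArrayIndexOutOfBoundsException at runtime
// arr.length = 11; // does not even compile: length cannot be assigned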
Let's say we decide 10 is too small and we need the array to be ten times bigger.
int[] arr2 = new int[100];
Now the JVM has to find 400 bytes that sit next to each other in memory, which may or may not be trivial given object lifetimes, heap fragmentation, garbage collection at runtime, and so on.
Growing an array is not just a matter of moving a few references around - it means finding a whole new block of contiguous memory to hold the data.
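So a "resize" really means allocating a fresh block and copying everything over, roughly like this (a sketch using Arrays.copyOf; the variable names are just for illustration):

int[] bigger = java.util.Arrays.copyOf(arr, 100); // allocate a new 400-byte block and copy the 10 old values into it
arr = bigger; // arr now refers to the new block; the old 40-byte block is left for the garbage collector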
You mentioned ArrayList - it is indeed backed by an array that resizes automatically. But there is a catch to that resize operation: it's expensive.
public boolean add(E e) {
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    elementData[size++] = e;
    return true;
}
That ensureCapacityInternal does some interesting things ... it calls ensureExplicitCapacity ... which ultimately calls grow:
private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);
}
Essentially, every time the backing array needs to grow, a new array roughly 1.5 times the size of the old one is allocated. This gets more and more expensive as the ArrayList becomes significantly large - each resize forces the JVM to go out and find an ever bigger block of contiguous memory, which means more work for the garbage collector, and ultimately means less performance.
And none of the above even accounts for copying the existing elements into the new array, which happens on every single resize.
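As a rough, standalone illustration (this is not the JDK code, just the same ~1.5x growth rule applied in a loop), here is how many resizes and element copies are implied when a million elements are added one at a time to a list that starts at the default capacity of 10:

public class GrowthCost {
    public static void main(String[] args) {
        int capacity = 10;   // ArrayList's default initial capacity
        long copied = 0;     // total elements copied across all resizes
        int resizes = 0;
        while (capacity < 1_000_000) {
            copied += capacity;                    // every live element gets copied on a resize
            capacity = capacity + (capacity >> 1); // the same ~1.5x rule as grow()
            resizes++;
        }
        System.out.println(resizes + " resizes, " + copied + " elements copied");
    }
}

Each of those resizes is a fresh hunt for a larger contiguous block, plus a full copy of everything already stored.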