Why are the methods fill(), copy(), reverse() and shuffle() from java.util.Collections implemented this way?

According to the javadoc and the JDK source, Collections.fill() is written as follows:

    public static <T> void fill(List<? super T> list, T obj) {
        int size = list.size();

        if (size < FILL_THRESHOLD || list instanceof RandomAccess) {
            for (int i = 0; i < size; i++)
                list.set(i, obj);
        } else {
            ListIterator<? super T> itr = list.listIterator();
            for (int i = 0; i < size; i++) {
                itr.next();
                itr.set(obj);
            }
        }
    }

It’s easy to see why they skip the ListIterator in

    if (size < FILL_THRESHOLD || list instanceof RandomAccess)

when the list implements RandomAccess. But what is the point of the size < FILL_THRESHOLD part of the condition?

I mean, is there really a significant performance advantage in using the iterator when size >= FILL_THRESHOLD but not when size < FILL_THRESHOLD?

I also see the same approach in Collections.copy():

    public static <T> void copy(List<? super T> dest, List<? extends T> src) {
        int srcSize = src.size();
        if (srcSize > dest.size())
            throw new IndexOutOfBoundsException("Source does not fit in dest");

        if (srcSize < COPY_THRESHOLD ||
            (src instanceof RandomAccess && dest instanceof RandomAccess)) {
            for (int i = 0; i < srcSize; i++)
                dest.set(i, src.get(i));
        } else {
            ListIterator<? super T> di = dest.listIterator();
            ListIterator<? extends T> si = src.listIterator();
            for (int i = 0; i < srcSize; i++) {
                di.next();
                di.set(si.next());
            }
        }
    }

FYI:

    private static final int FILL_THRESHOLD = 25;
    private static final int COPY_THRESHOLD = 10;
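(For context, this is how the calls behave from user code; a small demo I put together, also showing the copy() size requirement that the implementation checks:)

    import java.util.*;

    public class FillCopyDemo {
        public static void main(String[] args) {
            // dest must already contain at least as many elements as src, since copy() uses set().
            List<String> dest = new ArrayList<>(Collections.nCopies(5, "-"));
            Collections.fill(dest, "x");
            System.out.println(dest);                   // [x, x, x, x, x]

            List<String> src = Arrays.asList("a", "b", "c");
            Collections.copy(dest, src);
            System.out.println(dest);                   // [a, b, c, x, x]

            // Copying into a shorter destination fails, as in the source above:
            // Collections.copy(new ArrayList<>(), src); // IndexOutOfBoundsException
        }
    }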

Same approach below:

    public static void reverse(List<?> list)
    public static void shuffle(List<?> list, Random rnd)
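For reference, reverse() splits on a size threshold in the same way. Below is my own simplified sketch of that pattern (not the exact JDK source; the real constant and details differ):

    import java.util.*;

    public class ReverseSketch {
        // Hypothetical threshold, analogous to FILL_THRESHOLD/COPY_THRESHOLD above.
        private static final int REVERSE_THRESHOLD = 18;

        @SuppressWarnings({"rawtypes", "unchecked"})
        public static void reverse(List<?> list) {
            int size = list.size();
            if (size < REVERSE_THRESHOLD || list instanceof RandomAccess) {
                // Small list or cheap random access: swap elements by index.
                for (int i = 0, mid = size >> 1, j = size - 1; i < mid; i++, j--)
                    Collections.swap(list, i, j);
            } else {
                // Sequential-access list: walk two iterators towards each other,
                // so no element is ever reached via a linear index lookup.
                ListIterator fwd = list.listIterator();
                ListIterator rev = list.listIterator(size);
                for (int i = 0, mid = size >> 1; i < mid; i++) {
                    Object tmp = fwd.next();
                    fwd.set(rev.previous());
                    rev.set(tmp);
                }
            }
        }

        public static void main(String[] args) {
            List<Integer> list = new LinkedList<>(Arrays.asList(1, 2, 3, 4, 5));
            reverse(list);
            System.out.println(list); // [5, 4, 3, 2, 1]
        }
    }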

EDIT:

My confusion is about size < FILL_THRESHOLD, not about RandomAccess.

2 answers

I assume this is because the list passed in is not necessarily an ArrayList, so there is no guarantee that setting an element at an arbitrary index takes constant O(1) time. At the same time, creating an iterator and driving it has its own overhead.

In short, for small lists you give up the lower asymptotic cost that an iterator provides for sequential access, and in return you save the overhead of creating the iterator itself.

That is, list.set(i, obj) may be more expensive than itr.set(obj), but the latter carries the implicit overhead of managing the iterator. Since an index-based set() can be linear, O(n), on a sequential-access list, using list.set(i, obj) on large lists effectively turns the fill into an O(n²) operation, which is a real problem.
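A rough way to see it (a crude one-shot timing rather than a proper JMH benchmark; the class name and sizes are made up for illustration): on a LinkedList the index-based loop degrades to quadratic time, while the iterator-based loop stays linear.

    import java.util.*;

    public class FillCost {
        // Index-based fill: on a LinkedList every set(i, ...) walks from one end, so the loop is O(n^2).
        static <T> void fillByIndex(List<T> list, T obj) {
            for (int i = 0; i < list.size(); i++)
                list.set(i, obj);
        }

        // Iterator-based fill: a single pass, O(n), but it pays for creating and advancing the iterator.
        static <T> void fillByIterator(List<T> list, T obj) {
            for (ListIterator<T> it = list.listIterator(); it.hasNext(); ) {
                it.next();
                it.set(obj);
            }
        }

        public static void main(String[] args) {
            List<Integer> linked = new LinkedList<>(Collections.nCopies(20_000, 0));

            long t0 = System.nanoTime();
            fillByIndex(linked, 1);
            long t1 = System.nanoTime();
            fillByIterator(linked, 2);
            long t2 = System.nanoTime();

            System.out.printf("index: %d ms, iterator: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }
    }

Below FILL_THRESHOLD the quadratic term is negligible, so the plain index loop wins simply by skipping the iterator allocation.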


Java 1.2, which introduced the collections framework, and Java 1.3 took the simple approach with a ListIterator:

    public static void fill(List list, Object o) {
        for (ListIterator i = list.listIterator(); i.hasNext(); ) {
            i.next();
            i.set(o);
        }
    }

The thresholds were introduced along with the RandomAccess marker interface in Java 1.4 (all based on the legacy JDK sources available from the Oracle site).

I think these are optimizations for the old garbage collectors: those older collectors imposed a rather heavy penalty on creating a new object (such as a ListIterator). So using the index-based setter even on lists without O(1) access made sense for small sizes.

Ironically, Java 1.4.1 then introduced a new garbage collector that made allocating short-lived objects, such as an iterator, much cheaper.

